Dec 13 03:07:43 localhost kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec 13 03:07:43 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec 13 03:07:43 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 03:07:43 localhost kernel: BIOS-provided physical RAM map:
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 03:07:43 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec 13 03:07:43 localhost kernel: NX (Execute Disable) protection: active
Dec 13 03:07:43 localhost kernel: APIC: Static calls initialized
Dec 13 03:07:43 localhost kernel: SMBIOS 2.8 present.
Dec 13 03:07:43 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec 13 03:07:43 localhost kernel: Hypervisor detected: KVM
Dec 13 03:07:43 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 03:07:43 localhost kernel: kvm-clock: using sched offset of 3269299905 cycles
Dec 13 03:07:43 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 03:07:43 localhost kernel: tsc: Detected 2799.998 MHz processor
Dec 13 03:07:43 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 03:07:43 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 03:07:43 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec 13 03:07:43 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 03:07:43 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec 13 03:07:43 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec 13 03:07:43 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec 13 03:07:43 localhost kernel: Using GB pages for direct mapping
Dec 13 03:07:43 localhost kernel: RAMDISK: [mem 0x2d46a000-0x32a2cfff]
Dec 13 03:07:43 localhost kernel: ACPI: Early table checksum verification disabled
Dec 13 03:07:43 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 03:07:43 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:07:43 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:07:43 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:07:43 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec 13 03:07:43 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:07:43 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec 13 03:07:43 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec 13 03:07:43 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec 13 03:07:43 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec 13 03:07:43 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec 13 03:07:43 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec 13 03:07:43 localhost kernel: No NUMA configuration found
Dec 13 03:07:43 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec 13 03:07:43 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Dec 13 03:07:43 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec 13 03:07:43 localhost kernel: Zone ranges:
Dec 13 03:07:43 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 03:07:43 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 03:07:43 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec 13 03:07:43 localhost kernel:   Device   empty
Dec 13 03:07:43 localhost kernel: Movable zone start for each node
Dec 13 03:07:43 localhost kernel: Early memory node ranges
Dec 13 03:07:43 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 03:07:43 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec 13 03:07:43 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec 13 03:07:43 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec 13 03:07:43 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 03:07:43 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 03:07:43 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec 13 03:07:43 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 03:07:43 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 03:07:43 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 03:07:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 03:07:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 03:07:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 03:07:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 03:07:43 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 03:07:43 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 03:07:43 localhost kernel: TSC deadline timer available
Dec 13 03:07:43 localhost kernel: CPU topo: Max. logical packages:   8
Dec 13 03:07:43 localhost kernel: CPU topo: Max. logical dies:       8
Dec 13 03:07:43 localhost kernel: CPU topo: Max. dies per package:   1
Dec 13 03:07:43 localhost kernel: CPU topo: Max. threads per core:   1
Dec 13 03:07:43 localhost kernel: CPU topo: Num. cores per package:     1
Dec 13 03:07:43 localhost kernel: CPU topo: Num. threads per package:   1
Dec 13 03:07:43 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec 13 03:07:43 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec 13 03:07:43 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec 13 03:07:43 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec 13 03:07:43 localhost kernel: Booting paravirtualized kernel on KVM
Dec 13 03:07:43 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 03:07:43 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec 13 03:07:43 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec 13 03:07:43 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Dec 13 03:07:43 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Dec 13 03:07:43 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Dec 13 03:07:43 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 03:07:43 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec 13 03:07:43 localhost kernel: random: crng init done
Dec 13 03:07:43 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 03:07:43 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 03:07:43 localhost kernel: Fallback order for Node 0: 0 
Dec 13 03:07:43 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec 13 03:07:43 localhost kernel: Policy zone: Normal
Dec 13 03:07:43 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 03:07:43 localhost kernel: software IO TLB: area num 8.
Dec 13 03:07:43 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec 13 03:07:43 localhost kernel: ftrace: allocating 49357 entries in 193 pages
Dec 13 03:07:43 localhost kernel: ftrace: allocated 193 pages with 3 groups
Dec 13 03:07:43 localhost kernel: Dynamic Preempt: voluntary
Dec 13 03:07:43 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 03:07:43 localhost kernel: rcu:         RCU event tracing is enabled.
Dec 13 03:07:43 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec 13 03:07:43 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Dec 13 03:07:43 localhost kernel:         Rude variant of Tasks RCU enabled.
Dec 13 03:07:43 localhost kernel:         Tracing variant of Tasks RCU enabled.
Dec 13 03:07:43 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 03:07:43 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec 13 03:07:43 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 03:07:43 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 03:07:43 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec 13 03:07:43 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec 13 03:07:43 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 03:07:43 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec 13 03:07:43 localhost kernel: Console: colour VGA+ 80x25
Dec 13 03:07:43 localhost kernel: printk: console [ttyS0] enabled
Dec 13 03:07:43 localhost kernel: ACPI: Core revision 20230331
Dec 13 03:07:43 localhost kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 03:07:43 localhost kernel: x2apic enabled
Dec 13 03:07:43 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 03:07:43 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 03:07:43 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec 13 03:07:43 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 03:07:43 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 03:07:43 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 03:07:43 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 03:07:43 localhost kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 03:07:43 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec 13 03:07:43 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 03:07:43 localhost kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 03:07:43 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 03:07:43 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 03:07:43 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 03:07:43 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 03:07:43 localhost kernel: x86/bugs: return thunk changed
Dec 13 03:07:43 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 03:07:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 03:07:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 03:07:43 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 03:07:43 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec 13 03:07:43 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 03:07:43 localhost kernel: Freeing SMP alternatives memory: 40K
Dec 13 03:07:43 localhost kernel: pid_max: default: 32768 minimum: 301
Dec 13 03:07:43 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec 13 03:07:43 localhost kernel: landlock: Up and running.
Dec 13 03:07:43 localhost kernel: Yama: becoming mindful.
Dec 13 03:07:43 localhost kernel: SELinux:  Initializing.
Dec 13 03:07:43 localhost kernel: LSM support for eBPF active
Dec 13 03:07:43 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 03:07:43 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 03:07:43 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 03:07:43 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 03:07:43 localhost kernel: ... version:                0
Dec 13 03:07:43 localhost kernel: ... bit width:              48
Dec 13 03:07:43 localhost kernel: ... generic registers:      6
Dec 13 03:07:43 localhost kernel: ... value mask:             0000ffffffffffff
Dec 13 03:07:43 localhost kernel: ... max period:             00007fffffffffff
Dec 13 03:07:43 localhost kernel: ... fixed-purpose events:   0
Dec 13 03:07:43 localhost kernel: ... event mask:             000000000000003f
Dec 13 03:07:43 localhost kernel: signal: max sigframe size: 1776
Dec 13 03:07:43 localhost kernel: rcu: Hierarchical SRCU implementation.
Dec 13 03:07:43 localhost kernel: rcu:         Max phase no-delay instances is 400.
Dec 13 03:07:43 localhost kernel: smp: Bringing up secondary CPUs ...
Dec 13 03:07:43 localhost kernel: smpboot: x86: Booting SMP configuration:
Dec 13 03:07:43 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec 13 03:07:43 localhost kernel: smp: Brought up 1 node, 8 CPUs
Dec 13 03:07:43 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec 13 03:07:43 localhost kernel: node 0 deferred pages initialised in 8ms
Dec 13 03:07:43 localhost kernel: Memory: 7763916K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 618228K reserved, 0K cma-reserved)
Dec 13 03:07:43 localhost kernel: devtmpfs: initialized
Dec 13 03:07:43 localhost kernel: x86/mm: Memory block size: 128MB
Dec 13 03:07:43 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 03:07:43 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec 13 03:07:43 localhost kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 03:07:43 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 03:07:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec 13 03:07:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 03:07:43 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 03:07:43 localhost kernel: audit: initializing netlink subsys (disabled)
Dec 13 03:07:43 localhost kernel: audit: type=2000 audit(1765595261.863:1): state=initialized audit_enabled=0 res=1
Dec 13 03:07:43 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec 13 03:07:43 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 03:07:43 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 03:07:43 localhost kernel: cpuidle: using governor menu
Dec 13 03:07:43 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 03:07:43 localhost kernel: PCI: Using configuration type 1 for base access
Dec 13 03:07:43 localhost kernel: PCI: Using configuration type 1 for extended access
Dec 13 03:07:43 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 03:07:43 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 03:07:43 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 03:07:43 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 03:07:43 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 03:07:43 localhost kernel: Demotion targets for Node 0: null
Dec 13 03:07:43 localhost kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 03:07:43 localhost kernel: ACPI: Added _OSI(Module Device)
Dec 13 03:07:43 localhost kernel: ACPI: Added _OSI(Processor Device)
Dec 13 03:07:43 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 03:07:43 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 03:07:43 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 03:07:43 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 03:07:43 localhost kernel: ACPI: Interpreter enabled
Dec 13 03:07:43 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec 13 03:07:43 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 03:07:43 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 03:07:43 localhost kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 03:07:43 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec 13 03:07:43 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 03:07:43 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [3] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [4] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [5] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [6] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [7] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [8] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [9] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [10] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [11] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [12] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [13] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [14] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [15] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [16] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [17] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [18] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [19] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [20] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [21] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [22] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [23] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [24] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [25] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [26] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [27] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [28] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [29] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [30] registered
Dec 13 03:07:43 localhost kernel: acpiphp: Slot [31] registered
Dec 13 03:07:43 localhost kernel: PCI host bridge to bus 0000:00
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 03:07:43 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 03:07:43 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec 13 03:07:43 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec 13 03:07:43 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec 13 03:07:43 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec 13 03:07:43 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec 13 03:07:43 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec 13 03:07:43 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec 13 03:07:43 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec 13 03:07:43 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec 13 03:07:43 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec 13 03:07:43 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec 13 03:07:43 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec 13 03:07:43 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 03:07:43 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 03:07:43 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 03:07:43 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 03:07:43 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 03:07:43 localhost kernel: iommu: Default domain type: Translated
Dec 13 03:07:43 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 03:07:43 localhost kernel: SCSI subsystem initialized
Dec 13 03:07:43 localhost kernel: ACPI: bus type USB registered
Dec 13 03:07:43 localhost kernel: usbcore: registered new interface driver usbfs
Dec 13 03:07:43 localhost kernel: usbcore: registered new interface driver hub
Dec 13 03:07:43 localhost kernel: usbcore: registered new device driver usb
Dec 13 03:07:43 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 03:07:43 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 03:07:43 localhost kernel: PTP clock support registered
Dec 13 03:07:43 localhost kernel: EDAC MC: Ver: 3.0.0
Dec 13 03:07:43 localhost kernel: NetLabel: Initializing
Dec 13 03:07:43 localhost kernel: NetLabel:  domain hash size = 128
Dec 13 03:07:43 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec 13 03:07:43 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Dec 13 03:07:43 localhost kernel: PCI: Using ACPI for IRQ routing
Dec 13 03:07:43 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 03:07:43 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 03:07:43 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec 13 03:07:43 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 03:07:43 localhost kernel: vgaarb: loaded
Dec 13 03:07:43 localhost kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 03:07:43 localhost kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 03:07:43 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 03:07:43 localhost kernel: pnp: PnP ACPI init
Dec 13 03:07:43 localhost kernel: pnp 00:03: [dma 2]
Dec 13 03:07:43 localhost kernel: pnp: PnP ACPI: found 5 devices
Dec 13 03:07:43 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 03:07:43 localhost kernel: NET: Registered PF_INET protocol family
Dec 13 03:07:43 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 03:07:43 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 03:07:43 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 03:07:43 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 03:07:43 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 03:07:43 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 03:07:43 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec 13 03:07:43 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 03:07:43 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 03:07:43 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 03:07:43 localhost kernel: NET: Registered PF_XDP protocol family
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec 13 03:07:43 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec 13 03:07:43 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 03:07:43 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 03:07:43 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73013 usecs
Dec 13 03:07:43 localhost kernel: PCI: CLS 0 bytes, default 64
Dec 13 03:07:43 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 03:07:43 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec 13 03:07:43 localhost kernel: ACPI: bus type thunderbolt registered
Dec 13 03:07:43 localhost kernel: Trying to unpack rootfs image as initramfs...
Dec 13 03:07:43 localhost kernel: Initialise system trusted keyrings
Dec 13 03:07:43 localhost kernel: Key type blacklist registered
Dec 13 03:07:43 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec 13 03:07:43 localhost kernel: zbud: loaded
Dec 13 03:07:43 localhost kernel: integrity: Platform Keyring initialized
Dec 13 03:07:43 localhost kernel: integrity: Machine keyring initialized
Dec 13 03:07:43 localhost kernel: Freeing initrd memory: 87820K
Dec 13 03:07:43 localhost kernel: NET: Registered PF_ALG protocol family
Dec 13 03:07:43 localhost kernel: xor: automatically using best checksumming function   avx       
Dec 13 03:07:43 localhost kernel: Key type asymmetric registered
Dec 13 03:07:43 localhost kernel: Asymmetric key parser 'x509' registered
Dec 13 03:07:43 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec 13 03:07:43 localhost kernel: io scheduler mq-deadline registered
Dec 13 03:07:43 localhost kernel: io scheduler kyber registered
Dec 13 03:07:43 localhost kernel: io scheduler bfq registered
Dec 13 03:07:43 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec 13 03:07:43 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec 13 03:07:43 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec 13 03:07:43 localhost kernel: ACPI: button: Power Button [PWRF]
Dec 13 03:07:43 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec 13 03:07:43 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec 13 03:07:43 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec 13 03:07:43 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 03:07:43 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 03:07:43 localhost kernel: Non-volatile memory driver v1.3
Dec 13 03:07:43 localhost kernel: rdac: device handler registered
Dec 13 03:07:43 localhost kernel: hp_sw: device handler registered
Dec 13 03:07:43 localhost kernel: emc: device handler registered
Dec 13 03:07:43 localhost kernel: alua: device handler registered
Dec 13 03:07:43 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec 13 03:07:43 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec 13 03:07:43 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec 13 03:07:43 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec 13 03:07:43 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec 13 03:07:43 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec 13 03:07:43 localhost kernel: usb usb1: Product: UHCI Host Controller
Dec 13 03:07:43 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec 13 03:07:43 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec 13 03:07:43 localhost kernel: hub 1-0:1.0: USB hub found
Dec 13 03:07:43 localhost kernel: hub 1-0:1.0: 2 ports detected
Dec 13 03:07:43 localhost kernel: usbcore: registered new interface driver usbserial_generic
Dec 13 03:07:43 localhost kernel: usbserial: USB Serial support registered for generic
Dec 13 03:07:43 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 03:07:43 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 03:07:43 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 03:07:43 localhost kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 03:07:43 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 03:07:43 localhost kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 03:07:43 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 03:07:43 localhost kernel: rtc_cmos 00:04: setting system clock to 2025-12-13T03:07:42 UTC (1765595262)
Dec 13 03:07:43 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 03:07:43 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 03:07:43 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 03:07:43 localhost kernel: usbcore: registered new interface driver usbhid
Dec 13 03:07:43 localhost kernel: usbhid: USB HID core driver
Dec 13 03:07:43 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec 13 03:07:43 localhost kernel: drop_monitor: Initializing network drop monitor service
Dec 13 03:07:43 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec 13 03:07:43 localhost kernel: Initializing XFRM netlink socket
Dec 13 03:07:43 localhost kernel: NET: Registered PF_INET6 protocol family
Dec 13 03:07:43 localhost kernel: Segment Routing with IPv6
Dec 13 03:07:43 localhost kernel: NET: Registered PF_PACKET protocol family
Dec 13 03:07:43 localhost kernel: mpls_gso: MPLS GSO support
Dec 13 03:07:43 localhost kernel: IPI shorthand broadcast: enabled
Dec 13 03:07:43 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 03:07:43 localhost kernel: AES CTR mode by8 optimization enabled
Dec 13 03:07:43 localhost kernel: sched_clock: Marking stable (1272003950, 154623715)->(1509535819, -82908154)
Dec 13 03:07:43 localhost kernel: registered taskstats version 1
Dec 13 03:07:43 localhost kernel: Loading compiled-in X.509 certificates
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec 13 03:07:43 localhost kernel: Demotion targets for Node 0: null
Dec 13 03:07:43 localhost kernel: page_owner is disabled
Dec 13 03:07:43 localhost kernel: Key type .fscrypt registered
Dec 13 03:07:43 localhost kernel: Key type fscrypt-provisioning registered
Dec 13 03:07:43 localhost kernel: Key type big_key registered
Dec 13 03:07:43 localhost kernel: Key type encrypted registered
Dec 13 03:07:43 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 03:07:43 localhost kernel: Loading compiled-in module X.509 certificates
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec 13 03:07:43 localhost kernel: ima: Allocated hash algorithm: sha256
Dec 13 03:07:43 localhost kernel: ima: No architecture policies found
Dec 13 03:07:43 localhost kernel: evm: Initialising EVM extended attributes:
Dec 13 03:07:43 localhost kernel: evm: security.selinux
Dec 13 03:07:43 localhost kernel: evm: security.SMACK64 (disabled)
Dec 13 03:07:43 localhost kernel: evm: security.SMACK64EXEC (disabled)
Dec 13 03:07:43 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec 13 03:07:43 localhost kernel: evm: security.SMACK64MMAP (disabled)
Dec 13 03:07:43 localhost kernel: evm: security.apparmor (disabled)
Dec 13 03:07:43 localhost kernel: evm: security.ima
Dec 13 03:07:43 localhost kernel: evm: security.capability
Dec 13 03:07:43 localhost kernel: evm: HMAC attrs: 0x1
Dec 13 03:07:43 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec 13 03:07:43 localhost kernel: Running certificate verification RSA selftest
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec 13 03:07:43 localhost kernel: Running certificate verification ECDSA selftest
Dec 13 03:07:43 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec 13 03:07:43 localhost kernel: clk: Disabling unused clocks
Dec 13 03:07:43 localhost kernel: Freeing unused decrypted memory: 2028K
Dec 13 03:07:43 localhost kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec 13 03:07:43 localhost kernel: Write protecting the kernel read-only data: 30720k
Dec 13 03:07:43 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec 13 03:07:43 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec 13 03:07:43 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec 13 03:07:43 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Dec 13 03:07:43 localhost kernel: usb 1-1: Manufacturer: QEMU
Dec 13 03:07:43 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec 13 03:07:43 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec 13 03:07:43 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec 13 03:07:43 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec 13 03:07:43 localhost kernel: Run /init as init process
Dec 13 03:07:43 localhost kernel:   with arguments:
Dec 13 03:07:43 localhost kernel:     /init
Dec 13 03:07:43 localhost kernel:   with environment:
Dec 13 03:07:43 localhost kernel:     HOME=/
Dec 13 03:07:43 localhost kernel:     TERM=linux
Dec 13 03:07:43 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64
Dec 13 03:07:43 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:07:43 localhost systemd[1]: Detected virtualization kvm.
Dec 13 03:07:43 localhost systemd[1]: Detected architecture x86-64.
Dec 13 03:07:43 localhost systemd[1]: Running in initrd.
Dec 13 03:07:43 localhost systemd[1]: No hostname configured, using default hostname.
Dec 13 03:07:43 localhost systemd[1]: Hostname set to <localhost>.
Dec 13 03:07:43 localhost systemd[1]: Initializing machine ID from VM UUID.
Dec 13 03:07:43 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Dec 13 03:07:43 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 03:07:43 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 13 03:07:43 localhost systemd[1]: Reached target Initrd /usr File System.
Dec 13 03:07:43 localhost systemd[1]: Reached target Local File Systems.
Dec 13 03:07:43 localhost systemd[1]: Reached target Path Units.
Dec 13 03:07:43 localhost systemd[1]: Reached target Slice Units.
Dec 13 03:07:43 localhost systemd[1]: Reached target Swaps.
Dec 13 03:07:43 localhost systemd[1]: Reached target Timer Units.
Dec 13 03:07:43 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 13 03:07:43 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Dec 13 03:07:43 localhost systemd[1]: Listening on Journal Socket.
Dec 13 03:07:43 localhost systemd[1]: Listening on udev Control Socket.
Dec 13 03:07:43 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 13 03:07:43 localhost systemd[1]: Reached target Socket Units.
Dec 13 03:07:43 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 13 03:07:43 localhost systemd[1]: Starting Journal Service...
Dec 13 03:07:43 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 03:07:43 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 13 03:07:43 localhost systemd[1]: Starting Create System Users...
Dec 13 03:07:43 localhost systemd[1]: Starting Setup Virtual Console...
Dec 13 03:07:43 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 13 03:07:43 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 13 03:07:43 localhost systemd[1]: Finished Create System Users.
Dec 13 03:07:43 localhost systemd-journald[304]: Journal started
Dec 13 03:07:43 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/90cce6d2aa094bc1a87efb31e9108c78) is 8.0M, max 153.6M, 145.6M free.
Dec 13 03:07:43 localhost systemd-sysusers[309]: Creating group 'users' with GID 100.
Dec 13 03:07:43 localhost systemd-sysusers[309]: Creating group 'dbus' with GID 81.
Dec 13 03:07:43 localhost systemd-sysusers[309]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec 13 03:07:43 localhost systemd[1]: Started Journal Service.
Dec 13 03:07:43 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 03:07:43 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 03:07:43 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 03:07:43 localhost systemd[1]: Finished Setup Virtual Console.
Dec 13 03:07:43 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec 13 03:07:43 localhost systemd[1]: Starting dracut cmdline hook...
Dec 13 03:07:43 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 03:07:43 localhost dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Dec 13 03:07:43 localhost dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=cbdedf45-ed1d-4952-82a8-33a12c0ba266 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec 13 03:07:43 localhost systemd[1]: Finished dracut cmdline hook.
Dec 13 03:07:43 localhost systemd[1]: Starting dracut pre-udev hook...
Dec 13 03:07:43 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 03:07:43 localhost kernel: device-mapper: uevent: version 1.0.3
Dec 13 03:07:43 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec 13 03:07:43 localhost kernel: RPC: Registered named UNIX socket transport module.
Dec 13 03:07:43 localhost kernel: RPC: Registered udp transport module.
Dec 13 03:07:43 localhost kernel: RPC: Registered tcp transport module.
Dec 13 03:07:43 localhost kernel: RPC: Registered tcp-with-tls transport module.
Dec 13 03:07:43 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 03:07:43 localhost rpc.statd[441]: Version 2.5.4 starting
Dec 13 03:07:43 localhost rpc.statd[441]: Initializing NSM state
Dec 13 03:07:43 localhost rpc.idmapd[446]: Setting log level to 0
Dec 13 03:07:43 localhost systemd[1]: Finished dracut pre-udev hook.
Dec 13 03:07:43 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 03:07:43 localhost systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 03:07:43 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 03:07:43 localhost systemd[1]: Starting dracut pre-trigger hook...
Dec 13 03:07:43 localhost systemd[1]: Finished dracut pre-trigger hook.
Dec 13 03:07:43 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 13 03:07:43 localhost systemd[1]: Created slice Slice /system/modprobe.
Dec 13 03:07:43 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 13 03:07:43 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 13 03:07:43 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 03:07:43 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 13 03:07:43 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 03:07:43 localhost systemd[1]: Reached target Network.
Dec 13 03:07:43 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec 13 03:07:43 localhost systemd[1]: Starting dracut initqueue hook...
Dec 13 03:07:43 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec 13 03:07:43 localhost kernel: libata version 3.00 loaded.
Dec 13 03:07:43 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Dec 13 03:07:43 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec 13 03:07:43 localhost kernel: scsi host0: ata_piix
Dec 13 03:07:43 localhost kernel: scsi host1: ata_piix
Dec 13 03:07:43 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec 13 03:07:43 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec 13 03:07:43 localhost kernel:  vda: vda1
Dec 13 03:07:44 localhost kernel: ata1: found unknown device (class 0)
Dec 13 03:07:44 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 03:07:44 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec 13 03:07:44 localhost systemd-udevd[482]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:07:44 localhost systemd[1]: Found device /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 03:07:44 localhost systemd[1]: Reached target Initrd Root Device.
Dec 13 03:07:44 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec 13 03:07:44 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 03:07:44 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 03:07:44 localhost systemd[1]: Mounting Kernel Configuration File System...
Dec 13 03:07:44 localhost systemd[1]: Mounted Kernel Configuration File System.
Dec 13 03:07:44 localhost systemd[1]: Reached target System Initialization.
Dec 13 03:07:44 localhost systemd[1]: Reached target Basic System.
Dec 13 03:07:44 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Dec 13 03:07:44 localhost systemd[1]: Finished dracut initqueue hook.
Dec 13 03:07:44 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 03:07:44 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Dec 13 03:07:44 localhost systemd[1]: Reached target Remote File Systems.
Dec 13 03:07:44 localhost systemd[1]: Starting dracut pre-mount hook...
Dec 13 03:07:44 localhost systemd[1]: Finished dracut pre-mount hook.
Dec 13 03:07:44 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266...
Dec 13 03:07:44 localhost systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Dec 13 03:07:44 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266.
Dec 13 03:07:44 localhost systemd[1]: Mounting /sysroot...
Dec 13 03:07:44 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec 13 03:07:44 localhost kernel: XFS (vda1): Mounting V5 Filesystem cbdedf45-ed1d-4952-82a8-33a12c0ba266
Dec 13 03:07:44 localhost kernel: XFS (vda1): Ending clean mount
Dec 13 03:07:44 localhost systemd[1]: Mounted /sysroot.
Dec 13 03:07:44 localhost systemd[1]: Reached target Initrd Root File System.
Dec 13 03:07:44 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec 13 03:07:44 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 03:07:44 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec 13 03:07:44 localhost systemd[1]: Reached target Initrd File Systems.
Dec 13 03:07:44 localhost systemd[1]: Reached target Initrd Default Target.
Dec 13 03:07:44 localhost systemd[1]: Starting dracut mount hook...
Dec 13 03:07:44 localhost systemd[1]: Finished dracut mount hook.
Dec 13 03:07:44 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec 13 03:07:45 localhost rpc.idmapd[446]: exiting on signal 15
Dec 13 03:07:45 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec 13 03:07:45 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec 13 03:07:45 localhost systemd[1]: Stopped target Network.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Timer Units.
Dec 13 03:07:45 localhost systemd[1]: dbus.socket: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Dec 13 03:07:45 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Initrd Default Target.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Basic System.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Initrd Root Device.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Initrd /usr File System.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Path Units.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Remote File Systems.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Slice Units.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Socket Units.
Dec 13 03:07:45 localhost systemd[1]: Stopped target System Initialization.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Local File Systems.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Swaps.
Dec 13 03:07:45 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped dracut mount hook.
Dec 13 03:07:45 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped dracut pre-mount hook.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Dec 13 03:07:45 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec 13 03:07:45 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped dracut initqueue hook.
Dec 13 03:07:45 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Apply Kernel Variables.
Dec 13 03:07:45 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Dec 13 03:07:45 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Coldplug All udev Devices.
Dec 13 03:07:45 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped dracut pre-trigger hook.
Dec 13 03:07:45 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec 13 03:07:45 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Setup Virtual Console.
Dec 13 03:07:45 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec 13 03:07:45 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec 13 03:07:45 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Closed udev Control Socket.
Dec 13 03:07:45 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Closed udev Kernel Socket.
Dec 13 03:07:45 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped dracut pre-udev hook.
Dec 13 03:07:45 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped dracut cmdline hook.
Dec 13 03:07:45 localhost systemd[1]: Starting Cleanup udev Database...
Dec 13 03:07:45 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec 13 03:07:45 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Dec 13 03:07:45 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Create System Users.
Dec 13 03:07:45 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Finished Cleanup udev Database.
Dec 13 03:07:45 localhost systemd[1]: Reached target Switch Root.
Dec 13 03:07:45 localhost systemd[1]: Starting Switch Root...
Dec 13 03:07:45 localhost systemd[1]: Switching root.
Dec 13 03:07:45 localhost systemd-journald[304]: Journal stopped
Dec 13 03:07:45 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Dec 13 03:07:45 localhost kernel: audit: type=1404 audit(1765595265.295:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec 13 03:07:45 localhost kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:07:45 localhost kernel: SELinux:  policy capability open_perms=1
Dec 13 03:07:45 localhost kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:07:45 localhost kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:07:45 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:07:45 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:07:45 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:07:45 localhost kernel: audit: type=1403 audit(1765595265.439:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 03:07:45 localhost systemd[1]: Successfully loaded SELinux policy in 149.174ms.
Dec 13 03:07:45 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.073ms.
Dec 13 03:07:45 localhost systemd[1]: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 03:07:45 localhost systemd[1]: Detected virtualization kvm.
Dec 13 03:07:45 localhost systemd[1]: Detected architecture x86-64.
Dec 13 03:07:45 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:07:45 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped Switch Root.
Dec 13 03:07:45 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 03:07:45 localhost systemd[1]: Created slice Slice /system/getty.
Dec 13 03:07:45 localhost systemd[1]: Created slice Slice /system/serial-getty.
Dec 13 03:07:45 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Dec 13 03:07:45 localhost systemd[1]: Created slice User and Session Slice.
Dec 13 03:07:45 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Dec 13 03:07:45 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Dec 13 03:07:45 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec 13 03:07:45 localhost systemd[1]: Reached target Local Encrypted Volumes.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Switch Root.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Initrd File Systems.
Dec 13 03:07:45 localhost systemd[1]: Stopped target Initrd Root File System.
Dec 13 03:07:45 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Dec 13 03:07:45 localhost systemd[1]: Reached target Path Units.
Dec 13 03:07:45 localhost systemd[1]: Reached target rpc_pipefs.target.
Dec 13 03:07:45 localhost systemd[1]: Reached target Slice Units.
Dec 13 03:07:45 localhost systemd[1]: Reached target Swaps.
Dec 13 03:07:45 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Dec 13 03:07:45 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Dec 13 03:07:45 localhost systemd[1]: Reached target RPC Port Mapper.
Dec 13 03:07:45 localhost systemd[1]: Listening on Process Core Dump Socket.
Dec 13 03:07:45 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Dec 13 03:07:45 localhost systemd[1]: Listening on udev Control Socket.
Dec 13 03:07:45 localhost systemd[1]: Listening on udev Kernel Socket.
Dec 13 03:07:45 localhost systemd[1]: Mounting Huge Pages File System...
Dec 13 03:07:45 localhost systemd[1]: Mounting POSIX Message Queue File System...
Dec 13 03:07:45 localhost systemd[1]: Mounting Kernel Debug File System...
Dec 13 03:07:45 localhost systemd[1]: Mounting Kernel Trace File System...
Dec 13 03:07:45 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 03:07:45 localhost systemd[1]: Starting Create List of Static Device Nodes...
Dec 13 03:07:45 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 13 03:07:45 localhost systemd[1]: Starting Load Kernel Module drm...
Dec 13 03:07:45 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Dec 13 03:07:45 localhost systemd[1]: Starting Load Kernel Module fuse...
Dec 13 03:07:45 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec 13 03:07:45 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Stopped File System Check on Root Device.
Dec 13 03:07:45 localhost systemd[1]: Stopped Journal Service.
Dec 13 03:07:45 localhost systemd[1]: Starting Journal Service...
Dec 13 03:07:45 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec 13 03:07:45 localhost systemd[1]: Starting Generate network units from Kernel command line...
Dec 13 03:07:45 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:07:45 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Dec 13 03:07:45 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 03:07:45 localhost systemd[1]: Starting Apply Kernel Variables...
Dec 13 03:07:45 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec 13 03:07:45 localhost kernel: fuse: init (API version 7.37)
Dec 13 03:07:45 localhost systemd[1]: Starting Coldplug All udev Devices...
Dec 13 03:07:45 localhost systemd[1]: Mounted Huge Pages File System.
Dec 13 03:07:45 localhost systemd[1]: Mounted POSIX Message Queue File System.
Dec 13 03:07:45 localhost systemd[1]: Mounted Kernel Debug File System.
Dec 13 03:07:45 localhost systemd[1]: Mounted Kernel Trace File System.
Dec 13 03:07:45 localhost systemd[1]: Finished Create List of Static Device Nodes.
Dec 13 03:07:45 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 13 03:07:45 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Dec 13 03:07:45 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Finished Load Kernel Module fuse.
Dec 13 03:07:45 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec 13 03:07:45 localhost kernel: ACPI: bus type drm_connector registered
Dec 13 03:07:45 localhost systemd-journald[679]: Journal started
Dec 13 03:07:45 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 03:07:45 localhost systemd[1]: Queued start job for default target Multi-User System.
Dec 13 03:07:45 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Started Journal Service.
Dec 13 03:07:45 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 03:07:45 localhost systemd[1]: Finished Load Kernel Module drm.
Dec 13 03:07:45 localhost systemd[1]: Finished Generate network units from Kernel command line.
Dec 13 03:07:45 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Dec 13 03:07:45 localhost systemd[1]: Finished Apply Kernel Variables.
Dec 13 03:07:46 localhost systemd[1]: Mounting FUSE Control File System...
Dec 13 03:07:46 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 03:07:46 localhost systemd[1]: Starting Rebuild Hardware Database...
Dec 13 03:07:46 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Dec 13 03:07:46 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 03:07:46 localhost systemd[1]: Starting Load/Save OS Random Seed...
Dec 13 03:07:46 localhost systemd[1]: Starting Create System Users...
Dec 13 03:07:46 localhost systemd[1]: Mounted FUSE Control File System.
Dec 13 03:07:46 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/64f1d6692049d8be5e8b216cc203502c) is 8.0M, max 153.6M, 145.6M free.
Dec 13 03:07:46 localhost systemd-journald[679]: Received client request to flush runtime journal.
Dec 13 03:07:46 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Dec 13 03:07:46 localhost systemd[1]: Finished Load/Save OS Random Seed.
Dec 13 03:07:46 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec 13 03:07:46 localhost systemd[1]: Finished Create System Users.
Dec 13 03:07:46 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Dec 13 03:07:46 localhost systemd[1]: Finished Coldplug All udev Devices.
Dec 13 03:07:46 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Dec 13 03:07:46 localhost systemd[1]: Reached target Preparation for Local File Systems.
Dec 13 03:07:46 localhost systemd[1]: Reached target Local File Systems.
Dec 13 03:07:46 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec 13 03:07:46 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec 13 03:07:46 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 03:07:46 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec 13 03:07:46 localhost systemd[1]: Starting Automatic Boot Loader Update...
Dec 13 03:07:46 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec 13 03:07:46 localhost systemd[1]: Starting Create Volatile Files and Directories...
Dec 13 03:07:46 localhost bootctl[696]: Couldn't find EFI system partition, skipping.
Dec 13 03:07:46 localhost systemd[1]: Finished Automatic Boot Loader Update.
Dec 13 03:07:46 localhost systemd[1]: Finished Create Volatile Files and Directories.
Dec 13 03:07:46 localhost systemd[1]: Starting Security Auditing Service...
Dec 13 03:07:46 localhost systemd[1]: Starting RPC Bind...
Dec 13 03:07:46 localhost systemd[1]: Starting Rebuild Journal Catalog...
Dec 13 03:07:46 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec 13 03:07:46 localhost auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec 13 03:07:46 localhost auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec 13 03:07:46 localhost systemd[1]: Finished Rebuild Journal Catalog.
Dec 13 03:07:46 localhost systemd[1]: Started RPC Bind.
Dec 13 03:07:46 localhost augenrules[707]: /sbin/augenrules: No change
Dec 13 03:07:46 localhost augenrules[722]: No rules
Dec 13 03:07:46 localhost augenrules[722]: enabled 1
Dec 13 03:07:46 localhost augenrules[722]: failure 1
Dec 13 03:07:46 localhost augenrules[722]: pid 702
Dec 13 03:07:46 localhost augenrules[722]: rate_limit 0
Dec 13 03:07:46 localhost augenrules[722]: backlog_limit 8192
Dec 13 03:07:46 localhost augenrules[722]: lost 0
Dec 13 03:07:46 localhost augenrules[722]: backlog 3
Dec 13 03:07:46 localhost augenrules[722]: backlog_wait_time 60000
Dec 13 03:07:46 localhost augenrules[722]: backlog_wait_time_actual 0
Dec 13 03:07:46 localhost augenrules[722]: enabled 1
Dec 13 03:07:46 localhost augenrules[722]: failure 1
Dec 13 03:07:46 localhost augenrules[722]: pid 702
Dec 13 03:07:46 localhost augenrules[722]: rate_limit 0
Dec 13 03:07:46 localhost augenrules[722]: backlog_limit 8192
Dec 13 03:07:46 localhost augenrules[722]: lost 0
Dec 13 03:07:46 localhost augenrules[722]: backlog 0
Dec 13 03:07:46 localhost augenrules[722]: backlog_wait_time 60000
Dec 13 03:07:46 localhost augenrules[722]: backlog_wait_time_actual 0
Dec 13 03:07:46 localhost augenrules[722]: enabled 1
Dec 13 03:07:46 localhost augenrules[722]: failure 1
Dec 13 03:07:46 localhost augenrules[722]: pid 702
Dec 13 03:07:46 localhost augenrules[722]: rate_limit 0
Dec 13 03:07:46 localhost augenrules[722]: backlog_limit 8192
Dec 13 03:07:46 localhost augenrules[722]: lost 0
Dec 13 03:07:46 localhost augenrules[722]: backlog 3
Dec 13 03:07:46 localhost augenrules[722]: backlog_wait_time 60000
Dec 13 03:07:46 localhost augenrules[722]: backlog_wait_time_actual 0
Dec 13 03:07:46 localhost systemd[1]: Started Security Auditing Service.
Dec 13 03:07:46 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec 13 03:07:46 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec 13 03:07:46 localhost systemd[1]: Finished Rebuild Hardware Database.
Dec 13 03:07:46 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec 13 03:07:46 localhost systemd[1]: Starting Update is Completed...
Dec 13 03:07:46 localhost systemd[1]: Finished Update is Completed.
Dec 13 03:07:46 localhost systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Dec 13 03:07:46 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec 13 03:07:46 localhost systemd[1]: Reached target System Initialization.
Dec 13 03:07:46 localhost systemd[1]: Started dnf makecache --timer.
Dec 13 03:07:46 localhost systemd[1]: Started Daily rotation of log files.
Dec 13 03:07:46 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec 13 03:07:46 localhost systemd[1]: Reached target Timer Units.
Dec 13 03:07:46 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec 13 03:07:46 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec 13 03:07:46 localhost systemd[1]: Reached target Socket Units.
Dec 13 03:07:46 localhost systemd[1]: Starting D-Bus System Message Bus...
Dec 13 03:07:46 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:07:46 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec 13 03:07:46 localhost systemd[1]: Starting Load Kernel Module configfs...
Dec 13 03:07:46 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 03:07:46 localhost systemd[1]: Finished Load Kernel Module configfs.
Dec 13 03:07:46 localhost systemd-udevd[737]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:07:46 localhost systemd[1]: Started D-Bus System Message Bus.
Dec 13 03:07:46 localhost systemd[1]: Reached target Basic System.
Dec 13 03:07:46 localhost dbus-broker-lau[748]: Ready
Dec 13 03:07:46 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec 13 03:07:46 localhost systemd[1]: Starting NTP client/server...
Dec 13 03:07:46 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec 13 03:07:46 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 03:07:46 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 03:07:46 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec 13 03:07:46 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec 13 03:07:46 localhost systemd[1]: Starting IPv4 firewall with iptables...
Dec 13 03:07:46 localhost chronyd[786]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 03:07:46 localhost chronyd[786]: Loaded 0 symmetric keys
Dec 13 03:07:46 localhost chronyd[786]: Using right/UTC timezone to obtain leap second data
Dec 13 03:07:46 localhost chronyd[786]: Loaded seccomp filter (level 2)
Dec 13 03:07:46 localhost systemd[1]: Started irqbalance daemon.
Dec 13 03:07:46 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec 13 03:07:46 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 03:07:46 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 03:07:46 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 03:07:46 localhost systemd[1]: Reached target sshd-keygen.target.
Dec 13 03:07:46 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec 13 03:07:46 localhost systemd[1]: Reached target User and Group Name Lookups.
Dec 13 03:07:46 localhost systemd[1]: Starting User Login Management...
Dec 13 03:07:46 localhost systemd[1]: Started NTP client/server.
Dec 13 03:07:46 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec 13 03:07:47 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec 13 03:07:47 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec 13 03:07:47 localhost systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 03:07:47 localhost systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 03:07:47 localhost kernel: kvm_amd: TSC scaling supported
Dec 13 03:07:47 localhost kernel: kvm_amd: Nested Virtualization enabled
Dec 13 03:07:47 localhost kernel: kvm_amd: Nested Paging enabled
Dec 13 03:07:47 localhost kernel: kvm_amd: LBR virtualization supported
Dec 13 03:07:47 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec 13 03:07:47 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec 13 03:07:47 localhost systemd-logind[796]: New seat seat0.
Dec 13 03:07:47 localhost systemd[1]: Started User Login Management.
Dec 13 03:07:47 localhost kernel: Console: switching to colour dummy device 80x25
Dec 13 03:07:47 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec 13 03:07:47 localhost kernel: [drm] features: -context_init
Dec 13 03:07:47 localhost kernel: [drm] number of scanouts: 1
Dec 13 03:07:47 localhost kernel: [drm] number of cap sets: 0
Dec 13 03:07:47 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec 13 03:07:47 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec 13 03:07:47 localhost kernel: Console: switching to colour frame buffer device 128x48
Dec 13 03:07:47 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec 13 03:07:47 localhost iptables.init[779]: iptables: Applying firewall rules: [  OK  ]
Dec 13 03:07:47 localhost systemd[1]: Finished IPv4 firewall with iptables.
Dec 13 03:07:47 localhost cloud-init[839]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 13 Dec 2025 03:07:47 +0000. Up 6.01 seconds.
Dec 13 03:07:47 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Dec 13 03:07:47 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Dec 13 03:07:47 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpygtpafb8.mount: Deactivated successfully.
Dec 13 03:07:47 localhost systemd[1]: Starting Hostname Service...
Dec 13 03:07:47 localhost systemd[1]: Started Hostname Service.
Dec 13 03:07:47 np0005557965.novalocal systemd-hostnamed[853]: Hostname set to <np0005557965.novalocal> (static)
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Reached target Preparation for Network.
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Starting Network Manager...
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7675] NetworkManager (version 1.54.2-1.el9) is starting... (boot:9ad30d9f-3581-40c3-b7be-ea7e23726ec3)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7681] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7745] manager[0x55b48dcb2000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7781] hostname: hostname: using hostnamed
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7781] hostname: static hostname changed from (none) to "np0005557965.novalocal"
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7785] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7952] manager[0x55b48dcb2000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.7953] manager[0x55b48dcb2000]: rfkill: WWAN hardware radio set enabled
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8002] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8002] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8003] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8004] manager: Networking is enabled by state file
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8007] settings: Loaded settings plugin: keyfile (internal)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8024] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8047] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8063] dhcp: init: Using DHCP client 'internal'
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8066] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8079] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8086] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8094] device (lo): Activation: starting connection 'lo' (1b002e4b-157a-4f55-90b2-cce7be34ec02)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8104] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8106] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8143] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8147] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8149] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8150] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8152] device (eth0): carrier: link connected
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8154] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8160] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8166] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8171] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8172] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8174] manager: NetworkManager state is now CONNECTING
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8175] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8181] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8185] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8224] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8230] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Started Network Manager.
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8246] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Reached target Network.
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8452] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8454] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8460] device (lo): Activation: successful, device activated.
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8469] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8471] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8473] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8475] device (eth0): Activation: successful, device activated.
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8480] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 03:07:47 np0005557965.novalocal NetworkManager[857]: <info>  [1765595267.8482] manager: startup complete
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Reached target NFS client services.
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Reached target Remote File Systems.
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 13 03:07:47 np0005557965.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 13 Dec 2025 03:07:48 +0000. Up 6.90 seconds.
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.158         | 255.255.255.0 | global | fa:16:3e:5d:1f:6c |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe5d:1f6c/64 |       .       |  link  | fa:16:3e:5d:1f:6c |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Dec 13 03:07:48 np0005557965.novalocal cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Dec 13 03:07:49 np0005557965.novalocal useradd[986]: new group: name=cloud-user, GID=1001
Dec 13 03:07:49 np0005557965.novalocal useradd[986]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Dec 13 03:07:49 np0005557965.novalocal useradd[986]: add 'cloud-user' to group 'adm'
Dec 13 03:07:49 np0005557965.novalocal useradd[986]: add 'cloud-user' to group 'systemd-journal'
Dec 13 03:07:49 np0005557965.novalocal useradd[986]: add 'cloud-user' to shadow group 'adm'
Dec 13 03:07:49 np0005557965.novalocal useradd[986]: add 'cloud-user' to shadow group 'systemd-journal'
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Generating public/private rsa key pair.
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: The key fingerprint is:
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: SHA256:UsYtVNIPvw2oP8oaFAMWUv+GgdHBJyaNjj+j5bnTpkM root@np0005557965.novalocal
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: The key's randomart image is:
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: +---[RSA 3072]----+
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |  ..**..oo.      |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |   oo=*o.oo      |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |   o.o=o= .=     |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |  . .  O .. +    |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |   .  + S.   +   |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |    E. o.   . .  |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |   = =.  .       |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |  . = oo  o      |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |    o*..o. .     |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: +----[SHA256]-----+
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Generating public/private ecdsa key pair.
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: The key fingerprint is:
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: SHA256:tYkvCENMM3vjeeS6Fck8C6+/NtV7qEDJqTIl5/6iH+0 root@np0005557965.novalocal
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: The key's randomart image is:
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: +---[ECDSA 256]---+
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |    +            |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |   o +           |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |    + o . .      |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |   . o O * o     |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |    + * S o.     |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |     B X =. .    |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |    o * B..  o   |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |     +.*oo  o .  |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |    .o==Eo.. .   |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: +----[SHA256]-----+
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Generating public/private ed25519 key pair.
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: The key fingerprint is:
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: SHA256:FNfFOw1D1IIUpU7rLfL+aBXRrLtO0mXpEBOXop7kReQ root@np0005557965.novalocal
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: The key's randomart image is:
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: +--[ED25519 256]--+
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |        . .o+@==.|
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |         o  ++O.+|
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |        .   =E.O |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |       .   = oB o|
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |        S + =. =o|
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |           = o=o |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |          . +.+o |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |           o.=.  |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: |           o+oo  |
Dec 13 03:07:49 np0005557965.novalocal cloud-init[920]: +----[SHA256]-----+
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Reached target Cloud-config availability.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Reached target Network is Online.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Starting Crash recovery kernel arming...
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Starting System Logging Service...
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Starting OpenSSH server daemon...
Dec 13 03:07:49 np0005557965.novalocal sm-notify[1003]: Version 2.5.4 starting
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Starting Permit User Sessions...
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Started Notify NFS peers of a restart.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Finished Permit User Sessions.
Dec 13 03:07:49 np0005557965.novalocal sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 13 03:07:49 np0005557965.novalocal sshd[1005]: Server listening on :: port 22.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Started Command Scheduler.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Started Getty on tty1.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Started Serial Getty on ttyS0.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Reached target Login Prompts.
Dec 13 03:07:49 np0005557965.novalocal crond[1009]: (CRON) STARTUP (1.5.7)
Dec 13 03:07:49 np0005557965.novalocal crond[1009]: (CRON) INFO (Syslog will be used instead of sendmail.)
Dec 13 03:07:49 np0005557965.novalocal crond[1009]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 32% if used.)
Dec 13 03:07:49 np0005557965.novalocal crond[1009]: (CRON) INFO (running with inotify support)
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Started OpenSSH server daemon.
Dec 13 03:07:49 np0005557965.novalocal rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Dec 13 03:07:49 np0005557965.novalocal rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Started System Logging Service.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Reached target Multi-User System.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 03:07:49 np0005557965.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Dec 13 03:07:49 np0005557965.novalocal rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:07:49 np0005557965.novalocal kdumpctl[1014]: kdump: No kdump initial ramdisk found.
Dec 13 03:07:49 np0005557965.novalocal kdumpctl[1014]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1132]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 13 Dec 2025 03:07:49 +0000. Up 8.70 seconds.
Dec 13 03:07:50 np0005557965.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Dec 13 03:07:50 np0005557965.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Dec 13 03:07:50 np0005557965.novalocal dracut[1265]: dracut-057-102.git20250818.el9
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1283]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 13 Dec 2025 03:07:50 +0000. Up 9.10 seconds.
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1295]: #############################################################
Dec 13 03:07:50 np0005557965.novalocal dracut[1267]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/cbdedf45-ed1d-4952-82a8-33a12c0ba266 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1299]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1308]: 256 SHA256:tYkvCENMM3vjeeS6Fck8C6+/NtV7qEDJqTIl5/6iH+0 root@np0005557965.novalocal (ECDSA)
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1317]: 256 SHA256:FNfFOw1D1IIUpU7rLfL+aBXRrLtO0mXpEBOXop7kReQ root@np0005557965.novalocal (ED25519)
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1320]: 3072 SHA256:UsYtVNIPvw2oP8oaFAMWUv+GgdHBJyaNjj+j5bnTpkM root@np0005557965.novalocal (RSA)
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1323]: -----END SSH HOST KEY FINGERPRINTS-----
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1327]: #############################################################
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1347]: Connection reset by 38.102.83.114 port 35372 [preauth]
Dec 13 03:07:50 np0005557965.novalocal cloud-init[1283]: Cloud-init v. 24.4-7.el9 finished at Sat, 13 Dec 2025 03:07:50 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 9.29 seconds
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1352]: Unable to negotiate with 38.102.83.114 port 35378: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1354]: Connection reset by 38.102.83.114 port 35394 [preauth]
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1359]: Unable to negotiate with 38.102.83.114 port 35400: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Dec 13 03:07:50 np0005557965.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Dec 13 03:07:50 np0005557965.novalocal systemd[1]: Reached target Cloud-init target.
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1366]: Unable to negotiate with 38.102.83.114 port 35408: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1369]: Connection reset by 38.102.83.114 port 35420 [preauth]
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1382]: Unable to negotiate with 38.102.83.114 port 35434: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1387]: Unable to negotiate with 38.102.83.114 port 35440: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Dec 13 03:07:50 np0005557965.novalocal sshd-session[1377]: Connection closed by 38.102.83.114 port 35424 [preauth]
Dec 13 03:07:50 np0005557965.novalocal dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: Module 'resume' will not be installed, because it's in the list to be omitted!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: memstrack is not available
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: memstrack is not available
Dec 13 03:07:51 np0005557965.novalocal dracut[1267]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec 13 03:07:52 np0005557965.novalocal dracut[1267]: *** Including module: systemd ***
Dec 13 03:07:52 np0005557965.novalocal dracut[1267]: *** Including module: fips ***
Dec 13 03:07:52 np0005557965.novalocal chronyd[786]: Selected source 206.108.0.131 (2.centos.pool.ntp.org)
Dec 13 03:07:52 np0005557965.novalocal chronyd[786]: System clock TAI offset set to 37 seconds
Dec 13 03:07:52 np0005557965.novalocal dracut[1267]: *** Including module: systemd-initrd ***
Dec 13 03:07:52 np0005557965.novalocal dracut[1267]: *** Including module: i18n ***
Dec 13 03:07:53 np0005557965.novalocal dracut[1267]: *** Including module: drm ***
Dec 13 03:07:53 np0005557965.novalocal dracut[1267]: *** Including module: prefixdevname ***
Dec 13 03:07:53 np0005557965.novalocal dracut[1267]: *** Including module: kernel-modules ***
Dec 13 03:07:53 np0005557965.novalocal kernel: block vda: the capability attribute has been deprecated.
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]: *** Including module: kernel-modules-extra ***
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]: *** Including module: qemu ***
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]: *** Including module: fstab-sys ***
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]: *** Including module: rootfs-block ***
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]: *** Including module: terminfo ***
Dec 13 03:07:54 np0005557965.novalocal dracut[1267]: *** Including module: udev-rules ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: Skipping udev rule: 91-permissions.rules
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: Skipping udev rule: 80-drivers-modprobe.rules
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: *** Including module: virtiofs ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: *** Including module: dracut-systemd ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: *** Including module: usrmount ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: *** Including module: base ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: *** Including module: fs-lib ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: *** Including module: kdumpbase ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]: *** Including module: microcode_ctl-fw_dir_override ***
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]:   microcode_ctl module: mangling fw_dir
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec 13 03:07:55 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]: *** Including module: openssl ***
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]: *** Including module: shutdown ***
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]: *** Including module: squash ***
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]: *** Including modules done ***
Dec 13 03:07:56 np0005557965.novalocal dracut[1267]: *** Installing kernel module dependencies ***
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: IRQ 25 affinity is now unmanaged
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: IRQ 31 affinity is now unmanaged
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: IRQ 28 affinity is now unmanaged
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: IRQ 32 affinity is now unmanaged
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: IRQ 30 affinity is now unmanaged
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Dec 13 03:07:57 np0005557965.novalocal irqbalance[789]: IRQ 29 affinity is now unmanaged
Dec 13 03:07:57 np0005557965.novalocal dracut[1267]: *** Installing kernel module dependencies done ***
Dec 13 03:07:57 np0005557965.novalocal dracut[1267]: *** Resolving executable dependencies ***
Dec 13 03:07:57 np0005557965.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 03:07:59 np0005557965.novalocal dracut[1267]: *** Resolving executable dependencies done ***
Dec 13 03:07:59 np0005557965.novalocal dracut[1267]: *** Generating early-microcode cpio image ***
Dec 13 03:07:59 np0005557965.novalocal dracut[1267]: *** Store current command line parameters ***
Dec 13 03:07:59 np0005557965.novalocal dracut[1267]: Stored kernel commandline:
Dec 13 03:07:59 np0005557965.novalocal dracut[1267]: No dracut internal kernel commandline stored in the initramfs
Dec 13 03:07:59 np0005557965.novalocal dracut[1267]: *** Install squash loader ***
Dec 13 03:08:00 np0005557965.novalocal dracut[1267]: *** Squashing the files inside the initramfs ***
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: *** Squashing the files inside the initramfs done ***
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: *** Hardlinking files ***
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: Mode:           real
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: Files:          50
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: Linked:         0 files
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: Compared:       0 xattrs
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: Compared:       0 files
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: Saved:          0 B
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: Duration:       0.001218 seconds
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: *** Hardlinking files done ***
Dec 13 03:08:01 np0005557965.novalocal dracut[1267]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec 13 03:08:02 np0005557965.novalocal kdumpctl[1014]: kdump: kexec: loaded kdump kernel
Dec 13 03:08:02 np0005557965.novalocal kdumpctl[1014]: kdump: Starting kdump: [OK]
Dec 13 03:08:02 np0005557965.novalocal systemd[1]: Finished Crash recovery kernel arming.
Dec 13 03:08:02 np0005557965.novalocal systemd[1]: Startup finished in 1.664s (kernel) + 2.364s (initrd) + 17.346s (userspace) = 21.375s.
Dec 13 03:08:09 np0005557965.novalocal sshd-session[4294]: Accepted publickey for zuul from 38.102.83.114 port 55004 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Dec 13 03:08:09 np0005557965.novalocal systemd[1]: Created slice User Slice of UID 1000.
Dec 13 03:08:09 np0005557965.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec 13 03:08:09 np0005557965.novalocal systemd-logind[796]: New session 1 of user zuul.
Dec 13 03:08:09 np0005557965.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec 13 03:08:09 np0005557965.novalocal systemd[1]: Starting User Manager for UID 1000...
Dec 13 03:08:09 np0005557965.novalocal systemd[4298]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Queued start job for default target Main User Target.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Created slice User Application Slice.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Reached target Paths.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Reached target Timers.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Starting D-Bus User Message Bus Socket...
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Starting Create User's Volatile Files and Directories...
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Listening on D-Bus User Message Bus Socket.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Reached target Sockets.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Finished Create User's Volatile Files and Directories.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Reached target Basic System.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Reached target Main User Target.
Dec 13 03:08:10 np0005557965.novalocal systemd[4298]: Startup finished in 121ms.
Dec 13 03:08:10 np0005557965.novalocal systemd[1]: Started User Manager for UID 1000.
Dec 13 03:08:10 np0005557965.novalocal systemd[1]: Started Session 1 of User zuul.
Dec 13 03:08:10 np0005557965.novalocal sshd-session[4294]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:08:10 np0005557965.novalocal python3[4381]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:08:13 np0005557965.novalocal python3[4409]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:08:17 np0005557965.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 03:08:19 np0005557965.novalocal python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:08:20 np0005557965.novalocal python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec 13 03:08:22 np0005557965.novalocal python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1Yy978d2Mo3D/tLBB3vvK8oPQrVb8H1fKCGGTHUMpxlmPcC4fGaHs0AM0fKq/ibylx/A6ZOfap4wp4KnNLXiCowW2OucEvgXsKsqz5dqJDvtG/AcJyualZAyBbctj5LC2ysXI/Zyf4g5SeIe6YgMd0g/D5yNqjlDPBCs8/umqxUPczNi9OMcGd+2hj2ewNdGGBuDAw+aPc2MKPLwoDm2Hju6cq6PM3VPf04qaCGjvNwdt2hBP5NbX1Ey6xf8y+KuPC+LawVktWer89DaZgV4cOj+rasHJkpfBQDrVZomIHnVRL+MMKcDLvACLCILinaUf2P3U7NrV3sV0y0h9pzAQ5HixX33HzJDQXVO58pWoA86++NgyvyAQ4ahzicZB9C3AXhbRUJfC45gGnCdge0/remDQU7zhOobZ7S3wVmaqND+IgNkKGrqOKzemLFyaDrt63sWZ4FqFEh1AsrUgzkOQoaQGLu496lTN3d5ljhu/h6uDAjVQB+l39/1q9oobtZ8= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:22 np0005557965.novalocal python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:22 np0005557965.novalocal python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:08:23 np0005557965.novalocal python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765595302.548861-207-214157923349146/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=34b692cd1488425e8647f92c9de0380c_id_rsa follow=False checksum=a1b2e90e6a84ee868970d0da50d693a8e22b11c2 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:23 np0005557965.novalocal python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:08:24 np0005557965.novalocal python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765595303.4935465-240-4342361890594/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=34b692cd1488425e8647f92c9de0380c_id_rsa.pub follow=False checksum=bd0455eb258283cc8187f51460e02de4afe4dc48 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:25 np0005557965.novalocal python3[4971]: ansible-ping Invoked with data=pong
Dec 13 03:08:26 np0005557965.novalocal python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:08:28 np0005557965.novalocal python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec 13 03:08:29 np0005557965.novalocal python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:29 np0005557965.novalocal python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:29 np0005557965.novalocal python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:29 np0005557965.novalocal python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:30 np0005557965.novalocal python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:30 np0005557965.novalocal python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:31 np0005557965.novalocal sudo[5229]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txtuzshqopmprcofaoegrhdfxpxrnvuo ; /usr/bin/python3'
Dec 13 03:08:31 np0005557965.novalocal sudo[5229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:31 np0005557965.novalocal python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:32 np0005557965.novalocal sudo[5229]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:32 np0005557965.novalocal sudo[5307]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myeajooarmagwwvarspqtbbqfvgnmcjm ; /usr/bin/python3'
Dec 13 03:08:32 np0005557965.novalocal sudo[5307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:32 np0005557965.novalocal python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:08:32 np0005557965.novalocal sudo[5307]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:32 np0005557965.novalocal sudo[5380]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buxdewivgnfezjkkxpcqbxjbolmnuxqd ; /usr/bin/python3'
Dec 13 03:08:32 np0005557965.novalocal sudo[5380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:33 np0005557965.novalocal python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765595312.1785364-21-124403234998346/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:33 np0005557965.novalocal sudo[5380]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:33 np0005557965.novalocal python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:34 np0005557965.novalocal python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:34 np0005557965.novalocal python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:34 np0005557965.novalocal python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:34 np0005557965.novalocal python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:35 np0005557965.novalocal python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:35 np0005557965.novalocal python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:35 np0005557965.novalocal python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:36 np0005557965.novalocal python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:36 np0005557965.novalocal python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:36 np0005557965.novalocal python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:36 np0005557965.novalocal python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:37 np0005557965.novalocal python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:37 np0005557965.novalocal python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:37 np0005557965.novalocal python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:38 np0005557965.novalocal python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:38 np0005557965.novalocal python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:38 np0005557965.novalocal python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:38 np0005557965.novalocal python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:39 np0005557965.novalocal python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:39 np0005557965.novalocal python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:39 np0005557965.novalocal python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:40 np0005557965.novalocal python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:40 np0005557965.novalocal python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:40 np0005557965.novalocal python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:40 np0005557965.novalocal python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:08:43 np0005557965.novalocal sudo[6054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmybywrisrdvdpqaupkkeomamzqflvdz ; /usr/bin/python3'
Dec 13 03:08:43 np0005557965.novalocal sudo[6054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:44 np0005557965.novalocal python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 03:08:44 np0005557965.novalocal systemd[1]: Starting Time & Date Service...
Dec 13 03:08:44 np0005557965.novalocal systemd[1]: Started Time & Date Service.
Dec 13 03:08:44 np0005557965.novalocal systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Dec 13 03:08:44 np0005557965.novalocal sudo[6054]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:44 np0005557965.novalocal sudo[6085]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsqiwmqxfrumucoojgxhqjdtnevcosvq ; /usr/bin/python3'
Dec 13 03:08:44 np0005557965.novalocal sudo[6085]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:44 np0005557965.novalocal python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:44 np0005557965.novalocal sudo[6085]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:44 np0005557965.novalocal python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:08:45 np0005557965.novalocal python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765595324.6047864-153-274131459917276/source _original_basename=tmp7a9mt2ot follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:45 np0005557965.novalocal python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:08:46 np0005557965.novalocal python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765595325.4583051-183-280049596809646/source _original_basename=tmpzs0matcu follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:46 np0005557965.novalocal sudo[6505]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yheluoxgeuplsexksqnqddxzuzwaoaum ; /usr/bin/python3'
Dec 13 03:08:46 np0005557965.novalocal sudo[6505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:46 np0005557965.novalocal python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:08:46 np0005557965.novalocal sudo[6505]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:46 np0005557965.novalocal sudo[6578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqzrshfndorervjhnnoqtmjnhywtablq ; /usr/bin/python3'
Dec 13 03:08:46 np0005557965.novalocal sudo[6578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:47 np0005557965.novalocal python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765595326.42463-231-256999596049431/source _original_basename=tmpcqubw5jx follow=False checksum=5af11a2484d4a32bfd779dd7279c8c1bc46ad659 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:47 np0005557965.novalocal sudo[6578]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:47 np0005557965.novalocal python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:08:47 np0005557965.novalocal python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:08:48 np0005557965.novalocal sudo[6732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffscqzmguhyhpczkjnroxtuhqqgtyjat ; /usr/bin/python3'
Dec 13 03:08:48 np0005557965.novalocal sudo[6732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:48 np0005557965.novalocal python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:08:48 np0005557965.novalocal sudo[6732]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:48 np0005557965.novalocal sudo[6805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpzguxkmojcrjjpbiulrthdcknbaqtzq ; /usr/bin/python3'
Dec 13 03:08:48 np0005557965.novalocal sudo[6805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:48 np0005557965.novalocal python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765595328.0973973-273-44952850683783/source _original_basename=tmpjhainmfw follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:08:48 np0005557965.novalocal sudo[6805]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:49 np0005557965.novalocal sudo[6856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzwfjehhmuqzmesaglfladhqbiihnxwj ; /usr/bin/python3'
Dec 13 03:08:49 np0005557965.novalocal sudo[6856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:08:49 np0005557965.novalocal python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-7f84-cfdf-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:08:49 np0005557965.novalocal sudo[6856]: pam_unix(sudo:session): session closed for user root
Dec 13 03:08:49 np0005557965.novalocal python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env _uses_shell=True zuul_log_id=fa163e3b-3c83-7f84-cfdf-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec 13 03:08:51 np0005557965.novalocal python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:09:07 np0005557965.novalocal sudo[6938]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbetcepngnstbsnlltwyfctrfvgsscoo ; /usr/bin/python3'
Dec 13 03:09:07 np0005557965.novalocal sudo[6938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:09:07 np0005557965.novalocal python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:09:07 np0005557965.novalocal sudo[6938]: pam_unix(sudo:session): session closed for user root
Dec 13 03:09:14 np0005557965.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec 13 03:09:40 np0005557965.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec 13 03:09:40 np0005557965.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9685] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 03:09:40 np0005557965.novalocal systemd-udevd[6944]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9822] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9854] settings: (eth1): created default wired connection 'Wired connection 1'
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9858] device (eth1): carrier: link connected
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9861] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9870] policy: auto-activating connection 'Wired connection 1' (df9cc862-e866-3605-9581-a6789d75c0d4)
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9877] device (eth1): Activation: starting connection 'Wired connection 1' (df9cc862-e866-3605-9581-a6789d75c0d4)
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9877] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9880] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9885] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:09:40 np0005557965.novalocal NetworkManager[857]: <info>  [1765595380.9889] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:09:42 np0005557965.novalocal python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-eaf9-ebfb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:09:51 np0005557965.novalocal sudo[7048]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpvxfdiuxsozpdklmbcfvkieacvfgbey ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 13 03:09:51 np0005557965.novalocal sudo[7048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:09:51 np0005557965.novalocal python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:09:51 np0005557965.novalocal sudo[7048]: pam_unix(sudo:session): session closed for user root
Dec 13 03:09:52 np0005557965.novalocal sudo[7121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fybotjwzyhamhsmajtkmwehpwopyyvzk ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 13 03:09:52 np0005557965.novalocal sudo[7121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:09:52 np0005557965.novalocal python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765595391.753658-102-126016015401830/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=066357f6d52a42f21d62cd4877f314d5c38fa5e6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:09:52 np0005557965.novalocal sudo[7121]: pam_unix(sudo:session): session closed for user root
Dec 13 03:09:52 np0005557965.novalocal sudo[7171]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujnszfiszyiqklrewqbkzevgltqdhvzb ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 13 03:09:52 np0005557965.novalocal sudo[7171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:09:53 np0005557965.novalocal python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Stopped Network Manager Wait Online.
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Stopping Network Manager Wait Online...
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Stopping Network Manager...
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3012] caught SIGTERM, shutting down normally.
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3025] dhcp4 (eth0): canceled DHCP transaction
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3025] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3025] dhcp4 (eth0): state changed no lease
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3030] manager: NetworkManager state is now CONNECTING
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3121] dhcp4 (eth1): canceled DHCP transaction
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3121] dhcp4 (eth1): state changed no lease
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[857]: <info>  [1765595393.3193] exiting (success)
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Stopped Network Manager.
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Starting Network Manager...
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.3894] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:9ad30d9f-3581-40c3-b7be-ea7e23726ec3)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.3897] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.3975] manager[0x556fb039c000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Starting Hostname Service...
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Started Hostname Service.
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4832] hostname: hostname: using hostnamed
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4837] hostname: static hostname changed from (none) to "np0005557965.novalocal"
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4846] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4854] manager[0x556fb039c000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4854] manager[0x556fb039c000]: rfkill: WWAN hardware radio set enabled
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4906] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4907] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4908] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4909] manager: Networking is enabled by state file
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4913] settings: Loaded settings plugin: keyfile (internal)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4922] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4975] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4992] dhcp: init: Using DHCP client 'internal'
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.4997] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5006] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5016] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5028] device (lo): Activation: starting connection 'lo' (1b002e4b-157a-4f55-90b2-cce7be34ec02)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5040] device (eth0): carrier: link connected
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5047] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5055] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5056] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5067] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5079] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5089] device (eth1): carrier: link connected
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5096] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5104] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (df9cc862-e866-3605-9581-a6789d75c0d4) (indicated)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5105] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5113] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5127] device (eth1): Activation: starting connection 'Wired connection 1' (df9cc862-e866-3605-9581-a6789d75c0d4)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5139] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Started Network Manager.
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5146] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5151] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5156] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5161] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5168] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5172] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5177] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5183] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5197] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5203] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5217] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5221] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5235] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5240] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5245] device (lo): Activation: successful, device activated.
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5257] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5262] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5324] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal systemd[1]: Starting Network Manager Wait Online...
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5358] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5360] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5363] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5365] device (eth0): Activation: successful, device activated.
Dec 13 03:09:53 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595393.5373] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 03:09:53 np0005557965.novalocal sudo[7171]: pam_unix(sudo:session): session closed for user root
Dec 13 03:09:53 np0005557965.novalocal python3[7258]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-eaf9-ebfb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:10:03 np0005557965.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 03:10:23 np0005557965.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.2661] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 03:10:39 np0005557965.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 03:10:39 np0005557965.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.2926] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.2930] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.2937] device (eth1): Activation: successful, device activated.
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.2943] manager: startup complete
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.2946] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <warn>  [1765595439.2952] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.2960] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal systemd[1]: Finished Network Manager Wait Online.
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3151] dhcp4 (eth1): canceled DHCP transaction
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3152] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3152] dhcp4 (eth1): state changed no lease
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3166] policy: auto-activating connection 'ci-private-network' (93f1a750-c0eb-54ea-a4c2-accf79de8353)
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3169] device (eth1): Activation: starting connection 'ci-private-network' (93f1a750-c0eb-54ea-a4c2-accf79de8353)
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3170] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3172] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3177] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3184] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3228] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3229] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:10:39 np0005557965.novalocal NetworkManager[7186]: <info>  [1765595439.3232] device (eth1): Activation: successful, device activated.
Dec 13 03:10:49 np0005557965.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 03:10:53 np0005557965.novalocal sshd-session[4308]: Received disconnect from 38.102.83.114 port 55004:11: disconnected by user
Dec 13 03:10:53 np0005557965.novalocal sshd-session[4308]: Disconnected from user zuul 38.102.83.114 port 55004
Dec 13 03:10:53 np0005557965.novalocal sshd-session[4294]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:10:53 np0005557965.novalocal systemd-logind[796]: Session 1 logged out. Waiting for processes to exit.
Dec 13 03:10:53 np0005557965.novalocal sshd-session[7287]: Accepted publickey for zuul from 38.102.83.114 port 45704 ssh2: RSA SHA256:MGZVQgYn9gYz1wn3TSQIkaBtr9N7EQQQSyZTc8CRvWU
Dec 13 03:10:53 np0005557965.novalocal systemd-logind[796]: New session 3 of user zuul.
Dec 13 03:10:53 np0005557965.novalocal systemd[1]: Started Session 3 of User zuul.
Dec 13 03:10:53 np0005557965.novalocal sshd-session[7287]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:10:54 np0005557965.novalocal sudo[7366]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxgbholakrbrqpsonnewqehvdrywbxhe ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 13 03:10:54 np0005557965.novalocal sudo[7366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:10:54 np0005557965.novalocal python3[7368]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:10:54 np0005557965.novalocal sudo[7366]: pam_unix(sudo:session): session closed for user root
Dec 13 03:10:54 np0005557965.novalocal sudo[7439]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tluwpcesinsarjlzdrzeoylvoagvwrtq ; OS_CLOUD=vexxhost /usr/bin/python3'
Dec 13 03:10:54 np0005557965.novalocal sudo[7439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:10:54 np0005557965.novalocal python3[7441]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765595453.9911385-267-33204331152937/source _original_basename=tmp67_1q05b follow=False checksum=54620c0b539a8175ea37dad0230381d0d63d1a24 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:10:54 np0005557965.novalocal sudo[7439]: pam_unix(sudo:session): session closed for user root
Dec 13 03:10:54 np0005557965.novalocal systemd[4298]: Starting Mark boot as successful...
Dec 13 03:10:54 np0005557965.novalocal systemd[4298]: Finished Mark boot as successful.
Dec 13 03:10:56 np0005557965.novalocal sshd-session[7290]: Connection closed by 38.102.83.114 port 45704
Dec 13 03:10:56 np0005557965.novalocal sshd-session[7287]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:10:56 np0005557965.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 03:10:56 np0005557965.novalocal systemd-logind[796]: Session 3 logged out. Waiting for processes to exit.
Dec 13 03:10:56 np0005557965.novalocal systemd-logind[796]: Removed session 3.
Dec 13 03:12:14 np0005557965.novalocal chronyd[786]: Selected source 147.189.136.126 (2.centos.pool.ntp.org)
Dec 13 03:13:54 np0005557965.novalocal systemd[4298]: Created slice User Background Tasks Slice.
Dec 13 03:13:54 np0005557965.novalocal systemd[4298]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 03:13:54 np0005557965.novalocal systemd[4298]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 03:15:12 np0005557965.novalocal sshd-session[7474]: Accepted publickey for zuul from 38.102.83.114 port 40018 ssh2: RSA SHA256:MGZVQgYn9gYz1wn3TSQIkaBtr9N7EQQQSyZTc8CRvWU
Dec 13 03:15:12 np0005557965.novalocal systemd-logind[796]: New session 4 of user zuul.
Dec 13 03:15:12 np0005557965.novalocal systemd[1]: Started Session 4 of User zuul.
Dec 13 03:15:12 np0005557965.novalocal sshd-session[7474]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:15:12 np0005557965.novalocal sudo[7501]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuvardaupkxchwxwawqekmhfktyaosug ; /usr/bin/python3'
Dec 13 03:15:12 np0005557965.novalocal sudo[7501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:12 np0005557965.novalocal python3[7503]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163e3b-3c83-52f3-93a7-000000001f53-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:15:12 np0005557965.novalocal sudo[7501]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:13 np0005557965.novalocal sudo[7530]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xresqnsfrbnkoaamwsylolrufrzbfvay ; /usr/bin/python3'
Dec 13 03:15:13 np0005557965.novalocal sudo[7530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:13 np0005557965.novalocal python3[7532]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:15:13 np0005557965.novalocal sudo[7530]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:13 np0005557965.novalocal sudo[7556]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkehqovnbguxkfmmywxnigqmcjonzyta ; /usr/bin/python3'
Dec 13 03:15:13 np0005557965.novalocal sudo[7556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:13 np0005557965.novalocal python3[7558]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:15:13 np0005557965.novalocal sudo[7556]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:13 np0005557965.novalocal sudo[7582]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdzjnhgtnuenpsuogtutgducdqhwvzae ; /usr/bin/python3'
Dec 13 03:15:13 np0005557965.novalocal sudo[7582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:13 np0005557965.novalocal python3[7584]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:15:13 np0005557965.novalocal sudo[7582]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:13 np0005557965.novalocal sudo[7608]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqqdhyvvaceiqjyurgaqlkbfccmovqgg ; /usr/bin/python3'
Dec 13 03:15:13 np0005557965.novalocal sudo[7608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:13 np0005557965.novalocal python3[7610]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:15:13 np0005557965.novalocal sudo[7608]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:14 np0005557965.novalocal sudo[7634]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgrleksvcrizmskkpkawyivomuwvqunl ; /usr/bin/python3'
Dec 13 03:15:14 np0005557965.novalocal sudo[7634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:14 np0005557965.novalocal python3[7636]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:15:14 np0005557965.novalocal sudo[7634]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:14 np0005557965.novalocal sudo[7712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsxmzylbtbdbnaxzncsyuppbjscghmbr ; /usr/bin/python3'
Dec 13 03:15:14 np0005557965.novalocal sudo[7712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:14 np0005557965.novalocal python3[7714]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:15:14 np0005557965.novalocal sudo[7712]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:14 np0005557965.novalocal sudo[7785]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtnocxjilwnynvbmuzyadzskwakjnrnv ; /usr/bin/python3'
Dec 13 03:15:14 np0005557965.novalocal sudo[7785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:15 np0005557965.novalocal python3[7787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765595714.5140603-474-58308526885685/source _original_basename=tmpsgj10tjr follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:15:15 np0005557965.novalocal sudo[7785]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:15 np0005557965.novalocal sudo[7835]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkipmwonkjgdjnffdyjeuudbbgetdjau ; /usr/bin/python3'
Dec 13 03:15:15 np0005557965.novalocal sudo[7835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:15 np0005557965.novalocal python3[7837]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 03:15:15 np0005557965.novalocal systemd[1]: Reloading.
Dec 13 03:15:16 np0005557965.novalocal systemd-rc-local-generator[7856]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:15:16 np0005557965.novalocal sudo[7835]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:17 np0005557965.novalocal sudo[7890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jywyipjetqzwtiqaptjjckfneiaiyhse ; /usr/bin/python3'
Dec 13 03:15:17 np0005557965.novalocal sudo[7890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:17 np0005557965.novalocal python3[7892]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec 13 03:15:17 np0005557965.novalocal sudo[7890]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:17 np0005557965.novalocal sudo[7916]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqdtiocmqpzflsotjbrvvzrrmnnjbqci ; /usr/bin/python3'
Dec 13 03:15:17 np0005557965.novalocal sudo[7916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:17 np0005557965.novalocal python3[7918]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:15:17 np0005557965.novalocal sudo[7916]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:18 np0005557965.novalocal sudo[7944]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skvxddvakzxukqibycpojwestdcucipz ; /usr/bin/python3'
Dec 13 03:15:18 np0005557965.novalocal sudo[7944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:18 np0005557965.novalocal python3[7946]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:15:18 np0005557965.novalocal sudo[7944]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:18 np0005557965.novalocal sudo[7972]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvseanermhlogmabcppfarxfxzdjfqd ; /usr/bin/python3'
Dec 13 03:15:18 np0005557965.novalocal sudo[7972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:18 np0005557965.novalocal python3[7974]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:15:18 np0005557965.novalocal sudo[7972]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:18 np0005557965.novalocal sudo[8000]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnlzzpqpkiqrmkwsynqoinpuoldmyftc ; /usr/bin/python3'
Dec 13 03:15:18 np0005557965.novalocal sudo[8000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:18 np0005557965.novalocal python3[8002]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:15:18 np0005557965.novalocal sudo[8000]: pam_unix(sudo:session): session closed for user root
Dec 13 03:15:19 np0005557965.novalocal python3[8029]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max; _uses_shell=True zuul_log_id=fa163e3b-3c83-52f3-93a7-000000001f5a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:15:19 np0005557965.novalocal python3[8059]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:15:21 np0005557965.novalocal sshd-session[7477]: Connection closed by 38.102.83.114 port 40018
Dec 13 03:15:21 np0005557965.novalocal sshd-session[7474]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:15:21 np0005557965.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 03:15:21 np0005557965.novalocal systemd-logind[796]: Session 4 logged out. Waiting for processes to exit.
Dec 13 03:15:21 np0005557965.novalocal systemd[1]: session-4.scope: Consumed 3.655s CPU time.
Dec 13 03:15:21 np0005557965.novalocal systemd-logind[796]: Removed session 4.
Dec 13 03:15:23 np0005557965.novalocal sshd-session[8067]: Accepted publickey for zuul from 38.102.83.114 port 49932 ssh2: RSA SHA256:MGZVQgYn9gYz1wn3TSQIkaBtr9N7EQQQSyZTc8CRvWU
Dec 13 03:15:23 np0005557965.novalocal systemd-logind[796]: New session 5 of user zuul.
Dec 13 03:15:23 np0005557965.novalocal systemd[1]: Started Session 5 of User zuul.
Dec 13 03:15:23 np0005557965.novalocal sshd-session[8067]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:15:23 np0005557965.novalocal sudo[8094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrxopeehexyoxmaionskezwjrgzqlrts ; /usr/bin/python3'
Dec 13 03:15:23 np0005557965.novalocal sudo[8094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:15:23 np0005557965.novalocal python3[8096]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:15:40 np0005557965.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:15:58 np0005557965.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  Converting 385 SID table entries...
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:16:10 np0005557965.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:16:13 np0005557965.novalocal setsebool[8159]: The virt_use_nfs policy boolean was changed to 1 by root
Dec 13 03:16:13 np0005557965.novalocal setsebool[8159]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  Converting 388 SID table entries...
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  policy capability open_perms=1
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:16:26 np0005557965.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:16:44 np0005557965.novalocal dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 03:16:44 np0005557965.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:16:44 np0005557965.novalocal systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:16:44 np0005557965.novalocal systemd[1]: Reloading.
Dec 13 03:16:44 np0005557965.novalocal systemd-rc-local-generator[8910]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:16:44 np0005557965.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 03:16:46 np0005557965.novalocal sudo[8094]: pam_unix(sudo:session): session closed for user root
Dec 13 03:16:47 np0005557965.novalocal irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Dec 13 03:16:47 np0005557965.novalocal irqbalance[789]: IRQ 27 affinity is now unmanaged
Dec 13 03:16:47 np0005557965.novalocal python3[10581]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot" _uses_shell=True zuul_log_id=fa163e3b-3c83-36ed-7413-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:16:48 np0005557965.novalocal kernel: evm: overlay not supported
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: Starting D-Bus User Message Bus...
Dec 13 03:16:48 np0005557965.novalocal dbus-broker-launch[11864]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec 13 03:16:48 np0005557965.novalocal dbus-broker-launch[11864]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: Started D-Bus User Message Bus.
Dec 13 03:16:48 np0005557965.novalocal dbus-broker-lau[11864]: Ready
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: Created slice Slice /user.
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: podman-11725.scope: unit configures an IP firewall, but not running as root.
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: Started podman-11725.scope.
Dec 13 03:16:48 np0005557965.novalocal systemd[4298]: Started podman-pause-d7a38ef2.scope.
Dec 13 03:16:49 np0005557965.novalocal sudo[12885]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlzhanajuivcjeqertpqnfxcvtikqhij ; /usr/bin/python3'
Dec 13 03:16:49 np0005557965.novalocal sudo[12885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:16:49 np0005557965.novalocal python3[12910]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.18:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.18:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:16:49 np0005557965.novalocal python3[12910]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Dec 13 03:16:49 np0005557965.novalocal sudo[12885]: pam_unix(sudo:session): session closed for user root
Dec 13 03:16:50 np0005557965.novalocal sshd-session[8070]: Connection closed by 38.102.83.114 port 49932
Dec 13 03:16:50 np0005557965.novalocal sshd-session[8067]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:16:50 np0005557965.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 03:16:50 np0005557965.novalocal systemd[1]: session-5.scope: Consumed 1min 11.307s CPU time.
Dec 13 03:16:50 np0005557965.novalocal systemd-logind[796]: Session 5 logged out. Waiting for processes to exit.
Dec 13 03:16:50 np0005557965.novalocal systemd-logind[796]: Removed session 5.
Dec 13 03:17:10 np0005557965.novalocal sshd-session[22948]: Connection closed by 38.102.83.147 port 57810 [preauth]
Dec 13 03:17:10 np0005557965.novalocal sshd-session[22949]: Connection closed by 38.102.83.147 port 57814 [preauth]
Dec 13 03:17:10 np0005557965.novalocal sshd-session[22954]: Unable to negotiate with 38.102.83.147 port 57828: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 13 03:17:10 np0005557965.novalocal sshd-session[22955]: Unable to negotiate with 38.102.83.147 port 57840: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 13 03:17:10 np0005557965.novalocal sshd-session[22952]: Unable to negotiate with 38.102.83.147 port 57848: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 13 03:17:15 np0005557965.novalocal sshd-session[25240]: Accepted publickey for zuul from 38.102.83.114 port 42026 ssh2: RSA SHA256:MGZVQgYn9gYz1wn3TSQIkaBtr9N7EQQQSyZTc8CRvWU
Dec 13 03:17:15 np0005557965.novalocal systemd-logind[796]: New session 6 of user zuul.
Dec 13 03:17:15 np0005557965.novalocal systemd[1]: Started Session 6 of User zuul.
Dec 13 03:17:15 np0005557965.novalocal sshd-session[25240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:17:15 np0005557965.novalocal python3[25348]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMnThwEexGr424rmQzfkLH8nvMHzO7C7+nA8NLrESG/EIHr60opK/GUlHmjN12B5JuOnCeP3I1SpGYkA5+8aV48= zuul@np0005557964.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:17:16 np0005557965.novalocal sudo[25595]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knqshpmvqrijzoosajzsbyimfdwunlbx ; /usr/bin/python3'
Dec 13 03:17:16 np0005557965.novalocal sudo[25595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:17:16 np0005557965.novalocal python3[25605]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMnThwEexGr424rmQzfkLH8nvMHzO7C7+nA8NLrESG/EIHr60opK/GUlHmjN12B5JuOnCeP3I1SpGYkA5+8aV48= zuul@np0005557964.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:17:16 np0005557965.novalocal sudo[25595]: pam_unix(sudo:session): session closed for user root
Dec 13 03:17:17 np0005557965.novalocal sudo[26061]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcqvuikuhsipbrapqmnysdwspbesejjj ; /usr/bin/python3'
Dec 13 03:17:17 np0005557965.novalocal sudo[26061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:17:17 np0005557965.novalocal python3[26072]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005557965.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec 13 03:17:17 np0005557965.novalocal useradd[26165]: new group: name=cloud-admin, GID=1002
Dec 13 03:17:17 np0005557965.novalocal useradd[26165]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Dec 13 03:17:17 np0005557965.novalocal sudo[26061]: pam_unix(sudo:session): session closed for user root
Dec 13 03:17:17 np0005557965.novalocal sudo[26299]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxxzwfwsekphtgdnitigenultrnzhudh ; /usr/bin/python3'
Dec 13 03:17:17 np0005557965.novalocal sudo[26299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:17:17 np0005557965.novalocal python3[26310]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMnThwEexGr424rmQzfkLH8nvMHzO7C7+nA8NLrESG/EIHr60opK/GUlHmjN12B5JuOnCeP3I1SpGYkA5+8aV48= zuul@np0005557964.novalocal manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec 13 03:17:17 np0005557965.novalocal sudo[26299]: pam_unix(sudo:session): session closed for user root
Dec 13 03:17:17 np0005557965.novalocal sudo[26588]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naymhlsxpovjrygbqbscjfkrkqqtsnvj ; /usr/bin/python3'
Dec 13 03:17:17 np0005557965.novalocal sudo[26588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:17:18 np0005557965.novalocal python3[26599]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:17:18 np0005557965.novalocal sudo[26588]: pam_unix(sudo:session): session closed for user root
Dec 13 03:17:18 np0005557965.novalocal sudo[26882]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjnxazwhfuetqrdfqnvwibgmitdclqb ; /usr/bin/python3'
Dec 13 03:17:18 np0005557965.novalocal sudo[26882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:17:18 np0005557965.novalocal python3[26890]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765595837.8267834-135-252172826777341/source _original_basename=tmpdj04j4aj follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:17:18 np0005557965.novalocal sudo[26882]: pam_unix(sudo:session): session closed for user root
Dec 13 03:17:19 np0005557965.novalocal sudo[27262]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afacecrunxelympsqcnsgcxnecuobyzy ; /usr/bin/python3'
Dec 13 03:17:19 np0005557965.novalocal sudo[27262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:17:19 np0005557965.novalocal python3[27272]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec 13 03:17:19 np0005557965.novalocal systemd[1]: Starting Hostname Service...
Dec 13 03:17:19 np0005557965.novalocal systemd[1]: Started Hostname Service.
Dec 13 03:17:19 np0005557965.novalocal systemd-hostnamed[27394]: Changed pretty hostname to 'compute-0'
Dec 13 03:17:19 compute-0 systemd-hostnamed[27394]: Hostname set to <compute-0> (static)
Dec 13 03:17:19 compute-0 NetworkManager[7186]: <info>  [1765595839.4324] hostname: static hostname changed from "np0005557965.novalocal" to "compute-0"
Dec 13 03:17:19 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 03:17:19 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 03:17:19 compute-0 sudo[27262]: pam_unix(sudo:session): session closed for user root
Dec 13 03:17:20 compute-0 sshd-session[25288]: Connection closed by 38.102.83.114 port 42026
Dec 13 03:17:20 compute-0 sshd-session[25240]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:17:20 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Dec 13 03:17:20 compute-0 systemd[1]: session-6.scope: Consumed 2.236s CPU time.
Dec 13 03:17:20 compute-0 systemd-logind[796]: Session 6 logged out. Waiting for processes to exit.
Dec 13 03:17:20 compute-0 systemd-logind[796]: Removed session 6.
Dec 13 03:17:27 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:17:27 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:17:27 compute-0 systemd[1]: man-db-cache-update.service: Consumed 46.661s CPU time.
Dec 13 03:17:27 compute-0 systemd[1]: run-rcfdf9f9d42d142b7994cbbdd2a2681e7.service: Deactivated successfully.
Dec 13 03:17:29 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 03:17:49 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 03:22:17 compute-0 sshd-session[29912]: Accepted publickey for zuul from 38.102.83.147 port 36418 ssh2: RSA SHA256:MGZVQgYn9gYz1wn3TSQIkaBtr9N7EQQQSyZTc8CRvWU
Dec 13 03:22:17 compute-0 systemd-logind[796]: New session 7 of user zuul.
Dec 13 03:22:17 compute-0 systemd[1]: Started Session 7 of User zuul.
Dec 13 03:22:17 compute-0 sshd-session[29912]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:22:17 compute-0 python3[29988]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:22:19 compute-0 sudo[30102]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wngrcygduwrkqoxdnxxmzvyjblycngwg ; /usr/bin/python3'
Dec 13 03:22:19 compute-0 sudo[30102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:19 compute-0 python3[30104]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:22:19 compute-0 sudo[30102]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:19 compute-0 sudo[30175]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffypgsfjvznifrtouitnpauwubhlphib ; /usr/bin/python3'
Dec 13 03:22:19 compute-0 sudo[30175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:20 compute-0 python3[30177]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765596139.229874-33595-275194442157359/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:22:20 compute-0 sudo[30175]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:20 compute-0 sudo[30201]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssukknowowecvyhbgmyfbtbangcigwfn ; /usr/bin/python3'
Dec 13 03:22:20 compute-0 sudo[30201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:20 compute-0 python3[30203]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:22:20 compute-0 sudo[30201]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:20 compute-0 sudo[30274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-naissuttikhwklkeoivumzitnjqcwaav ; /usr/bin/python3'
Dec 13 03:22:20 compute-0 sudo[30274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:20 compute-0 python3[30276]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765596139.229874-33595-275194442157359/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:22:20 compute-0 sudo[30274]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:20 compute-0 sudo[30300]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kechiwvjgwqdjxwilgkfxfgevctphlqa ; /usr/bin/python3'
Dec 13 03:22:20 compute-0 sudo[30300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:20 compute-0 python3[30302]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:22:20 compute-0 sudo[30300]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:21 compute-0 sudo[30373]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhkqkxobcurvcgqbazcxxrfetcoauclr ; /usr/bin/python3'
Dec 13 03:22:21 compute-0 sudo[30373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:21 compute-0 python3[30375]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765596139.229874-33595-275194442157359/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:22:21 compute-0 sudo[30373]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:21 compute-0 sudo[30399]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urciudeililmvvivjugherwyqxpccnzh ; /usr/bin/python3'
Dec 13 03:22:21 compute-0 sudo[30399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:21 compute-0 python3[30401]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:22:21 compute-0 sudo[30399]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:21 compute-0 sudo[30472]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzufphqqthmlvvfdnvjfllpvzfetqzcx ; /usr/bin/python3'
Dec 13 03:22:21 compute-0 sudo[30472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:21 compute-0 python3[30474]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765596139.229874-33595-275194442157359/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:22:21 compute-0 sudo[30472]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:21 compute-0 sudo[30498]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdwwmcdabemqtyyuvbpjsgpmwcyoimsw ; /usr/bin/python3'
Dec 13 03:22:21 compute-0 sudo[30498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:21 compute-0 python3[30500]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:22:22 compute-0 sudo[30498]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:22 compute-0 sudo[30571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrhdmkobqkzxzrzgtqfstzyuwaamsmws ; /usr/bin/python3'
Dec 13 03:22:22 compute-0 sudo[30571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:22 compute-0 python3[30573]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765596139.229874-33595-275194442157359/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:22:22 compute-0 sudo[30571]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:22 compute-0 sudo[30597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xagcizoqomhhqgalxxmtcdvydlkimtvg ; /usr/bin/python3'
Dec 13 03:22:22 compute-0 sudo[30597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:22 compute-0 python3[30599]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:22:22 compute-0 sudo[30597]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:22 compute-0 sudo[30670]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yndkdlflgbmuznayuxdbeoskidphuoxl ; /usr/bin/python3'
Dec 13 03:22:22 compute-0 sudo[30670]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:22 compute-0 python3[30672]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765596139.229874-33595-275194442157359/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:22:22 compute-0 sudo[30670]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:22 compute-0 sudo[30697]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzhxrafpnvmhviczwkhnaiuqzrurckzr ; /usr/bin/python3'
Dec 13 03:22:22 compute-0 sudo[30697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:23 compute-0 python3[30699]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:22:23 compute-0 sudo[30697]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:23 compute-0 sudo[30770]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sziayktcwrbinbzzlbkiwajautesloot ; /usr/bin/python3'
Dec 13 03:22:23 compute-0 sudo[30770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:22:23 compute-0 python3[30772]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765596139.229874-33595-275194442157359/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:22:23 compute-0 sudo[30770]: pam_unix(sudo:session): session closed for user root
Dec 13 03:22:26 compute-0 sshd-session[30799]: Unable to negotiate with 192.168.122.11 port 37040: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Dec 13 03:22:26 compute-0 sshd-session[30800]: Unable to negotiate with 192.168.122.11 port 37044: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Dec 13 03:22:26 compute-0 sshd-session[30801]: Unable to negotiate with 192.168.122.11 port 37056: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Dec 13 03:22:26 compute-0 sshd-session[30797]: Connection closed by 192.168.122.11 port 37026 [preauth]
Dec 13 03:22:26 compute-0 sshd-session[30798]: Connection closed by 192.168.122.11 port 37036 [preauth]
Dec 13 03:22:36 compute-0 python3[30830]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:22:54 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Dec 13 03:22:54 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec 13 03:22:54 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Dec 13 03:22:54 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec 13 03:23:57 compute-0 sshd-session[30836]: error: kex_exchange_identification: read: Connection reset by peer
Dec 13 03:23:57 compute-0 sshd-session[30836]: Connection reset by 91.231.89.142 port 42231
Dec 13 03:24:07 compute-0 sshd-session[30837]: Connection closed by 91.231.89.243 port 34045
Dec 13 03:24:22 compute-0 sshd-session[30839]: banner exchange: Connection from 91.231.89.242 port 44507: invalid format
Dec 13 03:24:25 compute-0 sshd-session[30840]: Connection closed by 91.231.89.240 port 53711
Dec 13 03:27:36 compute-0 sshd-session[29915]: Received disconnect from 38.102.83.147 port 36418:11: disconnected by user
Dec 13 03:27:36 compute-0 sshd-session[29915]: Disconnected from user zuul 38.102.83.147 port 36418
Dec 13 03:27:36 compute-0 sshd-session[29912]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:27:36 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Dec 13 03:27:36 compute-0 systemd[1]: session-7.scope: Consumed 4.690s CPU time.
Dec 13 03:27:36 compute-0 systemd-logind[796]: Session 7 logged out. Waiting for processes to exit.
Dec 13 03:27:36 compute-0 systemd-logind[796]: Removed session 7.
Dec 13 03:31:12 compute-0 sshd-session[30845]: Unable to negotiate with 91.196.152.189 port 51733: no matching host key type found. Their offer: ssh-rsa,ssh-dss [preauth]
Dec 13 03:32:40 compute-0 sshd-session[30847]: Unable to negotiate with 91.231.89.230 port 34799: no matching host key type found. Their offer: ssh-rsa,ssh-dss [preauth]
Dec 13 03:34:15 compute-0 sshd-session[30849]: Accepted publickey for zuul from 192.168.122.30 port 41980 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:34:15 compute-0 systemd-logind[796]: New session 8 of user zuul.
Dec 13 03:34:15 compute-0 systemd[1]: Started Session 8 of User zuul.
Dec 13 03:34:15 compute-0 sshd-session[30849]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:34:16 compute-0 python3.9[31002]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:34:17 compute-0 sudo[31181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciaekvrdzjhdluraapevtuarzsqcpfum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596857.5743294-32-67467120790099/AnsiballZ_command.py'
Dec 13 03:34:17 compute-0 sudo[31181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:18 compute-0 python3.9[31183]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:34:29 compute-0 sudo[31181]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:29 compute-0 sshd-session[30852]: Connection closed by 192.168.122.30 port 41980
Dec 13 03:34:29 compute-0 sshd-session[30849]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:34:29 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Dec 13 03:34:29 compute-0 systemd[1]: session-8.scope: Consumed 8.193s CPU time.
Dec 13 03:34:29 compute-0 systemd-logind[796]: Session 8 logged out. Waiting for processes to exit.
Dec 13 03:34:29 compute-0 systemd-logind[796]: Removed session 8.
Dec 13 03:34:46 compute-0 sshd-session[31240]: Accepted publickey for zuul from 192.168.122.30 port 35818 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:34:46 compute-0 systemd-logind[796]: New session 9 of user zuul.
Dec 13 03:34:46 compute-0 systemd[1]: Started Session 9 of User zuul.
Dec 13 03:34:46 compute-0 sshd-session[31240]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:34:46 compute-0 python3.9[31393]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 03:34:47 compute-0 python3.9[31567]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:34:48 compute-0 sudo[31717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbhlgxkqvgtczwedfrghuwnrlizozkbf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596888.1805577-45-231943632060017/AnsiballZ_command.py'
Dec 13 03:34:48 compute-0 sudo[31717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:48 compute-0 python3.9[31719]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:34:48 compute-0 sudo[31717]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:49 compute-0 sudo[31870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzcvmhcylzimasrwlthlrqodagnrgdkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596889.1898131-57-156314505002751/AnsiballZ_stat.py'
Dec 13 03:34:49 compute-0 sudo[31870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:49 compute-0 python3.9[31872]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:34:49 compute-0 sudo[31870]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:50 compute-0 sudo[32022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huxtopytwkefjnjalfzvokbotnhjqbub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596889.9402716-65-102876357520472/AnsiballZ_file.py'
Dec 13 03:34:50 compute-0 sudo[32022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:50 compute-0 python3.9[32024]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:34:50 compute-0 sudo[32022]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:51 compute-0 sudo[32174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aznbpfrtryergezqvtwdyytltesinqwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596890.7785525-73-33759977407185/AnsiballZ_stat.py'
Dec 13 03:34:51 compute-0 sudo[32174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:51 compute-0 python3.9[32176]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:34:51 compute-0 sudo[32174]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:51 compute-0 sudo[32297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yufuofkslziffwfkmnalyiybpqquynqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596890.7785525-73-33759977407185/AnsiballZ_copy.py'
Dec 13 03:34:51 compute-0 sudo[32297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:51 compute-0 python3.9[32299]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765596890.7785525-73-33759977407185/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:34:51 compute-0 sudo[32297]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:52 compute-0 sudo[32449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxgdcphddslacomoiawyydfzeanlsilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596892.0749087-88-84282333686380/AnsiballZ_setup.py'
Dec 13 03:34:52 compute-0 sudo[32449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:52 compute-0 python3.9[32451]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:34:52 compute-0 sudo[32449]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:53 compute-0 sudo[32605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffdyiwmfukinmxlishdcppygpnxkkipf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596892.937596-96-121675966372385/AnsiballZ_file.py'
Dec 13 03:34:53 compute-0 sudo[32605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:53 compute-0 python3.9[32607]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:34:53 compute-0 sudo[32605]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:53 compute-0 sudo[32757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijvizioezvraebifwaotyymoxwnvryfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596893.5601704-105-54664569192415/AnsiballZ_file.py'
Dec 13 03:34:53 compute-0 sudo[32757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:34:54 compute-0 python3.9[32759]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:34:54 compute-0 sudo[32757]: pam_unix(sudo:session): session closed for user root
Dec 13 03:34:54 compute-0 python3.9[32909]: ansible-ansible.builtin.service_facts Invoked
Dec 13 03:35:00 compute-0 python3.9[33162]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:35:00 compute-0 python3.9[33312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:35:01 compute-0 python3.9[33466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:35:02 compute-0 sudo[33622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trjlvedigkmaqhzxeaeppkggcwqxvtlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596902.1263285-153-84099450453746/AnsiballZ_setup.py'
Dec 13 03:35:02 compute-0 sudo[33622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:35:02 compute-0 python3.9[33624]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:35:02 compute-0 sudo[33622]: pam_unix(sudo:session): session closed for user root
Dec 13 03:35:03 compute-0 sudo[33706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqhuroiuvhycnbrxlfycdgwqikddyjr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765596902.1263285-153-84099450453746/AnsiballZ_dnf.py'
Dec 13 03:35:03 compute-0 sudo[33706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:35:03 compute-0 python3.9[33708]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:35:54 compute-0 systemd[1]: Reloading.
Dec 13 03:35:54 compute-0 systemd-rc-local-generator[33905]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:35:54 compute-0 systemd[1]: Starting dnf makecache...
Dec 13 03:35:54 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec 13 03:35:54 compute-0 dnf[33915]: Failed determining last makecache time.
Dec 13 03:35:54 compute-0 dnf[33915]: delorean-openstack-barbican-42b4c41831408a8e323 168 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 206 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-cinder-1c00d6490d88e436f26ef 212 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-python-stevedore-c4acc5639fd2329372142 208 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 systemd[1]: Reloading.
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-python-cloudkitty-tests-tempest-2c80f8 217 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-os-refresh-config-9bfc52b5049be2d8de61 204 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 203 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 systemd-rc-local-generator[33955]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-python-designate-tests-tempest-347fdbc 161 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-glance-1fd12c29b339f30fe823e 173 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 224 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-manila-3c01b7181572c95dac462 210 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-python-whitebox-neutron-tests-tempest- 195 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-octavia-ba397f07a7331190208c 192 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-watcher-c014f81a8647287f6dcc 173 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-ansible-config_template-5ccaa22121a7ff 208 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 196 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-swift-dc98a8463506ac520c469a 195 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-python-tempestconf-8515371b7cceebd4282 198 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 dnf[33915]: delorean-openstack-heat-ui-013accbfd179753bc3f0 215 kB/s | 3.0 kB     00:00
Dec 13 03:35:55 compute-0 systemd[1]: Reloading.
Dec 13 03:35:55 compute-0 systemd-rc-local-generator[34006]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:35:55 compute-0 dnf[33915]: CentOS Stream 9 - BaseOS                         73 kB/s | 7.3 kB     00:00
Dec 13 03:35:55 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Dec 13 03:35:55 compute-0 dnf[33915]: CentOS Stream 9 - AppStream                      78 kB/s | 7.8 kB     00:00
Dec 13 03:35:55 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Dec 13 03:35:55 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Dec 13 03:35:55 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Dec 13 03:35:55 compute-0 dnf[33915]: CentOS Stream 9 - CRB                            76 kB/s | 7.2 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: CentOS Stream 9 - Extras packages               6.4 kB/s | 8.3 kB     00:01
Dec 13 03:35:57 compute-0 dnf[33915]: dlrn-antelope-testing                           168 kB/s | 3.0 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: dlrn-antelope-build-deps                        175 kB/s | 3.0 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: centos9-rabbitmq                                 25 kB/s | 3.0 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: centos9-storage                                  94 kB/s | 3.0 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: centos9-opstools                                144 kB/s | 3.0 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: NFV SIG OpenvSwitch                              69 kB/s | 3.0 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: repo-setup-centos-appstream                     144 kB/s | 4.4 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: repo-setup-centos-baseos                        161 kB/s | 3.9 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: repo-setup-centos-highavailability              146 kB/s | 3.9 kB     00:00
Dec 13 03:35:57 compute-0 dnf[33915]: repo-setup-centos-powertools                    180 kB/s | 4.3 kB     00:00
Dec 13 03:35:58 compute-0 dnf[33915]: Extra Packages for Enterprise Linux 9 - x86_64   35 kB/s |  11 kB     00:00
Dec 13 03:35:58 compute-0 dnf[33915]: Metadata cache created.
Dec 13 03:35:58 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec 13 03:35:58 compute-0 systemd[1]: Finished dnf makecache.
Dec 13 03:35:58 compute-0 systemd[1]: dnf-makecache.service: Consumed 1.843s CPU time.
Dec 13 03:37:01 compute-0 kernel: SELinux:  Converting 2719 SID table entries...
Dec 13 03:37:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:37:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 03:37:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:37:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:37:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:37:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:37:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:37:02 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec 13 03:37:02 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:37:02 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:37:02 compute-0 systemd[1]: Reloading.
Dec 13 03:37:02 compute-0 systemd-rc-local-generator[34370]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:37:02 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 03:37:02 compute-0 sudo[33706]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:03 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:37:03 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:37:03 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.089s CPU time.
Dec 13 03:37:03 compute-0 systemd[1]: run-r0fa8a089791c4241af00b3b61a8b33e1.service: Deactivated successfully.
Dec 13 03:37:03 compute-0 sudo[35279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdywrduartaipniwnivvtsnipwitvmyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597023.0743442-165-148594156046407/AnsiballZ_command.py'
Dec 13 03:37:03 compute-0 sudo[35279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:03 compute-0 python3.9[35281]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:37:04 compute-0 sudo[35279]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:04 compute-0 sudo[35560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndlilaqxzhvaqbukpbwnhyzrosaipemn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597024.4368682-173-146588766868801/AnsiballZ_selinux.py'
Dec 13 03:37:04 compute-0 sudo[35560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:05 compute-0 python3.9[35562]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 03:37:05 compute-0 sudo[35560]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:05 compute-0 sudo[35712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwkqawjanybeykckcsjvhkjpnoszxfxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597025.539949-184-124346865432186/AnsiballZ_command.py'
Dec 13 03:37:05 compute-0 sudo[35712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:05 compute-0 python3.9[35714]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 03:37:09 compute-0 sudo[35712]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:10 compute-0 sudo[35865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izrrvkqfvsafrdzhwgorcjsiwixxfmqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597030.1826181-192-218429932457873/AnsiballZ_file.py'
Dec 13 03:37:10 compute-0 sudo[35865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:11 compute-0 python3.9[35867]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:37:11 compute-0 sudo[35865]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:11 compute-0 sudo[36017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koetnankmtcomcvrqrsnkzoahlwqshly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597031.4433763-200-67267097766000/AnsiballZ_mount.py'
Dec 13 03:37:11 compute-0 sudo[36017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:12 compute-0 python3.9[36019]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 03:37:12 compute-0 sudo[36017]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:13 compute-0 sudo[36169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seasqywaxgrxvraemfzwnytgtpjczpio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597032.8317406-228-6200408462084/AnsiballZ_file.py'
Dec 13 03:37:13 compute-0 sudo[36169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:13 compute-0 python3.9[36171]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:37:13 compute-0 sudo[36169]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:13 compute-0 sudo[36321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ervmjzaicsluxzwpfcutfuhqvjjimgib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597033.4616616-236-50012868388566/AnsiballZ_stat.py'
Dec 13 03:37:13 compute-0 sudo[36321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:13 compute-0 python3.9[36323]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:37:13 compute-0 sudo[36321]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:14 compute-0 sudo[36444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtdrxvjgwuthzgbgkxiiraqulsoyavts ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597033.4616616-236-50012868388566/AnsiballZ_copy.py'
Dec 13 03:37:14 compute-0 sudo[36444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:14 compute-0 python3.9[36446]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597033.4616616-236-50012868388566/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d21f9684b552829aaa1944df7a5cfc182bb12c99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:37:14 compute-0 sudo[36444]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:15 compute-0 sudo[36596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxnrmuttgpbispceeqyicgiepdrbzpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597034.9142256-260-266734815418350/AnsiballZ_stat.py'
Dec 13 03:37:15 compute-0 sudo[36596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:15 compute-0 python3.9[36598]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:37:15 compute-0 sudo[36596]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:15 compute-0 sudo[36748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkpzlzpowiojxcjjbctiohsvapssljxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597035.509085-268-213281084860383/AnsiballZ_command.py'
Dec 13 03:37:15 compute-0 sudo[36748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:18 compute-0 python3.9[36750]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:37:18 compute-0 sudo[36748]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:18 compute-0 sudo[36902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umdsyeggpvwbyijjdkusdnyzzxiynhzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597038.5490472-276-89056076864535/AnsiballZ_file.py'
Dec 13 03:37:18 compute-0 sudo[36902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:21 compute-0 python3.9[36904]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:37:21 compute-0 sudo[36902]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:22 compute-0 sudo[37054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjctushzwdyrsqurorgdohuujqaebgyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597041.599008-287-75864670782593/AnsiballZ_getent.py'
Dec 13 03:37:22 compute-0 sudo[37054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:22 compute-0 python3.9[37056]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 03:37:22 compute-0 sudo[37054]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:22 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:37:22 compute-0 sudo[37208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlhevlzbotdgxbhpcxwlzzttzmafzfpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597042.4022028-295-178684458767265/AnsiballZ_group.py'
Dec 13 03:37:22 compute-0 sudo[37208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:23 compute-0 python3.9[37210]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 03:37:23 compute-0 groupadd[37211]: group added to /etc/group: name=qemu, GID=107
Dec 13 03:37:23 compute-0 groupadd[37211]: group added to /etc/gshadow: name=qemu
Dec 13 03:37:23 compute-0 groupadd[37211]: new group: name=qemu, GID=107
Dec 13 03:37:23 compute-0 sudo[37208]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:23 compute-0 sudo[37366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kknlhonajjokjjnatusdgplqubvzcopx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597043.387014-303-79945545612258/AnsiballZ_user.py'
Dec 13 03:37:23 compute-0 sudo[37366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:24 compute-0 python3.9[37368]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 03:37:24 compute-0 useradd[37370]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Dec 13 03:37:24 compute-0 sudo[37366]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:24 compute-0 sudo[37526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwymztnqufqebqkhvcpbuzmchlovluaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597044.369404-311-199077003234673/AnsiballZ_getent.py'
Dec 13 03:37:24 compute-0 sudo[37526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:24 compute-0 python3.9[37528]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 03:37:24 compute-0 sudo[37526]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:25 compute-0 sudo[37679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slrfixwqbtafumqecycdedxhwypaelar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597044.9969962-319-6893259805877/AnsiballZ_group.py'
Dec 13 03:37:25 compute-0 sudo[37679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:25 compute-0 python3.9[37681]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 03:37:25 compute-0 groupadd[37682]: group added to /etc/group: name=hugetlbfs, GID=42477
Dec 13 03:37:25 compute-0 groupadd[37682]: group added to /etc/gshadow: name=hugetlbfs
Dec 13 03:37:25 compute-0 groupadd[37682]: new group: name=hugetlbfs, GID=42477
Dec 13 03:37:25 compute-0 sudo[37679]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:25 compute-0 sudo[37837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxhxxidhhwzpnsvdghmmfgnpxjbrexbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597045.728822-328-155861527663587/AnsiballZ_file.py'
Dec 13 03:37:26 compute-0 sudo[37837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:26 compute-0 python3.9[37839]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 03:37:26 compute-0 sudo[37837]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:26 compute-0 sudo[37989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbkksamxycxavcoxqvbeknfanmesxtna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597046.5416-339-92314915585644/AnsiballZ_dnf.py'
Dec 13 03:37:26 compute-0 sudo[37989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:27 compute-0 python3.9[37991]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:37:30 compute-0 sudo[37989]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:30 compute-0 sudo[38142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nufdesjahdolrclsgknpprtruesfbixe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597050.3271086-347-149952990488802/AnsiballZ_file.py'
Dec 13 03:37:30 compute-0 sudo[38142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:30 compute-0 python3.9[38144]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:37:30 compute-0 sudo[38142]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:31 compute-0 sudo[38294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uckvwkcxgupibtxcfkjoixchljztcqax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597050.9640532-355-206709037653803/AnsiballZ_stat.py'
Dec 13 03:37:31 compute-0 sudo[38294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:31 compute-0 python3.9[38296]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:37:31 compute-0 sudo[38294]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:31 compute-0 sudo[38417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigpiaavmzckvcxqquoiotceamnnnvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597050.9640532-355-206709037653803/AnsiballZ_copy.py'
Dec 13 03:37:31 compute-0 sudo[38417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:31 compute-0 python3.9[38419]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765597050.9640532-355-206709037653803/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:37:31 compute-0 sudo[38417]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:32 compute-0 sudo[38569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbqwmohzpguvswzuvnedwduyhnadyafk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597052.1380785-370-62441110833076/AnsiballZ_systemd.py'
Dec 13 03:37:32 compute-0 sudo[38569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:33 compute-0 python3.9[38571]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:37:33 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 13 03:37:33 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 03:37:33 compute-0 kernel: Bridge firewalling registered
Dec 13 03:37:33 compute-0 systemd-modules-load[38575]: Inserted module 'br_netfilter'
Dec 13 03:37:33 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 13 03:37:33 compute-0 sudo[38569]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:33 compute-0 sudo[38728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugoibhgixovfvzjgzlpaavjstbvbutyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597053.323603-378-135357090839756/AnsiballZ_stat.py'
Dec 13 03:37:33 compute-0 sudo[38728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:33 compute-0 python3.9[38730]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:37:33 compute-0 sudo[38728]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:34 compute-0 sudo[38851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfxjohgqwosxshrjzhbovvxtftudsiqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597053.323603-378-135357090839756/AnsiballZ_copy.py'
Dec 13 03:37:34 compute-0 sudo[38851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:34 compute-0 python3.9[38853]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765597053.323603-378-135357090839756/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:37:34 compute-0 sudo[38851]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:35 compute-0 sudo[39003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajoswpcvykyiaapcocnmtbhnmskrnisw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597054.7722542-396-60646338393811/AnsiballZ_dnf.py'
Dec 13 03:37:35 compute-0 sudo[39003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:35 compute-0 python3.9[39005]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:37:37 compute-0 irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Dec 13 03:37:37 compute-0 irqbalance[789]: IRQ 26 affinity is now unmanaged
Dec 13 03:37:39 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Dec 13 03:37:39 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Dec 13 03:37:39 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:37:39 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:37:39 compute-0 systemd[1]: Reloading.
Dec 13 03:37:39 compute-0 systemd-rc-local-generator[39068]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:37:39 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 03:37:40 compute-0 sudo[39003]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:41 compute-0 python3.9[40321]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:37:41 compute-0 python3.9[41258]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 03:37:42 compute-0 python3.9[42125]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:37:43 compute-0 sudo[42965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwmuuvntscgpkbzcdbogbrflzdapqsqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597062.843636-435-157385410501234/AnsiballZ_command.py'
Dec 13 03:37:43 compute-0 sudo[42965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:43 compute-0 python3.9[42985]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:37:43 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 03:37:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:37:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:37:43 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.831s CPU time.
Dec 13 03:37:43 compute-0 systemd[1]: run-re77565c5ae7a45a0a7120b6e7bce65b7.service: Deactivated successfully.
Dec 13 03:37:43 compute-0 systemd[1]: Starting Authorization Manager...
Dec 13 03:37:43 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 03:37:43 compute-0 polkitd[43382]: Started polkitd version 0.117
Dec 13 03:37:43 compute-0 polkitd[43382]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 03:37:43 compute-0 polkitd[43382]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 03:37:43 compute-0 polkitd[43382]: Finished loading, compiling and executing 2 rules
Dec 13 03:37:43 compute-0 systemd[1]: Started Authorization Manager.
Dec 13 03:37:43 compute-0 polkitd[43382]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 03:37:43 compute-0 sudo[42965]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:44 compute-0 sudo[43550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdtsddrstyjgraucprhykjpcfxtazsmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597064.11105-444-240153248072087/AnsiballZ_systemd.py'
Dec 13 03:37:44 compute-0 sudo[43550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:44 compute-0 python3.9[43552]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:37:44 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 03:37:44 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 03:37:44 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 03:37:44 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 03:37:45 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 03:37:45 compute-0 sudo[43550]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:45 compute-0 python3.9[43714]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 03:37:47 compute-0 sudo[43864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxglhuvxtjkpvzcjajrfdggqgotyyzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597067.6602333-501-169810952642730/AnsiballZ_systemd.py'
Dec 13 03:37:47 compute-0 sudo[43864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:48 compute-0 python3.9[43866]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:37:48 compute-0 systemd[1]: Reloading.
Dec 13 03:37:48 compute-0 systemd-rc-local-generator[43895]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:37:48 compute-0 sudo[43864]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:49 compute-0 sudo[44052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzddxhxqfibhwzlkvwwxmgzesjtqzutr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597068.7481332-501-124292147118950/AnsiballZ_systemd.py'
Dec 13 03:37:49 compute-0 sudo[44052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:49 compute-0 python3.9[44054]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:37:49 compute-0 systemd[1]: Reloading.
Dec 13 03:37:49 compute-0 systemd-rc-local-generator[44083]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:37:49 compute-0 sudo[44052]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:50 compute-0 sudo[44241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ylalixgezlkvzcuyzfwhdspdpbdwrqdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597069.8511333-517-133489500551295/AnsiballZ_command.py'
Dec 13 03:37:50 compute-0 sudo[44241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:50 compute-0 python3.9[44243]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:37:50 compute-0 sudo[44241]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:50 compute-0 sudo[44394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaucofldrrcentkzkcugzxfhspjoispv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597070.5199652-525-272387945945430/AnsiballZ_command.py'
Dec 13 03:37:50 compute-0 sudo[44394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:51 compute-0 python3.9[44396]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:37:51 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec 13 03:37:51 compute-0 sudo[44394]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:51 compute-0 sudo[44547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqhgitqxplbtmtbtqxarolxotrttobw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597071.211079-533-123728371529046/AnsiballZ_command.py'
Dec 13 03:37:51 compute-0 sudo[44547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:51 compute-0 python3.9[44549]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:37:53 compute-0 sudo[44547]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:53 compute-0 sudo[44709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzpyxftyfcbjflzbsbeqvddsgivpquiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597073.2828746-541-73829328844932/AnsiballZ_command.py'
Dec 13 03:37:53 compute-0 sudo[44709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:53 compute-0 python3.9[44711]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:37:53 compute-0 sudo[44709]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:54 compute-0 sudo[44862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biuratumbyezuzdzrfiqpfkzvkligxdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597073.9087725-549-56341870328014/AnsiballZ_systemd.py'
Dec 13 03:37:54 compute-0 sudo[44862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:37:54 compute-0 python3.9[44864]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:37:54 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 03:37:54 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Dec 13 03:37:54 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Dec 13 03:37:54 compute-0 systemd[1]: Starting Apply Kernel Variables...
Dec 13 03:37:54 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 03:37:54 compute-0 systemd[1]: Finished Apply Kernel Variables.
Dec 13 03:37:54 compute-0 sudo[44862]: pam_unix(sudo:session): session closed for user root
Dec 13 03:37:55 compute-0 sshd-session[31243]: Connection closed by 192.168.122.30 port 35818
Dec 13 03:37:55 compute-0 sshd-session[31240]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:37:55 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Dec 13 03:37:55 compute-0 systemd[1]: session-9.scope: Consumed 2min 29.336s CPU time.
Dec 13 03:37:55 compute-0 systemd-logind[796]: Session 9 logged out. Waiting for processes to exit.
Dec 13 03:37:55 compute-0 systemd-logind[796]: Removed session 9.
Dec 13 03:38:00 compute-0 sshd-session[44894]: Accepted publickey for zuul from 192.168.122.30 port 43244 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:38:00 compute-0 systemd-logind[796]: New session 10 of user zuul.
Dec 13 03:38:00 compute-0 systemd[1]: Started Session 10 of User zuul.
Dec 13 03:38:00 compute-0 sshd-session[44894]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:38:01 compute-0 python3.9[45047]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:38:02 compute-0 sudo[45201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuxqydhbxmanruqicwhjbihproisustj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597082.4443648-36-147548906749055/AnsiballZ_getent.py'
Dec 13 03:38:02 compute-0 sudo[45201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:03 compute-0 python3.9[45203]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 03:38:03 compute-0 sudo[45201]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:03 compute-0 sudo[45354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzqadxkxayblirsohhohxkqciolfeelt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597083.2901838-44-195965863112243/AnsiballZ_group.py'
Dec 13 03:38:03 compute-0 sudo[45354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:03 compute-0 python3.9[45356]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 03:38:03 compute-0 groupadd[45357]: group added to /etc/group: name=openvswitch, GID=42476
Dec 13 03:38:03 compute-0 groupadd[45357]: group added to /etc/gshadow: name=openvswitch
Dec 13 03:38:03 compute-0 groupadd[45357]: new group: name=openvswitch, GID=42476
Dec 13 03:38:03 compute-0 sudo[45354]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:04 compute-0 sudo[45512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euemudneqfefrarnzzepfovkcdhhfsfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597084.1727467-52-29285008008469/AnsiballZ_user.py'
Dec 13 03:38:04 compute-0 sudo[45512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:04 compute-0 python3.9[45514]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 03:38:04 compute-0 useradd[45516]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Dec 13 03:38:04 compute-0 useradd[45516]: add 'openvswitch' to group 'hugetlbfs'
Dec 13 03:38:04 compute-0 useradd[45516]: add 'openvswitch' to shadow group 'hugetlbfs'
Dec 13 03:38:04 compute-0 sudo[45512]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:05 compute-0 sudo[45672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxriyjewmyfxidtegovqtzghfjleztop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597085.1793706-62-157450176898390/AnsiballZ_setup.py'
Dec 13 03:38:05 compute-0 sudo[45672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:05 compute-0 python3.9[45674]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:38:05 compute-0 sudo[45672]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:06 compute-0 sudo[45756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxupyfkmyvhzgftgdmcnvyiupljyiedj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597085.1793706-62-157450176898390/AnsiballZ_dnf.py'
Dec 13 03:38:06 compute-0 sudo[45756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:06 compute-0 python3.9[45758]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 03:38:09 compute-0 sudo[45756]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:09 compute-0 sudo[45921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdjzprbwofbqclnuvyifxxdccjzfvghg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597089.567869-76-275305655356922/AnsiballZ_dnf.py'
Dec 13 03:38:09 compute-0 sudo[45921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:10 compute-0 python3.9[45923]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:38:23 compute-0 kernel: SELinux:  Converting 2731 SID table entries...
Dec 13 03:38:23 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:38:23 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 03:38:23 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:38:23 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:38:23 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:38:23 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:38:23 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:38:24 compute-0 groupadd[45946]: group added to /etc/group: name=unbound, GID=993
Dec 13 03:38:24 compute-0 groupadd[45946]: group added to /etc/gshadow: name=unbound
Dec 13 03:38:24 compute-0 groupadd[45946]: new group: name=unbound, GID=993
Dec 13 03:38:24 compute-0 useradd[45953]: new user: name=unbound, UID=993, GID=993, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Dec 13 03:38:24 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec 13 03:38:24 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec 13 03:38:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:38:25 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:38:25 compute-0 systemd[1]: Reloading.
Dec 13 03:38:25 compute-0 systemd-rc-local-generator[46450]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:38:25 compute-0 systemd-sysv-generator[46453]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:38:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 03:38:26 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:38:26 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:38:26 compute-0 systemd[1]: run-r35bdc2b204a24eb8bd606dc70cdcdfb2.service: Deactivated successfully.
Dec 13 03:38:26 compute-0 sudo[45921]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:27 compute-0 sudo[47019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oajdzsyegsggagurqkjeroxdkdvpkkmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597106.8535278-84-41548417030204/AnsiballZ_systemd.py'
Dec 13 03:38:27 compute-0 sudo[47019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:27 compute-0 python3.9[47021]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:38:27 compute-0 systemd[1]: Reloading.
Dec 13 03:38:27 compute-0 systemd-rc-local-generator[47049]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:38:27 compute-0 systemd-sysv-generator[47053]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:38:28 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Dec 13 03:38:28 compute-0 chown[47063]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec 13 03:38:28 compute-0 ovs-ctl[47068]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec 13 03:38:28 compute-0 ovs-ctl[47068]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec 13 03:38:28 compute-0 ovs-ctl[47068]: Starting ovsdb-server [  OK  ]
Dec 13 03:38:28 compute-0 ovs-vsctl[47117]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec 13 03:38:28 compute-0 ovs-vsctl[47137]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"9c764fca-6428-461c-aead-7964805997a5\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec 13 03:38:28 compute-0 ovs-ctl[47068]: Configuring Open vSwitch system IDs [  OK  ]
Dec 13 03:38:28 compute-0 ovs-vsctl[47143]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 03:38:28 compute-0 ovs-ctl[47068]: Enabling remote OVSDB managers [  OK  ]
Dec 13 03:38:28 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Dec 13 03:38:28 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec 13 03:38:28 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec 13 03:38:28 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec 13 03:38:28 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Dec 13 03:38:28 compute-0 ovs-ctl[47187]: Inserting openvswitch module [  OK  ]
Dec 13 03:38:28 compute-0 ovs-ctl[47156]: Starting ovs-vswitchd [  OK  ]
Dec 13 03:38:28 compute-0 ovs-vsctl[47208]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec 13 03:38:28 compute-0 ovs-ctl[47156]: Enabling remote OVSDB managers [  OK  ]
Dec 13 03:38:28 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec 13 03:38:28 compute-0 systemd[1]: Starting Open vSwitch...
Dec 13 03:38:28 compute-0 systemd[1]: Finished Open vSwitch.
Dec 13 03:38:28 compute-0 sudo[47019]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:29 compute-0 python3.9[47360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:38:30 compute-0 sudo[47510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjzuabfcxkpmtsyklzcbqthluexwggyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597109.7708354-102-19461408557480/AnsiballZ_sefcontext.py'
Dec 13 03:38:30 compute-0 sudo[47510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:30 compute-0 python3.9[47512]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 03:38:31 compute-0 kernel: SELinux:  Converting 2745 SID table entries...
Dec 13 03:38:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:38:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 03:38:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:38:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:38:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:38:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:38:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:38:31 compute-0 sudo[47510]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:32 compute-0 python3.9[47667]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:38:33 compute-0 sudo[47823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlrurhhhutuuauwfovbnpaduggqmdmvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597113.1125388-120-26532820951224/AnsiballZ_dnf.py'
Dec 13 03:38:33 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec 13 03:38:33 compute-0 sudo[47823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:33 compute-0 python3.9[47825]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:38:35 compute-0 sudo[47823]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:36 compute-0 sudo[47976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dplpizwlmdokaamdjdnjdvxubaltgjei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597115.6538498-128-58938781769018/AnsiballZ_command.py'
Dec 13 03:38:36 compute-0 sudo[47976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:36 compute-0 python3.9[47978]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:38:37 compute-0 sudo[47976]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:37 compute-0 sudo[48263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoyebuayijwatmjweflhxearyfhezkoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597117.2040217-136-215688020952290/AnsiballZ_file.py'
Dec 13 03:38:37 compute-0 sudo[48263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:37 compute-0 python3.9[48265]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 03:38:37 compute-0 sudo[48263]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:38 compute-0 python3.9[48415]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:38:39 compute-0 sudo[48567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeudkmnbplbrtaurfvinzaozfqacatpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597118.824238-152-49638927325955/AnsiballZ_dnf.py'
Dec 13 03:38:39 compute-0 sudo[48567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:39 compute-0 python3.9[48569]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:38:41 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:38:41 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:38:41 compute-0 systemd[1]: Reloading.
Dec 13 03:38:41 compute-0 systemd-rc-local-generator[48602]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:38:41 compute-0 systemd-sysv-generator[48606]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:38:41 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 03:38:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:38:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:38:41 compute-0 systemd[1]: run-r6ad5ac44196642d6b54c107013a4b5fd.service: Deactivated successfully.
Dec 13 03:38:42 compute-0 sudo[48567]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:42 compute-0 sudo[48883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxzmvggcqravpmgdgsrxcakfthlkgeht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597122.2735178-160-118576548596866/AnsiballZ_systemd.py'
Dec 13 03:38:42 compute-0 sudo[48883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:42 compute-0 python3.9[48885]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:38:42 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec 13 03:38:42 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Dec 13 03:38:42 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Dec 13 03:38:42 compute-0 systemd[1]: Stopping Network Manager...
Dec 13 03:38:42 compute-0 NetworkManager[7186]: <info>  [1765597122.8361] caught SIGTERM, shutting down normally.
Dec 13 03:38:42 compute-0 NetworkManager[7186]: <info>  [1765597122.8380] dhcp4 (eth0): canceled DHCP transaction
Dec 13 03:38:42 compute-0 NetworkManager[7186]: <info>  [1765597122.8380] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:38:42 compute-0 NetworkManager[7186]: <info>  [1765597122.8380] dhcp4 (eth0): state changed no lease
Dec 13 03:38:42 compute-0 NetworkManager[7186]: <info>  [1765597122.8385] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 03:38:42 compute-0 NetworkManager[7186]: <info>  [1765597122.8477] exiting (success)
Dec 13 03:38:42 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 03:38:42 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 03:38:42 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec 13 03:38:42 compute-0 systemd[1]: Stopped Network Manager.
Dec 13 03:38:42 compute-0 systemd[1]: NetworkManager.service: Consumed 13.282s CPU time, 4.1M memory peak, read 0B from disk, written 28.5K to disk.
Dec 13 03:38:42 compute-0 systemd[1]: Starting Network Manager...
Dec 13 03:38:42 compute-0 NetworkManager[48899]: <info>  [1765597122.9277] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:9ad30d9f-3581-40c3-b7be-ea7e23726ec3)
Dec 13 03:38:42 compute-0 NetworkManager[48899]: <info>  [1765597122.9278] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec 13 03:38:42 compute-0 NetworkManager[48899]: <info>  [1765597122.9331] manager[0x561e62e2d000]: monitoring kernel firmware directory '/lib/firmware'.
Dec 13 03:38:42 compute-0 systemd[1]: Starting Hostname Service...
Dec 13 03:38:43 compute-0 systemd[1]: Started Hostname Service.
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0103] hostname: hostname: using hostnamed
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0104] hostname: static hostname changed from (none) to "compute-0"
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0109] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0114] manager[0x561e62e2d000]: rfkill: Wi-Fi hardware radio set enabled
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0114] manager[0x561e62e2d000]: rfkill: WWAN hardware radio set enabled
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0133] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0141] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0142] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0142] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0143] manager: Networking is enabled by state file
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0145] settings: Loaded settings plugin: keyfile (internal)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0147] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0168] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0176] dhcp: init: Using DHCP client 'internal'
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0178] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0183] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0187] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0193] device (lo): Activation: starting connection 'lo' (1b002e4b-157a-4f55-90b2-cce7be34ec02)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0201] device (eth0): carrier: link connected
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0204] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0208] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0208] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0214] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0218] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0223] device (eth1): carrier: link connected
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0226] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0230] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (93f1a750-c0eb-54ea-a4c2-accf79de8353) (indicated)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0231] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0234] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0240] device (eth1): Activation: starting connection 'ci-private-network' (93f1a750-c0eb-54ea-a4c2-accf79de8353)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0246] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0253] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec 13 03:38:43 compute-0 systemd[1]: Started Network Manager.
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0255] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0263] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0266] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0270] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0273] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0277] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0282] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0287] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0290] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0300] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0313] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0323] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0326] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0329] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0333] device (lo): Activation: successful, device activated.
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0342] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec 13 03:38:43 compute-0 systemd[1]: Starting Network Manager Wait Online...
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0407] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0413] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0415] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0418] manager: NetworkManager state is now CONNECTED_LOCAL
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0421] device (eth1): Activation: successful, device activated.
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0459] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0462] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0466] manager: NetworkManager state is now CONNECTED_SITE
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0468] device (eth0): Activation: successful, device activated.
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0472] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec 13 03:38:43 compute-0 NetworkManager[48899]: <info>  [1765597123.0475] manager: startup complete
Dec 13 03:38:43 compute-0 systemd[1]: Finished Network Manager Wait Online.
Dec 13 03:38:43 compute-0 sudo[48883]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:43 compute-0 sudo[49110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azjmioeohxylitjjbmmderolvomabkat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597123.2319584-168-67462458263560/AnsiballZ_dnf.py'
Dec 13 03:38:43 compute-0 sudo[49110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:43 compute-0 python3.9[49112]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:38:49 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:38:49 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:38:49 compute-0 systemd[1]: Reloading.
Dec 13 03:38:49 compute-0 systemd-sysv-generator[49164]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:38:49 compute-0 systemd-rc-local-generator[49161]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:38:49 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 03:38:50 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:38:50 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:38:50 compute-0 systemd[1]: run-r54fb3eef0af3429a9db482a2dcdbbf76.service: Deactivated successfully.
Dec 13 03:38:50 compute-0 sudo[49110]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:51 compute-0 sudo[49569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkopxypzlvgqrzcqqgchhktbwjquiosq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597130.9852521-180-44793218485624/AnsiballZ_stat.py'
Dec 13 03:38:51 compute-0 sudo[49569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:51 compute-0 python3.9[49571]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:38:51 compute-0 sudo[49569]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:52 compute-0 sudo[49721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqwxywfjosbnjcujeacpavpnteyuuoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597131.7417212-189-241624170627580/AnsiballZ_ini_file.py'
Dec 13 03:38:52 compute-0 sudo[49721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:52 compute-0 python3.9[49723]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:52 compute-0 sudo[49721]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:52 compute-0 sudo[49875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxikxkgwljhtydfyjdsyksuhffbfjcye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597132.6068535-199-104894728874134/AnsiballZ_ini_file.py'
Dec 13 03:38:52 compute-0 sudo[49875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:53 compute-0 python3.9[49877]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:53 compute-0 sudo[49875]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:53 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 03:38:53 compute-0 sudo[50028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcgmatkrfzawsccncinvjgqvepdjbhmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597133.1399813-199-230471592584670/AnsiballZ_ini_file.py'
Dec 13 03:38:53 compute-0 sudo[50028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:53 compute-0 python3.9[50030]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:53 compute-0 sudo[50028]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:54 compute-0 sudo[50180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqzuclisuswbwpbyhykbyblcdxzsirfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597133.739892-214-227443870679800/AnsiballZ_ini_file.py'
Dec 13 03:38:54 compute-0 sudo[50180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:54 compute-0 python3.9[50182]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:54 compute-0 sudo[50180]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:54 compute-0 sudo[50332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fclsaufrotfnovimksperjsmskkpybhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597134.3655596-214-59770429289159/AnsiballZ_ini_file.py'
Dec 13 03:38:54 compute-0 sudo[50332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:54 compute-0 python3.9[50334]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:54 compute-0 sudo[50332]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:55 compute-0 sudo[50484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weebcummxtmsjcugkghbpntvzjoiovqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597134.9527638-229-163436208451175/AnsiballZ_stat.py'
Dec 13 03:38:55 compute-0 sudo[50484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:55 compute-0 python3.9[50486]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:38:55 compute-0 sudo[50484]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:55 compute-0 sudo[50607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrewgftaycyefzlerqbpfufybnkhievy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597134.9527638-229-163436208451175/AnsiballZ_copy.py'
Dec 13 03:38:55 compute-0 sudo[50607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:56 compute-0 python3.9[50609]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597134.9527638-229-163436208451175/.source _original_basename=.lady6peo follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:56 compute-0 sudo[50607]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:56 compute-0 sudo[50759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipbysmgnnhpduoniszzdpbghfvxkgeob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597136.263367-244-16169134394219/AnsiballZ_file.py'
Dec 13 03:38:56 compute-0 sudo[50759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:56 compute-0 python3.9[50761]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:56 compute-0 sudo[50759]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:57 compute-0 sudo[50911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlfsmembiddjovuhciypogdimpbjvapk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597136.938964-252-254428891939728/AnsiballZ_edpm_os_net_config_mappings.py'
Dec 13 03:38:57 compute-0 sudo[50911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:57 compute-0 python3.9[50913]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec 13 03:38:57 compute-0 sudo[50911]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:57 compute-0 sudo[51063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hipdjaqhmypllnjbssatlxcouemfxbgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597137.7219934-261-61742822293162/AnsiballZ_file.py'
Dec 13 03:38:57 compute-0 sudo[51063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:58 compute-0 python3.9[51065]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:38:58 compute-0 sudo[51063]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:58 compute-0 sudo[51215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsayqavfveugmzdmgxhzqnuraxpeteuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597138.4658282-271-116699145513044/AnsiballZ_stat.py'
Dec 13 03:38:58 compute-0 sudo[51215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:58 compute-0 sudo[51215]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:59 compute-0 sudo[51338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nebtsaumaixhssxcjvnveafrrcydzqbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597138.4658282-271-116699145513044/AnsiballZ_copy.py'
Dec 13 03:38:59 compute-0 sudo[51338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:38:59 compute-0 sudo[51338]: pam_unix(sudo:session): session closed for user root
Dec 13 03:38:59 compute-0 sudo[51490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqwplwhpymktnnzorrjoxmseouttzbnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597139.5913374-286-244166725032802/AnsiballZ_slurp.py'
Dec 13 03:38:59 compute-0 sudo[51490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:00 compute-0 python3.9[51492]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec 13 03:39:00 compute-0 sudo[51490]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:01 compute-0 sudo[51665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijyaknxrhzitefxvkkebytfytmdsntfm ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597140.404542-295-115101347396130/async_wrapper.py j951989478756 300 /home/zuul/.ansible/tmp/ansible-tmp-1765597140.404542-295-115101347396130/AnsiballZ_edpm_os_net_config.py _'
Dec 13 03:39:01 compute-0 sudo[51665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:01 compute-0 ansible-async_wrapper.py[51667]: Invoked with j951989478756 300 /home/zuul/.ansible/tmp/ansible-tmp-1765597140.404542-295-115101347396130/AnsiballZ_edpm_os_net_config.py _
Dec 13 03:39:01 compute-0 ansible-async_wrapper.py[51670]: Starting module and watcher
Dec 13 03:39:01 compute-0 ansible-async_wrapper.py[51670]: Start watching 51671 (300)
Dec 13 03:39:01 compute-0 ansible-async_wrapper.py[51671]: Start module (51671)
Dec 13 03:39:01 compute-0 ansible-async_wrapper.py[51667]: Return async_wrapper task started.
Dec 13 03:39:01 compute-0 sudo[51665]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:01 compute-0 python3.9[51672]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Dec 13 03:39:02 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec 13 03:39:02 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec 13 03:39:02 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec 13 03:39:02 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec 13 03:39:02 compute-0 kernel: cfg80211: failed to load regulatory.db
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.0855] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.0877] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1471] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1473] audit: op="connection-add" uuid="a1fd634c-4f98-492a-99ed-517d3ad49a26" name="br-ex-br" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1489] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1491] audit: op="connection-add" uuid="36b296d4-3795-4ba0-ac0b-75ff22b82628" name="br-ex-port" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1502] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1504] audit: op="connection-add" uuid="cef319f2-d1b5-47a3-98f9-375696e3e1ec" name="eth1-port" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1516] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1517] audit: op="connection-add" uuid="10489818-d501-443e-8aa0-a60ac13cebb7" name="vlan20-port" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1528] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1530] audit: op="connection-add" uuid="8b60a567-280b-483f-9deb-93f2a1355bd0" name="vlan21-port" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1540] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1542] audit: op="connection-add" uuid="20346c9c-bc67-4a95-8503-c208e67980b8" name="vlan22-port" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1553] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1555] audit: op="connection-add" uuid="23a5ff53-5ddc-4bf9-b0e0-a1bc26d31275" name="vlan23-port" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1576] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1592] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1594] audit: op="connection-add" uuid="3be1664b-15b9-4447-8fc4-5badd33a0421" name="br-ex-if" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1639] audit: op="connection-update" uuid="93f1a750-c0eb-54ea-a4c2-accf79de8353" name="ci-private-network" args="connection.port-type,connection.slave-type,connection.controller,connection.master,connection.timestamp,ipv4.dns,ipv4.never-default,ipv4.addresses,ipv4.routing-rules,ipv4.method,ipv4.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.addresses,ipv6.routing-rules,ipv6.method,ipv6.routes,ovs-external-ids.data,ovs-interface.type" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1655] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1656] audit: op="connection-add" uuid="118ad039-9cf3-4006-9b9a-8604a2465817" name="vlan20-if" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1672] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1674] audit: op="connection-add" uuid="6fd74218-a7c3-4f81-ac24-99a263d2c06b" name="vlan21-if" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1690] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1691] audit: op="connection-add" uuid="113d9a17-6fa2-44af-a1c1-4c8a4ae0d03b" name="vlan22-if" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1707] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1708] audit: op="connection-add" uuid="c1c96ad5-b13b-4a01-9d11-10073a5c46b2" name="vlan23-if" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1721] audit: op="connection-delete" uuid="df9cc862-e866-3605-9581-a6789d75c0d4" name="Wired connection 1" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1733] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1736] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1743] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1746] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (a1fd634c-4f98-492a-99ed-517d3ad49a26)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1747] audit: op="connection-activate" uuid="a1fd634c-4f98-492a-99ed-517d3ad49a26" name="br-ex-br" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1749] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1750] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1755] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1759] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (36b296d4-3795-4ba0-ac0b-75ff22b82628)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1761] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1763] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1767] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1771] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (cef319f2-d1b5-47a3-98f9-375696e3e1ec)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1772] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1773] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1778] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1782] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (10489818-d501-443e-8aa0-a60ac13cebb7)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1784] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1785] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1790] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1794] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8b60a567-280b-483f-9deb-93f2a1355bd0)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1796] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1798] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1802] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1806] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (20346c9c-bc67-4a95-8503-c208e67980b8)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1808] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1809] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1814] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1818] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (23a5ff53-5ddc-4bf9-b0e0-a1bc26d31275)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1818] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1821] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1823] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1829] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1830] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1833] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1837] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (3be1664b-15b9-4447-8fc4-5badd33a0421)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1838] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1842] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1844] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1846] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1847] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1857] device (eth1): disconnecting for new activation request.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1857] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1860] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1862] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1863] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1865] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1866] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1868] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1872] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (118ad039-9cf3-4006-9b9a-8604a2465817)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1872] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1875] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1876] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1877] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1880] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1881] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1884] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1887] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (6fd74218-a7c3-4f81-ac24-99a263d2c06b)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1888] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1890] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1892] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1893] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1895] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1896] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1898] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1902] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (113d9a17-6fa2-44af-a1c1-4c8a4ae0d03b)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1902] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1904] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1906] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1907] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1909] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <warn>  [1765597143.1911] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1914] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1917] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (c1c96ad5-b13b-4a01-9d11-10073a5c46b2)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1917] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1920] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1922] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1923] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1925] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1939] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-timeout,ipv4.dhcp-client-id,ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1941] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1944] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1946] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1966] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1977] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 kernel: ovs-system: entered promiscuous mode
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1984] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1991] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.1993] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2000] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 kernel: Timeout policy base is empty
Dec 13 03:39:03 compute-0 systemd-udevd[51677]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2006] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2010] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2013] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2018] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2025] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2028] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2030] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2036] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2043] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2047] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2050] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2056] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2062] dhcp4 (eth0): canceled DHCP transaction
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2063] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2063] dhcp4 (eth0): state changed no lease
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2065] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2083] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2089] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51673 uid=0 result="fail" reason="Device is not activated"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2096] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec 13 03:39:03 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2264] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2273] dhcp4 (eth0): state changed new lease, address=38.102.83.158
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2283] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2327] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 03:39:03 compute-0 kernel: br-ex: entered promiscuous mode
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2462] device (eth1): Activation: starting connection 'ci-private-network' (93f1a750-c0eb-54ea-a4c2-accf79de8353)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2474] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2477] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2479] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2482] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2483] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2485] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2487] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2492] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Dec 13 03:39:03 compute-0 kernel: vlan22: entered promiscuous mode
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2505] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 systemd-udevd[51678]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2509] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2516] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2520] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2524] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2529] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2532] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2536] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2539] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2544] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2547] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2551] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 kernel: vlan20: entered promiscuous mode
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2564] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2570] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2573] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2581] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 systemd-udevd[51778]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2595] device (eth1): state change: config -> deactivating (reason 'new-activation', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2597] device (eth1): released from controller device eth1
Dec 13 03:39:03 compute-0 kernel: vlan23: entered promiscuous mode
Dec 13 03:39:03 compute-0 systemd-udevd[51777]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2606] device (eth1): disconnecting for new activation request.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2607] audit: op="connection-activate" uuid="93f1a750-c0eb-54ea-a4c2-accf79de8353" name="ci-private-network" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2636] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2647] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2656] device (eth1): Activation: starting connection 'ci-private-network' (93f1a750-c0eb-54ea-a4c2-accf79de8353)
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2676] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2679] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 kernel: vlan21: entered promiscuous mode
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2689] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2690] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51673 uid=0 result="success"
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2704] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2715] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2717] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2725] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2744] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2757] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.2761] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4324] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4334] device (eth1): Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4343] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4355] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4360] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4364] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4374] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4375] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4391] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4397] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4401] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4406] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4414] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4420] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4449] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4456] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4494] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4495] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4496] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4501] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4507] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec 13 03:39:03 compute-0 NetworkManager[48899]: <info>  [1765597143.4515] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec 13 03:39:04 compute-0 NetworkManager[48899]: <info>  [1765597144.5859] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51673 uid=0 result="success"
Dec 13 03:39:04 compute-0 NetworkManager[48899]: <info>  [1765597144.7432] checkpoint[0x561e62e02950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec 13 03:39:04 compute-0 NetworkManager[48899]: <info>  [1765597144.7435] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51673 uid=0 result="success"
Dec 13 03:39:04 compute-0 sudo[52035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avokewdonxcmarunmrkqqvpwfrtsepbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597144.3866673-295-94818898166090/AnsiballZ_async_status.py'
Dec 13 03:39:04 compute-0 sudo[52035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.0333] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51673 uid=0 result="success"
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.0350] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51673 uid=0 result="success"
Dec 13 03:39:05 compute-0 python3.9[52037]: ansible-ansible.legacy.async_status Invoked with jid=j951989478756.51667 mode=status _async_dir=/root/.ansible_async
Dec 13 03:39:05 compute-0 sudo[52035]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.2663] audit: op="networking-control" arg="global-dns-configuration" pid=51673 uid=0 result="success"
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.2694] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.2733] audit: op="networking-control" arg="global-dns-configuration" pid=51673 uid=0 result="success"
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.2767] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51673 uid=0 result="success"
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.4119] checkpoint[0x561e62e02a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec 13 03:39:05 compute-0 NetworkManager[48899]: <info>  [1765597145.4124] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51673 uid=0 result="success"
Dec 13 03:39:05 compute-0 ansible-async_wrapper.py[51671]: Module complete (51671)
Dec 13 03:39:06 compute-0 ansible-async_wrapper.py[51670]: Done in kid B.
Dec 13 03:39:08 compute-0 sudo[52140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hncokoazceuftbmdoztdkdmuvviuwuda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597144.3866673-295-94818898166090/AnsiballZ_async_status.py'
Dec 13 03:39:08 compute-0 sudo[52140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:08 compute-0 python3.9[52142]: ansible-ansible.legacy.async_status Invoked with jid=j951989478756.51667 mode=status _async_dir=/root/.ansible_async
Dec 13 03:39:08 compute-0 sudo[52140]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:08 compute-0 sudo[52240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llrntocqzkqvsgmnbuozbpbtxevdmwcy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597144.3866673-295-94818898166090/AnsiballZ_async_status.py'
Dec 13 03:39:08 compute-0 sudo[52240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:08 compute-0 python3.9[52242]: ansible-ansible.legacy.async_status Invoked with jid=j951989478756.51667 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 03:39:08 compute-0 sudo[52240]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:09 compute-0 sudo[52392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkoooxeayhrhfsaxyxwaaahenwnvuifs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597149.161241-322-264223626569601/AnsiballZ_stat.py'
Dec 13 03:39:09 compute-0 sudo[52392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:09 compute-0 python3.9[52394]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:39:09 compute-0 sudo[52392]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:09 compute-0 sudo[52515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygswrvclzktslgptjbvgvikxiifvmoni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597149.161241-322-264223626569601/AnsiballZ_copy.py'
Dec 13 03:39:09 compute-0 sudo[52515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:10 compute-0 python3.9[52517]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597149.161241-322-264223626569601/.source.returncode _original_basename=.zzyefb0n follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:39:10 compute-0 sudo[52515]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:10 compute-0 sudo[52667]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npsbvkxugbfkjbycoomxkrsdvbxledey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597150.311802-338-137366202296330/AnsiballZ_stat.py'
Dec 13 03:39:10 compute-0 sudo[52667]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:10 compute-0 python3.9[52669]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:39:10 compute-0 sudo[52667]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:11 compute-0 sudo[52790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfukraoiudvyyfwsouozhjieraqikze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597150.311802-338-137366202296330/AnsiballZ_copy.py'
Dec 13 03:39:11 compute-0 sudo[52790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:11 compute-0 python3.9[52792]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597150.311802-338-137366202296330/.source.cfg _original_basename=.kue1rrjt follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:39:11 compute-0 sudo[52790]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:11 compute-0 sudo[52943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usqimrzidizmvsfznpbboqvdhpoxtqkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597151.413154-353-207996539219881/AnsiballZ_systemd.py'
Dec 13 03:39:11 compute-0 sudo[52943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:11 compute-0 python3.9[52945]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:39:11 compute-0 systemd[1]: Reloading Network Manager...
Dec 13 03:39:11 compute-0 NetworkManager[48899]: <info>  [1765597151.9998] audit: op="reload" arg="0" pid=52949 uid=0 result="success"
Dec 13 03:39:12 compute-0 NetworkManager[48899]: <info>  [1765597152.0005] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec 13 03:39:12 compute-0 systemd[1]: Reloaded Network Manager.
Dec 13 03:39:12 compute-0 sudo[52943]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:12 compute-0 sshd-session[44897]: Connection closed by 192.168.122.30 port 43244
Dec 13 03:39:12 compute-0 sshd-session[44894]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:39:12 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 03:39:12 compute-0 systemd[1]: session-10.scope: Consumed 54.154s CPU time.
Dec 13 03:39:12 compute-0 systemd-logind[796]: Session 10 logged out. Waiting for processes to exit.
Dec 13 03:39:12 compute-0 systemd-logind[796]: Removed session 10.
Dec 13 03:39:13 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 03:39:17 compute-0 sshd-session[52981]: Accepted publickey for zuul from 192.168.122.30 port 51816 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:39:17 compute-0 systemd-logind[796]: New session 11 of user zuul.
Dec 13 03:39:17 compute-0 systemd[1]: Started Session 11 of User zuul.
Dec 13 03:39:17 compute-0 sshd-session[52981]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:39:18 compute-0 python3.9[53135]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:39:19 compute-0 python3.9[53289]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:39:20 compute-0 python3.9[53482]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:39:20 compute-0 sshd-session[52984]: Connection closed by 192.168.122.30 port 51816
Dec 13 03:39:20 compute-0 sshd-session[52981]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:39:20 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 03:39:20 compute-0 systemd[1]: session-11.scope: Consumed 2.335s CPU time.
Dec 13 03:39:20 compute-0 systemd-logind[796]: Session 11 logged out. Waiting for processes to exit.
Dec 13 03:39:20 compute-0 systemd-logind[796]: Removed session 11.
Dec 13 03:39:22 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec 13 03:39:25 compute-0 sshd-session[53511]: Accepted publickey for zuul from 192.168.122.30 port 42826 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:39:26 compute-0 systemd-logind[796]: New session 12 of user zuul.
Dec 13 03:39:26 compute-0 systemd[1]: Started Session 12 of User zuul.
Dec 13 03:39:26 compute-0 sshd-session[53511]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:39:27 compute-0 python3.9[53664]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:39:27 compute-0 python3.9[53819]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:39:28 compute-0 sudo[53973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfqqnqfxrzeanienkweppsdgcgcapyjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597168.3385816-40-84492459804723/AnsiballZ_setup.py'
Dec 13 03:39:28 compute-0 sudo[53973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:28 compute-0 python3.9[53975]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:39:29 compute-0 sudo[53973]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:29 compute-0 sudo[54057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfmbkdmwpldqymwqoeujyymuilltqnbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597168.3385816-40-84492459804723/AnsiballZ_dnf.py'
Dec 13 03:39:29 compute-0 sudo[54057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:29 compute-0 python3.9[54059]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:39:31 compute-0 sudo[54057]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:31 compute-0 sudo[54211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kikluxdqrbuurxioeoqkftlvtgsohpzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597171.5935612-52-144732983631676/AnsiballZ_setup.py'
Dec 13 03:39:31 compute-0 sudo[54211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:32 compute-0 python3.9[54213]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:39:32 compute-0 sudo[54211]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:33 compute-0 sudo[54406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agmhbgfwqzffbynjawbiszpcozmcirkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597172.659292-63-253602221195524/AnsiballZ_file.py'
Dec 13 03:39:33 compute-0 sudo[54406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:33 compute-0 python3.9[54408]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:39:33 compute-0 sudo[54406]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:33 compute-0 sudo[54558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvalihkgnvktcddxgnltqvcohefywbgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597173.413682-71-277736125772848/AnsiballZ_command.py'
Dec 13 03:39:33 compute-0 sudo[54558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:33 compute-0 python3.9[54560]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:39:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat4162068291-merged.mount: Deactivated successfully.
Dec 13 03:39:34 compute-0 podman[54561]: 2025-12-13 03:39:34.065338131 +0000 UTC m=+0.049225600 system refresh
Dec 13 03:39:34 compute-0 sudo[54558]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:34 compute-0 sudo[54722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aorpjkzvbznhpemoudgohevhordtlkqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597174.2268329-79-205408655899482/AnsiballZ_stat.py'
Dec 13 03:39:34 compute-0 sudo[54722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:34 compute-0 python3.9[54724]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:39:34 compute-0 sudo[54722]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:35 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:39:35 compute-0 sudo[54845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwcylitphlijboligibgexqaceqtiwwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597174.2268329-79-205408655899482/AnsiballZ_copy.py'
Dec 13 03:39:35 compute-0 sudo[54845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:35 compute-0 python3.9[54847]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597174.2268329-79-205408655899482/.source.json follow=False _original_basename=podman_network_config.j2 checksum=e26f3f441667b5d2f2034849cd03531d322e170f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:39:35 compute-0 sudo[54845]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:35 compute-0 sudo[54997]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-setfqemadcjqzcvtazwqiqqliyntmoov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597175.5909412-94-154963350098386/AnsiballZ_stat.py'
Dec 13 03:39:35 compute-0 sudo[54997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:36 compute-0 python3.9[54999]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:39:36 compute-0 sudo[54997]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:36 compute-0 sudo[55120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dccvhlzkcenqwrdjjxszqegmfmepnotm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597175.5909412-94-154963350098386/AnsiballZ_copy.py'
Dec 13 03:39:36 compute-0 sudo[55120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:36 compute-0 python3.9[55122]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765597175.5909412-94-154963350098386/.source.conf follow=False _original_basename=registries.conf.j2 checksum=1f3eae670902d81b6898b401f0bbba899d0240bf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:39:36 compute-0 sudo[55120]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:37 compute-0 sudo[55272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywjtswvrnytjgntbtmraduppprirbppq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597176.7905853-110-154541680624843/AnsiballZ_ini_file.py'
Dec 13 03:39:37 compute-0 sudo[55272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:37 compute-0 python3.9[55274]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:39:37 compute-0 sudo[55272]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:37 compute-0 sudo[55424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydlwevrcrjeyocpditktwefjdyrzmagk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597177.5189683-110-235905301772028/AnsiballZ_ini_file.py'
Dec 13 03:39:37 compute-0 sudo[55424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:37 compute-0 python3.9[55426]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:39:37 compute-0 sudo[55424]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:38 compute-0 sudo[55576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjgtjlexuezzjplgpjcuaspwhxlhjkci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597178.094575-110-2586902160032/AnsiballZ_ini_file.py'
Dec 13 03:39:38 compute-0 sudo[55576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:38 compute-0 python3.9[55578]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:39:38 compute-0 sudo[55576]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:38 compute-0 sudo[55728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekjyufrkhpidwtkotaiimenyzuleuenl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597178.6859653-110-254501509803345/AnsiballZ_ini_file.py'
Dec 13 03:39:38 compute-0 sudo[55728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:39 compute-0 python3.9[55730]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:39:39 compute-0 sudo[55728]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:39 compute-0 sudo[55880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnfoqjiyuhivhqcmzxrldrzimfcnrxfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597179.3640616-141-181893344116156/AnsiballZ_dnf.py'
Dec 13 03:39:39 compute-0 sudo[55880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:39 compute-0 python3.9[55882]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:39:41 compute-0 sudo[55880]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:41 compute-0 sudo[56033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljaubflzayafyjwajdnyfdtafcaqultq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597181.6196022-152-216226080522687/AnsiballZ_setup.py'
Dec 13 03:39:41 compute-0 sudo[56033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:42 compute-0 python3.9[56035]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:39:42 compute-0 sudo[56033]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:42 compute-0 sudo[56187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzgpiaomgqissjcfwxbdosbhnpqqlmkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597182.3524916-160-141778550052542/AnsiballZ_stat.py'
Dec 13 03:39:42 compute-0 sudo[56187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:42 compute-0 python3.9[56189]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:39:42 compute-0 sudo[56187]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:43 compute-0 sudo[56339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swabzkggkeaikrjtmhqzvncbnvfldizg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597182.99091-169-187235939837821/AnsiballZ_stat.py'
Dec 13 03:39:43 compute-0 sudo[56339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:43 compute-0 python3.9[56341]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:39:43 compute-0 sudo[56339]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:43 compute-0 sudo[56491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhxgtwzikquqmljyfcqtodrylsnyxrxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597183.7391076-179-236836682067858/AnsiballZ_command.py'
Dec 13 03:39:43 compute-0 sudo[56491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:44 compute-0 python3.9[56493]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:39:44 compute-0 sudo[56491]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:44 compute-0 sudo[56644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtyxpacuyhxkvyytqxjxitxkuqhemptc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597184.4268754-189-251193831850389/AnsiballZ_service_facts.py'
Dec 13 03:39:44 compute-0 sudo[56644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:45 compute-0 python3.9[56646]: ansible-service_facts Invoked
Dec 13 03:39:45 compute-0 network[56663]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:39:45 compute-0 network[56664]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:39:45 compute-0 network[56665]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:39:48 compute-0 sudo[56644]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:49 compute-0 sudo[56948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkjnsgqrylqjslmglgrhlqjfgcscfwh ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765597189.1552627-204-172745921246667/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765597189.1552627-204-172745921246667/args'
Dec 13 03:39:49 compute-0 sudo[56948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:49 compute-0 sudo[56948]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:50 compute-0 sudo[57115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvpouhuhgmudttmdbfdmcsghmkodbuoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597189.811734-215-270026045993913/AnsiballZ_dnf.py'
Dec 13 03:39:50 compute-0 sudo[57115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:50 compute-0 python3.9[57117]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:39:51 compute-0 sudo[57115]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:52 compute-0 sudo[57268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucgwyoolmpqfnsrobpafptrmquftbnxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597192.0845995-228-258137390816266/AnsiballZ_package_facts.py'
Dec 13 03:39:52 compute-0 sudo[57268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:52 compute-0 python3.9[57270]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 03:39:53 compute-0 sudo[57268]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:53 compute-0 sudo[57420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthbrjgdbopdwfxedpbsuibvdfjmrdtc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597193.5386913-238-119826479082504/AnsiballZ_stat.py'
Dec 13 03:39:53 compute-0 sudo[57420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:54 compute-0 python3.9[57422]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:39:54 compute-0 sudo[57420]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:54 compute-0 sudo[57545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yigggdxqplmipvqwvhjrjnyerzoxjamv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597193.5386913-238-119826479082504/AnsiballZ_copy.py'
Dec 13 03:39:54 compute-0 sudo[57545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:54 compute-0 python3.9[57547]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597193.5386913-238-119826479082504/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:39:54 compute-0 sudo[57545]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:55 compute-0 sudo[57699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igbtrroxxhznaxmnfiyfobcysipxselb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597194.8200324-253-22041752519568/AnsiballZ_stat.py'
Dec 13 03:39:55 compute-0 sudo[57699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:55 compute-0 python3.9[57701]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:39:55 compute-0 sudo[57699]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:55 compute-0 sudo[57824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibdozhmpagdwwsoqjunypqjgmlhoslik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597194.8200324-253-22041752519568/AnsiballZ_copy.py'
Dec 13 03:39:55 compute-0 sudo[57824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:55 compute-0 python3.9[57826]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597194.8200324-253-22041752519568/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:39:55 compute-0 sudo[57824]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:56 compute-0 sudo[57978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfvasouquqhkwfqvvewybwiruzthvsrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597196.3290806-274-173914268346331/AnsiballZ_lineinfile.py'
Dec 13 03:39:56 compute-0 sudo[57978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:56 compute-0 python3.9[57980]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:39:56 compute-0 sudo[57978]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:57 compute-0 sudo[58132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcvajcftghjmntqwkgtghewcmwrnqkgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597197.4683776-289-120750391997452/AnsiballZ_setup.py'
Dec 13 03:39:57 compute-0 sudo[58132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:58 compute-0 python3.9[58134]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:39:58 compute-0 sudo[58132]: pam_unix(sudo:session): session closed for user root
Dec 13 03:39:59 compute-0 sudo[58216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oowwrfsmiwwiddwmrrynzlbicshjfvop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597197.4683776-289-120750391997452/AnsiballZ_systemd.py'
Dec 13 03:39:59 compute-0 sudo[58216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:39:59 compute-0 python3.9[58218]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:39:59 compute-0 sudo[58216]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:00 compute-0 sudo[58370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwmkztfaiuefmxridzfwresyirbyempr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597200.0690465-305-104642807856907/AnsiballZ_setup.py'
Dec 13 03:40:00 compute-0 sudo[58370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:00 compute-0 python3.9[58372]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:40:00 compute-0 sudo[58370]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:01 compute-0 sudo[58454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etphkrnuprkvmcuflhotyvpoowhoypbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597200.0690465-305-104642807856907/AnsiballZ_systemd.py'
Dec 13 03:40:01 compute-0 sudo[58454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:01 compute-0 python3.9[58456]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:40:01 compute-0 systemd[1]: Stopping NTP client/server...
Dec 13 03:40:01 compute-0 chronyd[786]: chronyd exiting
Dec 13 03:40:01 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Dec 13 03:40:01 compute-0 systemd[1]: Stopped NTP client/server.
Dec 13 03:40:01 compute-0 systemd[1]: Starting NTP client/server...
Dec 13 03:40:01 compute-0 chronyd[58464]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec 13 03:40:01 compute-0 chronyd[58464]: Frequency -27.021 +/- 0.358 ppm read from /var/lib/chrony/drift
Dec 13 03:40:01 compute-0 chronyd[58464]: Loaded seccomp filter (level 2)
Dec 13 03:40:01 compute-0 systemd[1]: Started NTP client/server.
Dec 13 03:40:01 compute-0 sudo[58454]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:01 compute-0 sshd-session[53514]: Connection closed by 192.168.122.30 port 42826
Dec 13 03:40:01 compute-0 sshd-session[53511]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:40:01 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 03:40:01 compute-0 systemd[1]: session-12.scope: Consumed 25.103s CPU time.
Dec 13 03:40:01 compute-0 systemd-logind[796]: Session 12 logged out. Waiting for processes to exit.
Dec 13 03:40:01 compute-0 systemd-logind[796]: Removed session 12.
Dec 13 03:40:07 compute-0 sshd-session[58490]: Accepted publickey for zuul from 192.168.122.30 port 54360 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:40:07 compute-0 systemd-logind[796]: New session 13 of user zuul.
Dec 13 03:40:07 compute-0 systemd[1]: Started Session 13 of User zuul.
Dec 13 03:40:07 compute-0 sshd-session[58490]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:40:08 compute-0 sudo[58643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuyntgxhqfigdhahyowcapwgeilenwny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597207.8496535-22-193182629252966/AnsiballZ_file.py'
Dec 13 03:40:08 compute-0 sudo[58643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:08 compute-0 python3.9[58645]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:08 compute-0 sudo[58643]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:09 compute-0 sudo[58795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmawpbpymrypcruqfxbgnxpvsoimpqmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597208.7696497-34-133125689245063/AnsiballZ_stat.py'
Dec 13 03:40:09 compute-0 sudo[58795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:09 compute-0 python3.9[58797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:09 compute-0 sudo[58795]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:09 compute-0 sudo[58918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdtjkhqkrjutlzcaoqxfpyyvsmgpqskz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597208.7696497-34-133125689245063/AnsiballZ_copy.py'
Dec 13 03:40:09 compute-0 sudo[58918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:10 compute-0 python3.9[58920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597208.7696497-34-133125689245063/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:10 compute-0 sudo[58918]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:10 compute-0 sshd-session[58493]: Connection closed by 192.168.122.30 port 54360
Dec 13 03:40:10 compute-0 sshd-session[58490]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:40:10 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 03:40:10 compute-0 systemd[1]: session-13.scope: Consumed 1.643s CPU time.
Dec 13 03:40:10 compute-0 systemd-logind[796]: Session 13 logged out. Waiting for processes to exit.
Dec 13 03:40:10 compute-0 systemd-logind[796]: Removed session 13.
Dec 13 03:40:15 compute-0 sshd-session[58945]: Accepted publickey for zuul from 192.168.122.30 port 56072 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:40:15 compute-0 systemd-logind[796]: New session 14 of user zuul.
Dec 13 03:40:15 compute-0 systemd[1]: Started Session 14 of User zuul.
Dec 13 03:40:15 compute-0 sshd-session[58945]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:40:16 compute-0 python3.9[59098]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:40:17 compute-0 sudo[59252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stlaykuzqbxyhfftiltbcrehkzkeikoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597217.3142416-33-96587340560322/AnsiballZ_file.py'
Dec 13 03:40:17 compute-0 sudo[59252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:17 compute-0 python3.9[59254]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:17 compute-0 sudo[59252]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:18 compute-0 sudo[59427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndrvcyjeqknisdonmymwccqdhcxrbmeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597218.10413-41-20514869985184/AnsiballZ_stat.py'
Dec 13 03:40:18 compute-0 sudo[59427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:18 compute-0 python3.9[59429]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:18 compute-0 sudo[59427]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:19 compute-0 sudo[59550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgdlhhjxdkudbsknplezxwolhbybnuhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597218.10413-41-20514869985184/AnsiballZ_copy.py'
Dec 13 03:40:19 compute-0 sudo[59550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:19 compute-0 python3.9[59552]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765597218.10413-41-20514869985184/.source.json _original_basename=.l6yd3qxl follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:19 compute-0 sudo[59550]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:20 compute-0 sudo[59702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaculpuxnhsxnspokxlnexojdkkdqnlc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597219.9171152-64-122587180924373/AnsiballZ_stat.py'
Dec 13 03:40:20 compute-0 sudo[59702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:20 compute-0 python3.9[59704]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:20 compute-0 sudo[59702]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:20 compute-0 sudo[59825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryzcsutchoqropixgatohmydecfbajom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597219.9171152-64-122587180924373/AnsiballZ_copy.py'
Dec 13 03:40:20 compute-0 sudo[59825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:20 compute-0 python3.9[59827]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597219.9171152-64-122587180924373/.source _original_basename=.k3h8svpm follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:20 compute-0 sudo[59825]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:21 compute-0 sudo[59977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccaftmqrxyuuididmpwxowoabxlirsdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597221.1536236-80-211127252512249/AnsiballZ_file.py'
Dec 13 03:40:21 compute-0 sudo[59977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:21 compute-0 python3.9[59979]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:40:21 compute-0 sudo[59977]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:22 compute-0 sudo[60129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swhccgrgiswbhyqpnntmeqxjonqfojlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597221.760064-88-160166114188993/AnsiballZ_stat.py'
Dec 13 03:40:22 compute-0 sudo[60129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:22 compute-0 python3.9[60131]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:22 compute-0 sudo[60129]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:22 compute-0 sudo[60252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eawjyaxcldatdbvpmazltgmuuvzdxezt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597221.760064-88-160166114188993/AnsiballZ_copy.py'
Dec 13 03:40:22 compute-0 sudo[60252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:22 compute-0 python3.9[60254]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765597221.760064-88-160166114188993/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:40:22 compute-0 sudo[60252]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:23 compute-0 sudo[60404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owrlcojfceldsetvwessueipggzrgbcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597222.8624156-88-150345981716891/AnsiballZ_stat.py'
Dec 13 03:40:23 compute-0 sudo[60404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:23 compute-0 python3.9[60406]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:23 compute-0 sudo[60404]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:23 compute-0 sudo[60527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejwckzqofvjryolzxtanjipirgaildmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597222.8624156-88-150345981716891/AnsiballZ_copy.py'
Dec 13 03:40:23 compute-0 sudo[60527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:23 compute-0 python3.9[60529]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765597222.8624156-88-150345981716891/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:40:23 compute-0 sudo[60527]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:24 compute-0 sudo[60679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpjajrdsxcytvqurypwjxakcedvnqkcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597223.9934063-117-47888434431816/AnsiballZ_file.py'
Dec 13 03:40:24 compute-0 sudo[60679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:24 compute-0 python3.9[60681]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:24 compute-0 sudo[60679]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:24 compute-0 sudo[60831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmibbidfwredxijwtmwzdplgovsdoxvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597224.5823724-125-3437553296905/AnsiballZ_stat.py'
Dec 13 03:40:24 compute-0 sudo[60831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:25 compute-0 python3.9[60833]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:25 compute-0 sudo[60831]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:25 compute-0 sudo[60954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juqprmxndrcngqbxxjvlsvsjeaoqqzvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597224.5823724-125-3437553296905/AnsiballZ_copy.py'
Dec 13 03:40:25 compute-0 sudo[60954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:25 compute-0 python3.9[60956]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597224.5823724-125-3437553296905/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:25 compute-0 sudo[60954]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:25 compute-0 sudo[61106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npjcgqazvhmhgmkhoyqtgdcvhiunxuuj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597225.7256436-140-247657064386482/AnsiballZ_stat.py'
Dec 13 03:40:25 compute-0 sudo[61106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:26 compute-0 python3.9[61108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:26 compute-0 sudo[61106]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:26 compute-0 sudo[61229]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roqpivpmiywijivymihtwrchyanzuhcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597225.7256436-140-247657064386482/AnsiballZ_copy.py'
Dec 13 03:40:26 compute-0 sudo[61229]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:26 compute-0 python3.9[61231]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597225.7256436-140-247657064386482/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:26 compute-0 sudo[61229]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:27 compute-0 sudo[61381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofsmjyosxlblnuabqtjbwlihwoxmunxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597226.8371775-155-181494096595713/AnsiballZ_systemd.py'
Dec 13 03:40:27 compute-0 sudo[61381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:27 compute-0 python3.9[61383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:40:27 compute-0 systemd[1]: Reloading.
Dec 13 03:40:27 compute-0 systemd-rc-local-generator[61410]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:40:27 compute-0 systemd-sysv-generator[61414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:40:27 compute-0 systemd[1]: Reloading.
Dec 13 03:40:28 compute-0 systemd-rc-local-generator[61446]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:40:28 compute-0 systemd-sysv-generator[61450]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:40:28 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Dec 13 03:40:28 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Dec 13 03:40:28 compute-0 sudo[61381]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:28 compute-0 sudo[61609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndutvitlsckrhlegjifrrmuqzsgyapwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597228.3319-163-276266007674890/AnsiballZ_stat.py'
Dec 13 03:40:28 compute-0 sudo[61609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:28 compute-0 python3.9[61611]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:28 compute-0 sudo[61609]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:29 compute-0 sudo[61732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxvbugetvjppgdrulnhuyfmhkyhxoqcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597228.3319-163-276266007674890/AnsiballZ_copy.py'
Dec 13 03:40:29 compute-0 sudo[61732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:29 compute-0 python3.9[61734]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597228.3319-163-276266007674890/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:29 compute-0 sudo[61732]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:29 compute-0 sudo[61884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fezmyybykjktzjkzbsnpzwoqvbeqgpax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597229.530533-178-140299455845711/AnsiballZ_stat.py'
Dec 13 03:40:29 compute-0 sudo[61884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:29 compute-0 python3.9[61886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:29 compute-0 sudo[61884]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:30 compute-0 sudo[62007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiknmbcnydgmzlzmxxgxkdzosaokqlpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597229.530533-178-140299455845711/AnsiballZ_copy.py'
Dec 13 03:40:30 compute-0 sudo[62007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:30 compute-0 python3.9[62009]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597229.530533-178-140299455845711/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:30 compute-0 sudo[62007]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:30 compute-0 sudo[62159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadqfhhtbjpxafledjksknessdhyjgey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597230.6855464-193-191230562216646/AnsiballZ_systemd.py'
Dec 13 03:40:30 compute-0 sudo[62159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:31 compute-0 python3.9[62161]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:40:31 compute-0 systemd[1]: Reloading.
Dec 13 03:40:31 compute-0 systemd-sysv-generator[62191]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:40:31 compute-0 systemd-rc-local-generator[62188]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:40:31 compute-0 systemd[1]: Reloading.
Dec 13 03:40:31 compute-0 systemd-rc-local-generator[62225]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:40:31 compute-0 systemd-sysv-generator[62228]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:40:31 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 03:40:31 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 03:40:31 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 03:40:31 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 03:40:31 compute-0 sudo[62159]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:32 compute-0 python3.9[62386]: ansible-ansible.builtin.service_facts Invoked
Dec 13 03:40:32 compute-0 network[62403]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:40:32 compute-0 network[62404]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:40:32 compute-0 network[62405]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:40:35 compute-0 sudo[62666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqynwimcgoebdxojqebogkrlguewhmmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597235.1136565-209-79090842937080/AnsiballZ_systemd.py'
Dec 13 03:40:35 compute-0 sudo[62666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:35 compute-0 python3.9[62668]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:40:35 compute-0 systemd[1]: Reloading.
Dec 13 03:40:35 compute-0 systemd-rc-local-generator[62697]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:40:35 compute-0 systemd-sysv-generator[62701]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:40:35 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Dec 13 03:40:36 compute-0 iptables.init[62708]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec 13 03:40:36 compute-0 iptables.init[62708]: iptables: Flushing firewall rules: [  OK  ]
Dec 13 03:40:36 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Dec 13 03:40:36 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Dec 13 03:40:36 compute-0 sudo[62666]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:36 compute-0 sudo[62902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxeoiwxictiexrunzchkyxjhjlymyqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597236.5010865-209-19901036913132/AnsiballZ_systemd.py'
Dec 13 03:40:36 compute-0 sudo[62902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:37 compute-0 python3.9[62904]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:40:37 compute-0 sudo[62902]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:37 compute-0 sudo[63056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uduisvarjhswqxdtjehblwmylbpigsyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597237.3571806-225-12475430649634/AnsiballZ_systemd.py'
Dec 13 03:40:37 compute-0 sudo[63056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:37 compute-0 python3.9[63058]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:40:37 compute-0 systemd[1]: Reloading.
Dec 13 03:40:38 compute-0 systemd-rc-local-generator[63086]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:40:38 compute-0 systemd-sysv-generator[63089]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:40:38 compute-0 systemd[1]: Starting Netfilter Tables...
Dec 13 03:40:38 compute-0 systemd[1]: Finished Netfilter Tables.
Dec 13 03:40:38 compute-0 sudo[63056]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:38 compute-0 sudo[63248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtqqhpcrfjwtdzatrszapbsmqzzvicdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597238.425129-233-119876597045651/AnsiballZ_command.py'
Dec 13 03:40:38 compute-0 sudo[63248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:39 compute-0 python3.9[63250]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:40:39 compute-0 sudo[63248]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:39 compute-0 sudo[63401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vopluzngboclfgzarlgnmptbsauipivf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597239.4144955-247-210718429173304/AnsiballZ_stat.py'
Dec 13 03:40:39 compute-0 sudo[63401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:39 compute-0 python3.9[63403]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:39 compute-0 sudo[63401]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:40 compute-0 sudo[63526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xutndikyigrnxvqxhvlsqljggcpavktu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597239.4144955-247-210718429173304/AnsiballZ_copy.py'
Dec 13 03:40:40 compute-0 sudo[63526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:40 compute-0 python3.9[63528]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597239.4144955-247-210718429173304/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:40 compute-0 sudo[63526]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:40 compute-0 sudo[63679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eusvokfxphlmffnymvwqhimaztdaoais ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597240.6887596-262-29360898783664/AnsiballZ_systemd.py'
Dec 13 03:40:40 compute-0 sudo[63679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:41 compute-0 python3.9[63681]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:40:41 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Dec 13 03:40:41 compute-0 sshd[1005]: Received SIGHUP; restarting.
Dec 13 03:40:41 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Dec 13 03:40:41 compute-0 sshd[1005]: Server listening on 0.0.0.0 port 22.
Dec 13 03:40:41 compute-0 sshd[1005]: Server listening on :: port 22.
Dec 13 03:40:41 compute-0 sudo[63679]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:41 compute-0 sudo[63835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afemtemramgctencbnkdhbvyalbxzcma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597241.4726503-270-198416840671880/AnsiballZ_file.py'
Dec 13 03:40:41 compute-0 sudo[63835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:41 compute-0 python3.9[63837]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:41 compute-0 sudo[63835]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:42 compute-0 sudo[63987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjdqptbtocekwcatoerzanygvzclpafj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597242.106664-278-278268032442262/AnsiballZ_stat.py'
Dec 13 03:40:42 compute-0 sudo[63987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:42 compute-0 python3.9[63989]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:42 compute-0 sudo[63987]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:42 compute-0 sudo[64110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlgtqpdccadrrrqkqfsbupdnikaewfbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597242.106664-278-278268032442262/AnsiballZ_copy.py'
Dec 13 03:40:42 compute-0 sudo[64110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:43 compute-0 python3.9[64112]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597242.106664-278-278268032442262/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:43 compute-0 sudo[64110]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:43 compute-0 sudo[64262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjjflwxdlmkxjyzgsbtodlaewrfroczf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597243.2962985-296-64537390071238/AnsiballZ_timezone.py'
Dec 13 03:40:43 compute-0 sudo[64262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:43 compute-0 python3.9[64264]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 03:40:43 compute-0 systemd[1]: Starting Time & Date Service...
Dec 13 03:40:44 compute-0 systemd[1]: Started Time & Date Service.
Dec 13 03:40:44 compute-0 sudo[64262]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:44 compute-0 sudo[64418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjonypzubsfxrutmgddfiivlvclfuxpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597244.3373883-305-23200257719548/AnsiballZ_file.py'
Dec 13 03:40:44 compute-0 sudo[64418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:44 compute-0 python3.9[64420]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:44 compute-0 sudo[64418]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:45 compute-0 sudo[64570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgobriuyqyfmtmdpoxbpfzjugufkcdep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597245.0147288-313-52399815890985/AnsiballZ_stat.py'
Dec 13 03:40:45 compute-0 sudo[64570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:45 compute-0 python3.9[64572]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:45 compute-0 sudo[64570]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:45 compute-0 sudo[64693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvgzkeyihiymdkctgjlgristnihtflgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597245.0147288-313-52399815890985/AnsiballZ_copy.py'
Dec 13 03:40:45 compute-0 sudo[64693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:45 compute-0 python3.9[64695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597245.0147288-313-52399815890985/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:46 compute-0 sudo[64693]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:46 compute-0 sudo[64845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivqeyhzwoqkfugkfllkrpufleyzgeqov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597246.1618452-328-28744603711046/AnsiballZ_stat.py'
Dec 13 03:40:46 compute-0 sudo[64845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:46 compute-0 python3.9[64847]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:46 compute-0 sudo[64845]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:46 compute-0 sudo[64968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oibtmcodmbmxhiuwjovamaeazpqwpiiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597246.1618452-328-28744603711046/AnsiballZ_copy.py'
Dec 13 03:40:46 compute-0 sudo[64968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:47 compute-0 python3.9[64970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597246.1618452-328-28744603711046/.source.yaml _original_basename=.9jr1v5u8 follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:47 compute-0 sudo[64968]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:47 compute-0 sudo[65120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndwfoielfeumthdbnurmljfftbpcywxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597247.2500343-343-791191054164/AnsiballZ_stat.py'
Dec 13 03:40:47 compute-0 sudo[65120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:47 compute-0 python3.9[65122]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:47 compute-0 sudo[65120]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:47 compute-0 sudo[65243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwtwxmosntqhjncgjugjjdtnifqfffzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597247.2500343-343-791191054164/AnsiballZ_copy.py'
Dec 13 03:40:47 compute-0 sudo[65243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:48 compute-0 python3.9[65245]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597247.2500343-343-791191054164/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:48 compute-0 sudo[65243]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:48 compute-0 sudo[65395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbishgdzupefwiydtodiyfpcftybtljl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597248.3582532-358-23953942075915/AnsiballZ_command.py'
Dec 13 03:40:48 compute-0 sudo[65395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:48 compute-0 python3.9[65397]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:40:48 compute-0 sudo[65395]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:49 compute-0 sudo[65548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnrzjsmmuijtuqrzcltxyfqravxocqfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597249.1418495-366-186850033808610/AnsiballZ_command.py'
Dec 13 03:40:49 compute-0 sudo[65548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:49 compute-0 python3.9[65550]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:40:49 compute-0 sudo[65548]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:50 compute-0 sudo[65701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sryibkjnapvnavyxlprbkhvczutefxrk ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765597249.787393-374-134673551111042/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 03:40:50 compute-0 sudo[65701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:50 compute-0 python3[65703]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 03:40:50 compute-0 sudo[65701]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:50 compute-0 sudo[65853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmcvttziuhukxyflbsbvyzhmmyecxucd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597250.6153195-382-63500620491129/AnsiballZ_stat.py'
Dec 13 03:40:50 compute-0 sudo[65853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:51 compute-0 python3.9[65855]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:51 compute-0 sudo[65853]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:51 compute-0 sudo[65976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yldkefvvtlpqsilmvkoanbyigarvyhga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597250.6153195-382-63500620491129/AnsiballZ_copy.py'
Dec 13 03:40:51 compute-0 sudo[65976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:51 compute-0 python3.9[65978]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597250.6153195-382-63500620491129/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:51 compute-0 sudo[65976]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:52 compute-0 sudo[66128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzfkqfjfhjllxgqaejytvzukuowzfnhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597251.7772985-397-262508687441611/AnsiballZ_stat.py'
Dec 13 03:40:52 compute-0 sudo[66128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:52 compute-0 python3.9[66130]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:52 compute-0 sudo[66128]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:52 compute-0 sudo[66251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbnwurjjiyhpcfbxxysmehanotvselly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597251.7772985-397-262508687441611/AnsiballZ_copy.py'
Dec 13 03:40:52 compute-0 sudo[66251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:52 compute-0 python3.9[66253]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597251.7772985-397-262508687441611/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:52 compute-0 sudo[66251]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:53 compute-0 sudo[66403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdywfrfamiggpflryvzqqkpfrrdwrysx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597253.0395477-412-232076304700025/AnsiballZ_stat.py'
Dec 13 03:40:53 compute-0 sudo[66403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:53 compute-0 python3.9[66405]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:53 compute-0 sudo[66403]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:53 compute-0 sudo[66526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvxljyruklbvwiyebjpfconygzqwuzev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597253.0395477-412-232076304700025/AnsiballZ_copy.py'
Dec 13 03:40:53 compute-0 sudo[66526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:54 compute-0 python3.9[66528]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597253.0395477-412-232076304700025/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:54 compute-0 sudo[66526]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:54 compute-0 sudo[66678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oufmbmluvumkcpthulrzqoaxpiozlyry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597254.1857789-427-196096284308819/AnsiballZ_stat.py'
Dec 13 03:40:54 compute-0 sudo[66678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:54 compute-0 python3.9[66680]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:54 compute-0 sudo[66678]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:55 compute-0 sudo[66801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrdfijegpmwczwvzjgiqkfjzbkurihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597254.1857789-427-196096284308819/AnsiballZ_copy.py'
Dec 13 03:40:55 compute-0 sudo[66801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:55 compute-0 python3.9[66803]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597254.1857789-427-196096284308819/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:55 compute-0 sudo[66801]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:55 compute-0 sudo[66953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyugbrtfyfiqfjotwpysnrsimynxoayn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597255.3647094-442-91938266165456/AnsiballZ_stat.py'
Dec 13 03:40:55 compute-0 sudo[66953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:55 compute-0 python3.9[66955]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:40:55 compute-0 sudo[66953]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:56 compute-0 sudo[67076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpfgnjnprxksromaqrhjnwabyugnqgsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597255.3647094-442-91938266165456/AnsiballZ_copy.py'
Dec 13 03:40:56 compute-0 sudo[67076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:56 compute-0 python3.9[67078]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597255.3647094-442-91938266165456/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:56 compute-0 sudo[67076]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:56 compute-0 sudo[67228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skdzpzarraoatktbjkbrmsuxfltdiycd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597256.6133401-457-50342600887550/AnsiballZ_file.py'
Dec 13 03:40:56 compute-0 sudo[67228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:57 compute-0 python3.9[67230]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:57 compute-0 sudo[67228]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:57 compute-0 sudo[67380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onyyvuiyrqcjljgaiumcwvyfnrprjaot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597257.230533-465-168006323143302/AnsiballZ_command.py'
Dec 13 03:40:57 compute-0 sudo[67380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:57 compute-0 python3.9[67382]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:40:57 compute-0 sudo[67380]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:58 compute-0 sudo[67539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewighgsofashojyfiuakxvvnwyygcoqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597257.8640974-473-118832046627303/AnsiballZ_blockinfile.py'
Dec 13 03:40:58 compute-0 sudo[67539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:58 compute-0 python3.9[67541]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:58 compute-0 sudo[67539]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:59 compute-0 sudo[67692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpsotiqffvskklgabyljedpxsmdcixmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597258.765128-482-123382445083769/AnsiballZ_file.py'
Dec 13 03:40:59 compute-0 sudo[67692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:59 compute-0 python3.9[67694]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:59 compute-0 sudo[67692]: pam_unix(sudo:session): session closed for user root
Dec 13 03:40:59 compute-0 sudo[67844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsmigurjydgkmyhlwihbiaoqrksnurrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597259.4482472-482-76636980824540/AnsiballZ_file.py'
Dec 13 03:40:59 compute-0 sudo[67844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:40:59 compute-0 python3.9[67846]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:40:59 compute-0 sudo[67844]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:00 compute-0 sudo[67996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olnyaugbkwxgmgmwnrpupmhynkodlzft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597260.0594044-497-177721884717580/AnsiballZ_mount.py'
Dec 13 03:41:00 compute-0 sudo[67996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:00 compute-0 python3.9[67998]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 03:41:00 compute-0 sudo[67996]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:01 compute-0 sudo[68149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exbdxqlsvoasdjtiwxgtsdjqhgsiqrtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597260.9246342-497-168183836714160/AnsiballZ_mount.py'
Dec 13 03:41:01 compute-0 sudo[68149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:01 compute-0 python3.9[68151]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 03:41:01 compute-0 sudo[68149]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:01 compute-0 sshd-session[58948]: Connection closed by 192.168.122.30 port 56072
Dec 13 03:41:01 compute-0 sshd-session[58945]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:41:01 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 03:41:01 compute-0 systemd[1]: session-14.scope: Consumed 33.883s CPU time.
Dec 13 03:41:01 compute-0 systemd-logind[796]: Session 14 logged out. Waiting for processes to exit.
Dec 13 03:41:01 compute-0 systemd-logind[796]: Removed session 14.
Dec 13 03:41:07 compute-0 sshd-session[68177]: Accepted publickey for zuul from 192.168.122.30 port 33452 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:41:07 compute-0 systemd-logind[796]: New session 15 of user zuul.
Dec 13 03:41:07 compute-0 systemd[1]: Started Session 15 of User zuul.
Dec 13 03:41:07 compute-0 sshd-session[68177]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:41:08 compute-0 sudo[68330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvkdcnupbbuqagdkcghzrburpbhltbow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597267.6023533-16-192137061769997/AnsiballZ_tempfile.py'
Dec 13 03:41:08 compute-0 sudo[68330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:08 compute-0 python3.9[68332]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 03:41:08 compute-0 sudo[68330]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:08 compute-0 sudo[68482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfgdxjchtvqknmmtxihxdzxxmwwkkkcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597268.3799746-28-146179179712668/AnsiballZ_stat.py'
Dec 13 03:41:08 compute-0 sudo[68482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:08 compute-0 python3.9[68484]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:41:08 compute-0 sudo[68482]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:09 compute-0 sudo[68634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tijfdsfktzzrgrpjhmgqnqgroznppxpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597269.136694-38-63366274845385/AnsiballZ_setup.py'
Dec 13 03:41:09 compute-0 sudo[68634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:09 compute-0 python3.9[68636]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:41:10 compute-0 sudo[68634]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:10 compute-0 sudo[68786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgxbuojfmlzpzfilqrcykfbkkqieqtvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597270.1997983-47-68356184079398/AnsiballZ_blockinfile.py'
Dec 13 03:41:10 compute-0 sudo[68786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:10 compute-0 python3.9[68788]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM/+zIL2NxrneOl4Ijgq1+mjiuRtRxLtOkkDku84YD4MtT5b3Hv5A+8RLBrh4F6PSOlFLMaMJbg010cXPCia2cNL3B+w2lb9EpFUPmqo6cO/RdnYjC0YotqgtrcZGAIyZObG86oA2gAEQ+edZkVVp/nvGm3bPzxWvDHGwFNwtWysCsVfc2u/Ao1VjOGyOXGP450w5o4x9hvpuD6vd1RGLXZsEAB9iaxHFgK4lCHChwRWO6VEE55cKPWu5YjR/N/dJvYWLfVbSoC5PtGtR1wjnY+aO6DlyGCH8jqTzF3h40fxxrAu+sfgylKKH0sOvkugKQzuldVD/Q3mL4XQyLi/EhlvyMHqUT2xr0aiVQRWb6McFHFWo1ruUymYJPXl48xm7xCXHtjWajMoO0g0gRIFHeHRjmYQs/itOmfNlBOvZiYo9XTT40rvjzmvQvVUCRJ8Fq+YSWjD7kq9XHwwloPIltStmIYYpicOD3OVaQpChBpQGX6aBm4CkxT9r0ayHsQFU=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGd4yz03E7KSx67rSJ86GvOZAiazoRraK5NP1md10Q9d
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/gGRW3gLjCVSzpBzXp92wVVIBeqLmRu0H1xxYCUcL6WRbi/C7ipdRUo9/dUYAhMEzG1NJxKRcw2OgECOr1/mc=
                                             create=True mode=0644 path=/tmp/ansible.hjtz2is3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:41:10 compute-0 sudo[68786]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:11 compute-0 sudo[68938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huqoyrhsifvnbmbwvidhthvwgpkhlxaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597270.9423-55-8104996528681/AnsiballZ_command.py'
Dec 13 03:41:11 compute-0 sudo[68938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:11 compute-0 python3.9[68940]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.hjtz2is3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:41:11 compute-0 sudo[68938]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:12 compute-0 sudo[69092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urdhelbhdoxymgfdpckdttubtxmldzad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597271.759639-63-234849508414031/AnsiballZ_file.py'
Dec 13 03:41:12 compute-0 sudo[69092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:12 compute-0 python3.9[69094]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.hjtz2is3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:41:12 compute-0 sudo[69092]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:12 compute-0 sshd-session[68180]: Connection closed by 192.168.122.30 port 33452
Dec 13 03:41:12 compute-0 sshd-session[68177]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:41:12 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 03:41:12 compute-0 systemd[1]: session-15.scope: Consumed 3.110s CPU time.
Dec 13 03:41:12 compute-0 systemd-logind[796]: Session 15 logged out. Waiting for processes to exit.
Dec 13 03:41:12 compute-0 systemd-logind[796]: Removed session 15.
Dec 13 03:41:14 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 03:41:19 compute-0 sshd-session[69122]: Accepted publickey for zuul from 192.168.122.30 port 53940 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:41:19 compute-0 systemd-logind[796]: New session 16 of user zuul.
Dec 13 03:41:19 compute-0 systemd[1]: Started Session 16 of User zuul.
Dec 13 03:41:19 compute-0 sshd-session[69122]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:41:20 compute-0 python3.9[69275]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:41:21 compute-0 sudo[69429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afljgsokwvuwtinfgomjsixufxhnzitb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597280.5220888-32-57383006632104/AnsiballZ_systemd.py'
Dec 13 03:41:21 compute-0 sudo[69429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:21 compute-0 python3.9[69431]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 03:41:21 compute-0 sudo[69429]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:21 compute-0 sudo[69583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buujekqcfhzvkmeootqynfdyrdvsaqcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597281.636327-40-85762942042788/AnsiballZ_systemd.py'
Dec 13 03:41:21 compute-0 sudo[69583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:22 compute-0 python3.9[69585]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:41:22 compute-0 sudo[69583]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:22 compute-0 sudo[69736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odhpupseqrcwjxqyalxnkpjskhxkhstf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597282.502343-49-216894344064139/AnsiballZ_command.py'
Dec 13 03:41:22 compute-0 sudo[69736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:23 compute-0 python3.9[69738]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:41:23 compute-0 sudo[69736]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:23 compute-0 sudo[69889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpettlkcyxzhmdekqshhpuxradsuuhho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597283.3059905-57-25787443616942/AnsiballZ_stat.py'
Dec 13 03:41:23 compute-0 sudo[69889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:23 compute-0 python3.9[69891]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:41:23 compute-0 sudo[69889]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:24 compute-0 sudo[70043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnxufrcqbwevrtbziyfakxenojdkpzmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597284.0511591-65-231912888721609/AnsiballZ_command.py'
Dec 13 03:41:24 compute-0 sudo[70043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:24 compute-0 python3.9[70045]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:41:24 compute-0 sudo[70043]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:25 compute-0 sudo[70198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgiaeijedtahpwzpsswjzemfcoioeiln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597284.7045083-73-199383448012645/AnsiballZ_file.py'
Dec 13 03:41:25 compute-0 sudo[70198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:25 compute-0 python3.9[70200]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:41:25 compute-0 sudo[70198]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:25 compute-0 sshd-session[69125]: Connection closed by 192.168.122.30 port 53940
Dec 13 03:41:25 compute-0 sshd-session[69122]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:41:25 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 03:41:25 compute-0 systemd[1]: session-16.scope: Consumed 4.403s CPU time.
Dec 13 03:41:25 compute-0 systemd-logind[796]: Session 16 logged out. Waiting for processes to exit.
Dec 13 03:41:25 compute-0 systemd-logind[796]: Removed session 16.
Dec 13 03:41:31 compute-0 sshd-session[70225]: Accepted publickey for zuul from 192.168.122.30 port 58798 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:41:31 compute-0 systemd-logind[796]: New session 17 of user zuul.
Dec 13 03:41:31 compute-0 systemd[1]: Started Session 17 of User zuul.
Dec 13 03:41:31 compute-0 sshd-session[70225]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:41:32 compute-0 python3.9[70378]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:41:32 compute-0 sudo[70532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezepjknetehyfqdmkpgmpghmihkkrpxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597292.5320542-34-122989219954990/AnsiballZ_setup.py'
Dec 13 03:41:32 compute-0 sudo[70532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:33 compute-0 python3.9[70534]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:41:33 compute-0 sudo[70532]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:33 compute-0 sudo[70616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdxigdzrozwcdxmxihodfvwwfgcswbli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597292.5320542-34-122989219954990/AnsiballZ_dnf.py'
Dec 13 03:41:33 compute-0 sudo[70616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:34 compute-0 python3.9[70618]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 03:41:35 compute-0 sudo[70616]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:36 compute-0 python3.9[70769]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:41:37 compute-0 python3.9[70920]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 03:41:38 compute-0 python3.9[71070]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:41:38 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:41:39 compute-0 python3.9[71221]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:41:39 compute-0 sshd-session[70228]: Connection closed by 192.168.122.30 port 58798
Dec 13 03:41:39 compute-0 sshd-session[70225]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:41:39 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 03:41:39 compute-0 systemd[1]: session-17.scope: Consumed 6.122s CPU time.
Dec 13 03:41:39 compute-0 systemd-logind[796]: Session 17 logged out. Waiting for processes to exit.
Dec 13 03:41:39 compute-0 systemd-logind[796]: Removed session 17.
Dec 13 03:41:47 compute-0 sshd-session[71246]: Accepted publickey for zuul from 38.102.83.147 port 42276 ssh2: RSA SHA256:MGZVQgYn9gYz1wn3TSQIkaBtr9N7EQQQSyZTc8CRvWU
Dec 13 03:41:47 compute-0 systemd-logind[796]: New session 18 of user zuul.
Dec 13 03:41:47 compute-0 systemd[1]: Started Session 18 of User zuul.
Dec 13 03:41:47 compute-0 sshd-session[71246]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:41:47 compute-0 sudo[71322]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ritwcboqqvqdfkfggogeonalvsafyein ; /usr/bin/python3'
Dec 13 03:41:47 compute-0 sudo[71322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:48 compute-0 useradd[71326]: new group: name=ceph-admin, GID=42478
Dec 13 03:41:48 compute-0 useradd[71326]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Dec 13 03:41:48 compute-0 sudo[71322]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:48 compute-0 sudo[71408]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaanhpssuirtgyigggpfcvzaucdcirlw ; /usr/bin/python3'
Dec 13 03:41:48 compute-0 sudo[71408]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:48 compute-0 sudo[71408]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:48 compute-0 sudo[71481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptvllzjixpfmdpthudzhqsbwvbteatxp ; /usr/bin/python3'
Dec 13 03:41:48 compute-0 sudo[71481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:49 compute-0 sudo[71481]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:49 compute-0 sudo[71531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbcdjqxmyccxlednjbyiytyiomprrwyo ; /usr/bin/python3'
Dec 13 03:41:49 compute-0 sudo[71531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:49 compute-0 sudo[71531]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:49 compute-0 sudo[71557]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hargyzrwqbajxlvnnglzloikjotpjujy ; /usr/bin/python3'
Dec 13 03:41:49 compute-0 sudo[71557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:49 compute-0 sudo[71557]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:50 compute-0 sudo[71583]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovddfclmahzszimqklnwcjlorwyilkzg ; /usr/bin/python3'
Dec 13 03:41:50 compute-0 sudo[71583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:50 compute-0 sudo[71583]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:50 compute-0 sudo[71609]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdmycfyzpnhhfuajcvcrrofinrjsjiol ; /usr/bin/python3'
Dec 13 03:41:50 compute-0 sudo[71609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:50 compute-0 sudo[71609]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:51 compute-0 sudo[71687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apyxspjomiougxngydejmllgooanxmtg ; /usr/bin/python3'
Dec 13 03:41:51 compute-0 sudo[71687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:51 compute-0 sudo[71687]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:51 compute-0 sudo[71760]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvryyixasvwmctvdszmwnbidbbfcvqro ; /usr/bin/python3'
Dec 13 03:41:51 compute-0 sudo[71760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:51 compute-0 sudo[71760]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:51 compute-0 sudo[71862]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhuobnnhnozckazeibfkqvcvdcvveqck ; /usr/bin/python3'
Dec 13 03:41:51 compute-0 sudo[71862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:51 compute-0 sudo[71862]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:52 compute-0 sudo[71935]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpjekmyjydrktcfxpozjfshzmhmsbnes ; /usr/bin/python3'
Dec 13 03:41:52 compute-0 sudo[71935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:52 compute-0 sudo[71935]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:52 compute-0 sudo[71985]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utklffxbuqavoxvlcuzzzqxgrkigzqvk ; /usr/bin/python3'
Dec 13 03:41:52 compute-0 sudo[71985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:53 compute-0 python3[71987]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:41:53 compute-0 sudo[71985]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:54 compute-0 sudo[72080]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhfzrsplgazuqtgyrjuzfmgyysmvtdss ; /usr/bin/python3'
Dec 13 03:41:54 compute-0 sudo[72080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:54 compute-0 python3[72082]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 03:41:55 compute-0 sudo[72080]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:56 compute-0 sudo[72107]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwuunpktqvrnpamfafotdyfomjmqzvk ; /usr/bin/python3'
Dec 13 03:41:56 compute-0 sudo[72107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:56 compute-0 python3[72109]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:41:56 compute-0 sudo[72107]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:56 compute-0 sudo[72133]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iflkmdbqxxxjgxvdgfnjxkjmkzgryebp ; /usr/bin/python3'
Dec 13 03:41:56 compute-0 sudo[72133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:56 compute-0 python3[72135]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:41:56 compute-0 kernel: loop: module loaded
Dec 13 03:41:56 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Dec 13 03:41:56 compute-0 sudo[72133]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:56 compute-0 sudo[72168]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoortmxuktjcrxatzntnoyrcgjpalive ; /usr/bin/python3'
Dec 13 03:41:56 compute-0 sudo[72168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:56 compute-0 python3[72170]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:41:56 compute-0 lvm[72173]: PV /dev/loop3 not used.
Dec 13 03:41:56 compute-0 lvm[72182]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:41:57 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Dec 13 03:41:57 compute-0 sudo[72168]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:57 compute-0 lvm[72184]:   1 logical volume(s) in volume group "ceph_vg0" now active
Dec 13 03:41:57 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Dec 13 03:41:57 compute-0 sudo[72260]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmcnfoxydpkmawaiwhefrfusfgajicfs ; /usr/bin/python3'
Dec 13 03:41:57 compute-0 sudo[72260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:57 compute-0 python3[72262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:41:57 compute-0 sudo[72260]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:57 compute-0 sudo[72333]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtjzhykdvlbbtbylyreyghrgdbhtojzp ; /usr/bin/python3'
Dec 13 03:41:57 compute-0 sudo[72333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:58 compute-0 python3[72335]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597317.179061-36106-156763030749834/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:41:58 compute-0 sudo[72333]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:58 compute-0 sudo[72383]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vngjejozsgwksvikehjcdkifljprxpmk ; /usr/bin/python3'
Dec 13 03:41:58 compute-0 sudo[72383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:58 compute-0 python3[72385]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:41:58 compute-0 systemd[1]: Reloading.
Dec 13 03:41:59 compute-0 systemd-rc-local-generator[72408]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:41:59 compute-0 systemd-sysv-generator[72416]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:41:59 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 13 03:41:59 compute-0 bash[72424]: /dev/loop3: [64513]:4327948 (/var/lib/ceph-osd-0.img)
Dec 13 03:41:59 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 13 03:41:59 compute-0 sudo[72383]: pam_unix(sudo:session): session closed for user root
Dec 13 03:41:59 compute-0 lvm[72425]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:41:59 compute-0 lvm[72425]: VG ceph_vg0 finished
Dec 13 03:41:59 compute-0 sudo[72449]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkjuelsktxjhfblmwamunxsoxyaewhsk ; /usr/bin/python3'
Dec 13 03:41:59 compute-0 sudo[72449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:41:59 compute-0 python3[72451]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 03:42:01 compute-0 sudo[72449]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:01 compute-0 sudo[72477]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uybvwdpxiidzqocosedvacdavswctgna ; /usr/bin/python3'
Dec 13 03:42:01 compute-0 sudo[72477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:01 compute-0 python3[72479]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:42:01 compute-0 sudo[72477]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:01 compute-0 sudo[72503]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ueadgpwdafzinbdlwsolqdgdnhpmxkrx ; /usr/bin/python3'
Dec 13 03:42:01 compute-0 sudo[72503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:01 compute-0 python3[72505]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:42:01 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Dec 13 03:42:01 compute-0 sudo[72503]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:02 compute-0 sudo[72535]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjdesvgcyrgrfsfzsvjslelcfwpduhgt ; /usr/bin/python3'
Dec 13 03:42:02 compute-0 sudo[72535]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:02 compute-0 python3[72537]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:42:02 compute-0 lvm[72540]: PV /dev/loop4 not used.
Dec 13 03:42:02 compute-0 lvm[72550]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:42:02 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Dec 13 03:42:02 compute-0 sudo[72535]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:02 compute-0 lvm[72552]:   1 logical volume(s) in volume group "ceph_vg1" now active
Dec 13 03:42:02 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Dec 13 03:42:02 compute-0 sudo[72628]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpxbujivcwmcyuwzdhysghfqujlgtmgx ; /usr/bin/python3'
Dec 13 03:42:02 compute-0 sudo[72628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:02 compute-0 python3[72630]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:42:02 compute-0 sudo[72628]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:03 compute-0 sudo[72701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpdcruwehxtfnfymrhptzsziugpjnsld ; /usr/bin/python3'
Dec 13 03:42:03 compute-0 sudo[72701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:03 compute-0 python3[72703]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597322.5287004-36133-57130792700844/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:42:03 compute-0 sudo[72701]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:03 compute-0 sudo[72751]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvbiqklswsbdycdxudwkwohfdsrupwnr ; /usr/bin/python3'
Dec 13 03:42:03 compute-0 sudo[72751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:03 compute-0 python3[72753]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:42:03 compute-0 systemd[1]: Reloading.
Dec 13 03:42:03 compute-0 systemd-sysv-generator[72783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:42:03 compute-0 systemd-rc-local-generator[72777]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:42:03 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 13 03:42:03 compute-0 bash[72792]: /dev/loop4: [64513]:4327964 (/var/lib/ceph-osd-1.img)
Dec 13 03:42:03 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 13 03:42:03 compute-0 lvm[72793]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:42:03 compute-0 lvm[72793]: VG ceph_vg1 finished
Dec 13 03:42:03 compute-0 sudo[72751]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:04 compute-0 sudo[72817]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deixwjpqcsqwprgqlvylgvsyeusfafsj ; /usr/bin/python3'
Dec 13 03:42:04 compute-0 sudo[72817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:04 compute-0 python3[72819]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 03:42:05 compute-0 sudo[72817]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:05 compute-0 sudo[72844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhgmdaycjllepeieiwvmkahvoyiclduq ; /usr/bin/python3'
Dec 13 03:42:05 compute-0 sudo[72844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:06 compute-0 python3[72846]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:42:06 compute-0 sudo[72844]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:06 compute-0 sudo[72870]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzwlteyiafhavcrakngxqpimwheeppox ; /usr/bin/python3'
Dec 13 03:42:06 compute-0 sudo[72870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:06 compute-0 python3[72872]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:42:06 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Dec 13 03:42:06 compute-0 sudo[72870]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:06 compute-0 sudo[72902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-galksnxexqieykhrzabktjlkjeyxfnqs ; /usr/bin/python3'
Dec 13 03:42:06 compute-0 sudo[72902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:06 compute-0 python3[72904]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:42:06 compute-0 lvm[72907]: PV /dev/loop5 not used.
Dec 13 03:42:06 compute-0 lvm[72917]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:42:06 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Dec 13 03:42:06 compute-0 sudo[72902]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:06 compute-0 lvm[72919]:   1 logical volume(s) in volume group "ceph_vg2" now active
Dec 13 03:42:07 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Dec 13 03:42:07 compute-0 sudo[72995]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkqhvbewjvdycgnrijfrkuwvzkfekuxb ; /usr/bin/python3'
Dec 13 03:42:07 compute-0 sudo[72995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:07 compute-0 python3[72997]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:42:07 compute-0 sudo[72995]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:07 compute-0 sudo[73068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crfxfxrscspkprnbvmzjjnxtmzesjiue ; /usr/bin/python3'
Dec 13 03:42:07 compute-0 sudo[73068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:07 compute-0 python3[73070]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597327.1168365-36160-269857771826556/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:42:07 compute-0 sudo[73068]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:08 compute-0 sudo[73118]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpemwcacmrtcakrrrlxqgppsvvjziqlz ; /usr/bin/python3'
Dec 13 03:42:08 compute-0 sudo[73118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:08 compute-0 python3[73120]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:42:08 compute-0 systemd[1]: Reloading.
Dec 13 03:42:08 compute-0 systemd-rc-local-generator[73149]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:42:08 compute-0 systemd-sysv-generator[73153]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:42:08 compute-0 systemd[1]: Starting Ceph OSD losetup...
Dec 13 03:42:08 compute-0 bash[73161]: /dev/loop5: [64513]:4327966 (/var/lib/ceph-osd-2.img)
Dec 13 03:42:08 compute-0 systemd[1]: Finished Ceph OSD losetup.
Dec 13 03:42:08 compute-0 lvm[73162]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:42:08 compute-0 lvm[73162]: VG ceph_vg2 finished
Dec 13 03:42:08 compute-0 sudo[73118]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:10 compute-0 python3[73186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:42:11 compute-0 chronyd[58464]: Selected source 162.159.200.1 (pool.ntp.org)
Dec 13 03:42:12 compute-0 sudo[73277]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmiywzejnrdxxesdftdgeaptkcgnyaln ; /usr/bin/python3'
Dec 13 03:42:12 compute-0 sudo[73277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:12 compute-0 python3[73279]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 03:42:17 compute-0 sudo[73277]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:17 compute-0 sudo[73334]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoiyksrpyiqimnhqbppidvatxxgweafz ; /usr/bin/python3'
Dec 13 03:42:17 compute-0 sudo[73334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:18 compute-0 python3[73336]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec 13 03:42:21 compute-0 groupadd[73346]: group added to /etc/group: name=cephadm, GID=992
Dec 13 03:42:21 compute-0 groupadd[73346]: group added to /etc/gshadow: name=cephadm
Dec 13 03:42:21 compute-0 groupadd[73346]: new group: name=cephadm, GID=992
Dec 13 03:42:21 compute-0 useradd[73353]: new user: name=cephadm, UID=992, GID=992, home=/var/lib/cephadm, shell=/bin/bash, from=none
Dec 13 03:42:21 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:42:21 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:42:22 compute-0 sudo[73334]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:22 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:42:22 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:42:22 compute-0 systemd[1]: run-r39e78e63607049ff8c2d2a7ff7aa6c6a.service: Deactivated successfully.
Dec 13 03:42:22 compute-0 sudo[73453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfjvradqcsvlezlzyivryffqziwkaecf ; /usr/bin/python3'
Dec 13 03:42:22 compute-0 sudo[73453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:22 compute-0 python3[73455]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:42:22 compute-0 sudo[73453]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:22 compute-0 sudo[73481]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urwysdnavqieoaygocjwmohlpobinaze ; /usr/bin/python3'
Dec 13 03:42:22 compute-0 sudo[73481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:22 compute-0 python3[73483]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:42:23 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:42:23 compute-0 sudo[73481]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:23 compute-0 sudo[73519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmkjqqryrlbeinwntdvwyuwgsfcwkdxl ; /usr/bin/python3'
Dec 13 03:42:23 compute-0 sudo[73519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:23 compute-0 python3[73521]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:42:23 compute-0 sudo[73519]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:23 compute-0 sudo[73545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idksighsjfjpreycemhtdwnkcwspasub ; /usr/bin/python3'
Dec 13 03:42:23 compute-0 sudo[73545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:23 compute-0 python3[73547]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:42:23 compute-0 sudo[73545]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:24 compute-0 sudo[73623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuevdleszxrsqgmmrltyevygxgqlqvor ; /usr/bin/python3'
Dec 13 03:42:24 compute-0 sudo[73623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:24 compute-0 python3[73625]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:42:24 compute-0 sudo[73623]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:24 compute-0 sudo[73696]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-camlyazbpqeusdtxjltaonfjitvrpfyj ; /usr/bin/python3'
Dec 13 03:42:24 compute-0 sudo[73696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:24 compute-0 python3[73698]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597344.3314984-36309-70371452736994/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:42:24 compute-0 sudo[73696]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:25 compute-0 sudo[73798]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvjwmgtklosiabimmdusodtwnzrcjznn ; /usr/bin/python3'
Dec 13 03:42:25 compute-0 sudo[73798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:25 compute-0 python3[73800]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:42:25 compute-0 sudo[73798]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:25 compute-0 sudo[73871]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emmjuonombyurholiduywscnzqkskwvo ; /usr/bin/python3'
Dec 13 03:42:25 compute-0 sudo[73871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:26 compute-0 python3[73873]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597345.4352286-36327-72791484720479/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:42:26 compute-0 sudo[73871]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:26 compute-0 sudo[73921]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpkpylkkupmebdmylofknjmlrdgnessp ; /usr/bin/python3'
Dec 13 03:42:26 compute-0 sudo[73921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:26 compute-0 python3[73923]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:42:26 compute-0 sudo[73921]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:26 compute-0 sudo[73949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emzktsecyvhkrlsbcqubfetyqrjljiqt ; /usr/bin/python3'
Dec 13 03:42:26 compute-0 sudo[73949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:26 compute-0 python3[73951]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:42:26 compute-0 sudo[73949]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:26 compute-0 sudo[73977]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcrfzksvdimdrtguamufetfvpjfavhsd ; /usr/bin/python3'
Dec 13 03:42:26 compute-0 sudo[73977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:27 compute-0 python3[73979]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:42:27 compute-0 sudo[73977]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:27 compute-0 sudo[74005]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fifimcawjtafbhzdrselpacwwtwatoak ; /usr/bin/python3'
Dec 13 03:42:27 compute-0 sudo[74005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:42:27 compute-0 python3[74007]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:42:27 compute-0 sshd-session[74011]: Accepted publickey for ceph-admin from 192.168.122.100 port 44242 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:42:27 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 03:42:27 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 03:42:27 compute-0 systemd-logind[796]: New session 19 of user ceph-admin.
Dec 13 03:42:27 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 03:42:27 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 13 03:42:27 compute-0 systemd[74015]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:42:27 compute-0 systemd[74015]: Queued start job for default target Main User Target.
Dec 13 03:42:27 compute-0 systemd[74015]: Created slice User Application Slice.
Dec 13 03:42:27 compute-0 systemd[74015]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 03:42:27 compute-0 systemd[74015]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 03:42:27 compute-0 systemd[74015]: Reached target Paths.
Dec 13 03:42:27 compute-0 systemd[74015]: Reached target Timers.
Dec 13 03:42:27 compute-0 systemd[74015]: Starting D-Bus User Message Bus Socket...
Dec 13 03:42:27 compute-0 systemd[74015]: Starting Create User's Volatile Files and Directories...
Dec 13 03:42:27 compute-0 systemd[74015]: Finished Create User's Volatile Files and Directories.
Dec 13 03:42:27 compute-0 systemd[74015]: Listening on D-Bus User Message Bus Socket.
Dec 13 03:42:27 compute-0 systemd[74015]: Reached target Sockets.
Dec 13 03:42:27 compute-0 systemd[74015]: Reached target Basic System.
Dec 13 03:42:27 compute-0 systemd[74015]: Reached target Main User Target.
Dec 13 03:42:27 compute-0 systemd[74015]: Startup finished in 118ms.
Dec 13 03:42:27 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 13 03:42:27 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Dec 13 03:42:27 compute-0 sshd-session[74011]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:42:27 compute-0 sudo[74031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Dec 13 03:42:27 compute-0 sudo[74031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:42:27 compute-0 sudo[74031]: pam_unix(sudo:session): session closed for user root
Dec 13 03:42:27 compute-0 sshd-session[74030]: Received disconnect from 192.168.122.100 port 44242:11: disconnected by user
Dec 13 03:42:27 compute-0 sshd-session[74030]: Disconnected from user ceph-admin 192.168.122.100 port 44242
Dec 13 03:42:27 compute-0 sshd-session[74011]: pam_unix(sshd:session): session closed for user ceph-admin
Dec 13 03:42:27 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 03:42:27 compute-0 systemd-logind[796]: Session 19 logged out. Waiting for processes to exit.
Dec 13 03:42:27 compute-0 systemd-logind[796]: Removed session 19.
Dec 13 03:42:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:42:28 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:42:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3970802097-lower\x2dmapped.mount: Deactivated successfully.
Dec 13 03:42:38 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Dec 13 03:42:38 compute-0 systemd[74015]: Activating special unit Exit the Session...
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped target Main User Target.
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped target Basic System.
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped target Paths.
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped target Sockets.
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped target Timers.
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped Mark boot as successful after the user session has run 2 minutes.
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 03:42:38 compute-0 systemd[74015]: Closed D-Bus User Message Bus Socket.
Dec 13 03:42:38 compute-0 systemd[74015]: Stopped Create User's Volatile Files and Directories.
Dec 13 03:42:38 compute-0 systemd[74015]: Removed slice User Application Slice.
Dec 13 03:42:38 compute-0 systemd[74015]: Reached target Shutdown.
Dec 13 03:42:38 compute-0 systemd[74015]: Finished Exit the Session.
Dec 13 03:42:38 compute-0 systemd[74015]: Reached target Exit the Session.
Dec 13 03:42:38 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Dec 13 03:42:38 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Dec 13 03:42:38 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Dec 13 03:42:38 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Dec 13 03:42:38 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Dec 13 03:42:38 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Dec 13 03:42:38 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Dec 13 03:43:03 compute-0 podman[74108]: 2025-12-13 03:43:03.188354229 +0000 UTC m=+35.019474670 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:03 compute-0 podman[74196]: 2025-12-13 03:43:03.245321288 +0000 UTC m=+0.028436199 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:03 compute-0 podman[74196]: 2025-12-13 03:43:03.462981127 +0000 UTC m=+0.246096018 container create 933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5 (image=quay.io/ceph/ceph:v20, name=jovial_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:03 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec 13 03:43:03 compute-0 systemd[1]: Started libpod-conmon-933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5.scope.
Dec 13 03:43:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:03 compute-0 podman[74196]: 2025-12-13 03:43:03.827208039 +0000 UTC m=+0.610322960 container init 933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5 (image=quay.io/ceph/ceph:v20, name=jovial_wu, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:03 compute-0 podman[74196]: 2025-12-13 03:43:03.834754005 +0000 UTC m=+0.617868896 container start 933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5 (image=quay.io/ceph/ceph:v20, name=jovial_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:43:03 compute-0 jovial_wu[74212]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 03:43:03 compute-0 systemd[1]: libpod-933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5.scope: Deactivated successfully.
Dec 13 03:43:04 compute-0 podman[74196]: 2025-12-13 03:43:04.072463274 +0000 UTC m=+0.855578165 container attach 933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5 (image=quay.io/ceph/ceph:v20, name=jovial_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 13 03:43:04 compute-0 podman[74196]: 2025-12-13 03:43:04.073239345 +0000 UTC m=+0.856354256 container died 933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5 (image=quay.io/ceph/ceph:v20, name=jovial_wu, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-4210c1fde44c7106285a618ee67de0b2585bd7c5cc0d80f74ea6ebe88413adbe-merged.mount: Deactivated successfully.
Dec 13 03:43:04 compute-0 podman[74196]: 2025-12-13 03:43:04.49937003 +0000 UTC m=+1.282484911 container remove 933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5 (image=quay.io/ceph/ceph:v20, name=jovial_wu, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:04 compute-0 systemd[1]: libpod-conmon-933c8c4e7f8bbc4cb2507a78ab8e77d3b9d13b833ba52131766a6b86d7d506d5.scope: Deactivated successfully.
Dec 13 03:43:04 compute-0 podman[74229]: 2025-12-13 03:43:04.554612303 +0000 UTC m=+0.025782156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:04 compute-0 podman[74229]: 2025-12-13 03:43:04.756786138 +0000 UTC m=+0.227955971 container create 640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252 (image=quay.io/ceph/ceph:v20, name=vigorous_wu, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:05 compute-0 systemd[1]: Started libpod-conmon-640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252.scope.
Dec 13 03:43:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:05 compute-0 podman[74229]: 2025-12-13 03:43:05.260092528 +0000 UTC m=+0.731262391 container init 640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252 (image=quay.io/ceph/ceph:v20, name=vigorous_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:43:05 compute-0 podman[74229]: 2025-12-13 03:43:05.26641337 +0000 UTC m=+0.737583203 container start 640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252 (image=quay.io/ceph/ceph:v20, name=vigorous_wu, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:05 compute-0 vigorous_wu[74245]: 167 167
Dec 13 03:43:05 compute-0 systemd[1]: libpod-640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252.scope: Deactivated successfully.
Dec 13 03:43:05 compute-0 podman[74229]: 2025-12-13 03:43:05.551400683 +0000 UTC m=+1.022570536 container attach 640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252 (image=quay.io/ceph/ceph:v20, name=vigorous_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:43:05 compute-0 podman[74229]: 2025-12-13 03:43:05.552136453 +0000 UTC m=+1.023306316 container died 640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252 (image=quay.io/ceph/ceph:v20, name=vigorous_wu, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c3d520852eff3f2b46d18f89cc6ba0eadd893544d1ab4e6a9f5ec71ce31716c-merged.mount: Deactivated successfully.
Dec 13 03:43:05 compute-0 podman[74229]: 2025-12-13 03:43:05.969434107 +0000 UTC m=+1.440603950 container remove 640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252 (image=quay.io/ceph/ceph:v20, name=vigorous_wu, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:05 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:06 compute-0 systemd[1]: libpod-conmon-640f1b1810f43207b39240cb99b77db56165e2e6abfbfa1a8b95e34fbdef0252.scope: Deactivated successfully.
Dec 13 03:43:06 compute-0 podman[74261]: 2025-12-13 03:43:06.046418164 +0000 UTC m=+0.055539792 container create c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9 (image=quay.io/ceph/ceph:v20, name=silly_boyd, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:06 compute-0 podman[74261]: 2025-12-13 03:43:06.013463903 +0000 UTC m=+0.022585581 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:06 compute-0 systemd[1]: Started libpod-conmon-c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9.scope.
Dec 13 03:43:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:06 compute-0 podman[74261]: 2025-12-13 03:43:06.266534261 +0000 UTC m=+0.275655909 container init c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9 (image=quay.io/ceph/ceph:v20, name=silly_boyd, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:43:06 compute-0 podman[74261]: 2025-12-13 03:43:06.275159357 +0000 UTC m=+0.284280985 container start c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9 (image=quay.io/ceph/ceph:v20, name=silly_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:06 compute-0 podman[74261]: 2025-12-13 03:43:06.287640568 +0000 UTC m=+0.296762216 container attach c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9 (image=quay.io/ceph/ceph:v20, name=silly_boyd, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:06 compute-0 silly_boyd[74276]: AQDK4DxpwU+ZERAA1JOGz2Dr0YhCd+p+P9pl0w==
Dec 13 03:43:06 compute-0 systemd[1]: libpod-c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9.scope: Deactivated successfully.
Dec 13 03:43:06 compute-0 podman[74261]: 2025-12-13 03:43:06.298785803 +0000 UTC m=+0.307907431 container died c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9 (image=quay.io/ceph/ceph:v20, name=silly_boyd, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:06 compute-0 podman[74261]: 2025-12-13 03:43:06.445137441 +0000 UTC m=+0.454259069 container remove c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9 (image=quay.io/ceph/ceph:v20, name=silly_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:06 compute-0 systemd[1]: libpod-conmon-c515e7d63b0e26430bb6d82121ba1cf3f98debe29da01eee2cdfe01628e23ff9.scope: Deactivated successfully.
Dec 13 03:43:06 compute-0 podman[74296]: 2025-12-13 03:43:06.487558962 +0000 UTC m=+0.023944877 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:06 compute-0 podman[74296]: 2025-12-13 03:43:06.694748644 +0000 UTC m=+0.231134539 container create 28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb (image=quay.io/ceph/ceph:v20, name=exciting_shaw, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:43:06 compute-0 systemd[1]: Started libpod-conmon-28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb.scope.
Dec 13 03:43:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:06 compute-0 podman[74296]: 2025-12-13 03:43:06.811588853 +0000 UTC m=+0.347974768 container init 28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb (image=quay.io/ceph/ceph:v20, name=exciting_shaw, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:43:06 compute-0 podman[74296]: 2025-12-13 03:43:06.816799635 +0000 UTC m=+0.353185520 container start 28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb (image=quay.io/ceph/ceph:v20, name=exciting_shaw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:43:06 compute-0 podman[74296]: 2025-12-13 03:43:06.821632248 +0000 UTC m=+0.358018143 container attach 28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb (image=quay.io/ceph/ceph:v20, name=exciting_shaw, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:43:06 compute-0 exciting_shaw[74312]: AQDK4Dxp3qzSMRAA7n9JpKGCn5bJb//F4dbmTg==
Dec 13 03:43:06 compute-0 systemd[1]: libpod-28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb.scope: Deactivated successfully.
Dec 13 03:43:06 compute-0 podman[74296]: 2025-12-13 03:43:06.839393514 +0000 UTC m=+0.375779399 container died 28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb (image=quay.io/ceph/ceph:v20, name=exciting_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:06 compute-0 podman[74296]: 2025-12-13 03:43:06.873188149 +0000 UTC m=+0.409574034 container remove 28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb (image=quay.io/ceph/ceph:v20, name=exciting_shaw, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 03:43:06 compute-0 systemd[1]: libpod-conmon-28692ea58c0732e66e4bf6bc6725e22b1f9ab8ac008209b383ddbe4218ca40eb.scope: Deactivated successfully.
Dec 13 03:43:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-55cb08a6c1c90bb66dae9fe67f49abfb884fd4869c4660adca352141c4d665d6-merged.mount: Deactivated successfully.
Dec 13 03:43:07 compute-0 podman[74331]: 2025-12-13 03:43:06.91815404 +0000 UTC m=+0.023374511 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:07 compute-0 podman[74331]: 2025-12-13 03:43:07.424919163 +0000 UTC m=+0.530139654 container create 098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1 (image=quay.io/ceph/ceph:v20, name=quizzical_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:07 compute-0 systemd[1]: Started libpod-conmon-098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1.scope.
Dec 13 03:43:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:07 compute-0 podman[74331]: 2025-12-13 03:43:07.660469442 +0000 UTC m=+0.765689913 container init 098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1 (image=quay.io/ceph/ceph:v20, name=quizzical_meninsky, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:43:07 compute-0 podman[74331]: 2025-12-13 03:43:07.667277459 +0000 UTC m=+0.772497910 container start 098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1 (image=quay.io/ceph/ceph:v20, name=quizzical_meninsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:43:07 compute-0 quizzical_meninsky[74347]: AQDL4DxpyyEGKRAAvakz1ZcXTz5p19l5i1wssg==
Dec 13 03:43:07 compute-0 systemd[1]: libpod-098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1.scope: Deactivated successfully.
Dec 13 03:43:07 compute-0 podman[74331]: 2025-12-13 03:43:07.824474922 +0000 UTC m=+0.929695403 container attach 098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1 (image=quay.io/ceph/ceph:v20, name=quizzical_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:07 compute-0 podman[74331]: 2025-12-13 03:43:07.824955865 +0000 UTC m=+0.930176346 container died 098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1 (image=quay.io/ceph/ceph:v20, name=quizzical_meninsky, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-98e1ef4467c28c54560bf847f710c65acd92cf620e42810e3cbaa66a714fbb8c-merged.mount: Deactivated successfully.
Dec 13 03:43:07 compute-0 podman[74331]: 2025-12-13 03:43:07.865400703 +0000 UTC m=+0.970621154 container remove 098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1 (image=quay.io/ceph/ceph:v20, name=quizzical_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:07 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:07 compute-0 systemd[1]: libpod-conmon-098c3d26b525680d314d34bd9af9905992778dca205ed4d76e7b8bf37f3789c1.scope: Deactivated successfully.
Dec 13 03:43:08 compute-0 podman[74364]: 2025-12-13 03:43:08.009381405 +0000 UTC m=+0.118139636 container create 4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085 (image=quay.io/ceph/ceph:v20, name=reverent_margulis, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:43:08 compute-0 podman[74364]: 2025-12-13 03:43:07.91791596 +0000 UTC m=+0.026674211 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:08 compute-0 systemd[1]: Started libpod-conmon-4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085.scope.
Dec 13 03:43:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceae7a000e0d57a1b434516bf489e616e508c669f4431b7ddc04d9b1e9d7cb2c/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:08 compute-0 podman[74364]: 2025-12-13 03:43:08.192468577 +0000 UTC m=+0.301226808 container init 4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085 (image=quay.io/ceph/ceph:v20, name=reverent_margulis, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:43:08 compute-0 podman[74364]: 2025-12-13 03:43:08.198153332 +0000 UTC m=+0.306911563 container start 4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085 (image=quay.io/ceph/ceph:v20, name=reverent_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:43:08 compute-0 podman[74364]: 2025-12-13 03:43:08.204921198 +0000 UTC m=+0.313679459 container attach 4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085 (image=quay.io/ceph/ceph:v20, name=reverent_margulis, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:43:08 compute-0 reverent_margulis[74381]: /usr/bin/monmaptool: monmap file /tmp/monmap
Dec 13 03:43:08 compute-0 reverent_margulis[74381]: setting min_mon_release = tentacle
Dec 13 03:43:08 compute-0 reverent_margulis[74381]: /usr/bin/monmaptool: set fsid to 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:08 compute-0 reverent_margulis[74381]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Dec 13 03:43:08 compute-0 systemd[1]: libpod-4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085.scope: Deactivated successfully.
Dec 13 03:43:08 compute-0 podman[74364]: 2025-12-13 03:43:08.23310051 +0000 UTC m=+0.341858751 container died 4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085 (image=quay.io/ceph/ceph:v20, name=reverent_margulis, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceae7a000e0d57a1b434516bf489e616e508c669f4431b7ddc04d9b1e9d7cb2c-merged.mount: Deactivated successfully.
Dec 13 03:43:08 compute-0 podman[74364]: 2025-12-13 03:43:08.615017594 +0000 UTC m=+0.723775845 container remove 4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085 (image=quay.io/ceph/ceph:v20, name=reverent_margulis, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:43:08 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:08 compute-0 systemd[1]: libpod-conmon-4f205cc2d5b5be2f6851ac2d661992d4446736d5e37ff73075acf28dc12d9085.scope: Deactivated successfully.
Dec 13 03:43:08 compute-0 podman[74402]: 2025-12-13 03:43:08.659795721 +0000 UTC m=+0.023515445 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:08 compute-0 podman[74402]: 2025-12-13 03:43:08.792957157 +0000 UTC m=+0.156676871 container create 44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33 (image=quay.io/ceph/ceph:v20, name=gifted_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:43:08 compute-0 systemd[1]: Started libpod-conmon-44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33.scope.
Dec 13 03:43:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f7de47f38b2b8d1247eefd5f90bd5917acb81e5eb540f8fc44f1d010b1ead0/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f7de47f38b2b8d1247eefd5f90bd5917acb81e5eb540f8fc44f1d010b1ead0/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f7de47f38b2b8d1247eefd5f90bd5917acb81e5eb540f8fc44f1d010b1ead0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f7de47f38b2b8d1247eefd5f90bd5917acb81e5eb540f8fc44f1d010b1ead0/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:08 compute-0 podman[74402]: 2025-12-13 03:43:08.971312099 +0000 UTC m=+0.335031843 container init 44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33 (image=quay.io/ceph/ceph:v20, name=gifted_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:43:08 compute-0 podman[74402]: 2025-12-13 03:43:08.977732755 +0000 UTC m=+0.341452479 container start 44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33 (image=quay.io/ceph/ceph:v20, name=gifted_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:09 compute-0 podman[74402]: 2025-12-13 03:43:09.008371104 +0000 UTC m=+0.372090848 container attach 44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33 (image=quay.io/ceph/ceph:v20, name=gifted_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:43:09 compute-0 systemd[1]: libpod-44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33.scope: Deactivated successfully.
Dec 13 03:43:09 compute-0 podman[74402]: 2025-12-13 03:43:09.07325221 +0000 UTC m=+0.436971934 container died 44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33 (image=quay.io/ceph/ceph:v20, name=gifted_dubinsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-05f7de47f38b2b8d1247eefd5f90bd5917acb81e5eb540f8fc44f1d010b1ead0-merged.mount: Deactivated successfully.
Dec 13 03:43:09 compute-0 podman[74402]: 2025-12-13 03:43:09.251299014 +0000 UTC m=+0.615018738 container remove 44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33 (image=quay.io/ceph/ceph:v20, name=gifted_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:09 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:09 compute-0 systemd[1]: libpod-conmon-44176e7178101e3eda79a26f2b7b6069032cf835473ae13948d5a72b70a02a33.scope: Deactivated successfully.
Dec 13 03:43:09 compute-0 systemd[1]: Reloading.
Dec 13 03:43:09 compute-0 systemd-rc-local-generator[74488]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:43:09 compute-0 systemd-sysv-generator[74491]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:43:09 compute-0 systemd[1]: Reloading.
Dec 13 03:43:09 compute-0 systemd-sysv-generator[74526]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:43:09 compute-0 systemd-rc-local-generator[74523]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:43:09 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Dec 13 03:43:09 compute-0 systemd[1]: Reloading.
Dec 13 03:43:10 compute-0 systemd-rc-local-generator[74560]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:43:10 compute-0 systemd-sysv-generator[74564]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:43:10 compute-0 systemd[1]: Reached target Ceph cluster 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:43:10 compute-0 systemd[1]: Reloading.
Dec 13 03:43:10 compute-0 systemd-rc-local-generator[74599]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:43:10 compute-0 systemd-sysv-generator[74602]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:43:10 compute-0 systemd[1]: Reloading.
Dec 13 03:43:10 compute-0 systemd-rc-local-generator[74640]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:43:10 compute-0 systemd-sysv-generator[74644]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:43:10 compute-0 systemd[1]: Created slice Slice /system/ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:43:10 compute-0 systemd[1]: Reached target System Time Set.
Dec 13 03:43:10 compute-0 systemd[1]: Reached target System Time Synchronized.
Dec 13 03:43:10 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:43:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:10 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:11 compute-0 podman[74695]: 2025-12-13 03:43:11.099175654 +0000 UTC m=+0.113692364 container create f46386d3d0917815316dc1ae15e3bc70549e023592741330ffaf2c75bfc11474 (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:11 compute-0 podman[74695]: 2025-12-13 03:43:11.01135507 +0000 UTC m=+0.025871800 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20c5826a6260e04cf3c42b1509c8d00a54d12d39b3a1532e01cf975f5cc1b9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20c5826a6260e04cf3c42b1509c8d00a54d12d39b3a1532e01cf975f5cc1b9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20c5826a6260e04cf3c42b1509c8d00a54d12d39b3a1532e01cf975f5cc1b9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a20c5826a6260e04cf3c42b1509c8d00a54d12d39b3a1532e01cf975f5cc1b9d/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:11 compute-0 podman[74695]: 2025-12-13 03:43:11.312546255 +0000 UTC m=+0.327062985 container init f46386d3d0917815316dc1ae15e3bc70549e023592741330ffaf2c75bfc11474 (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:11 compute-0 podman[74695]: 2025-12-13 03:43:11.318096037 +0000 UTC m=+0.332612747 container start f46386d3d0917815316dc1ae15e3bc70549e023592741330ffaf2c75bfc11474 (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:43:11 compute-0 bash[74695]: f46386d3d0917815316dc1ae15e3bc70549e023592741330ffaf2c75bfc11474
Dec 13 03:43:11 compute-0 systemd[1]: Started Ceph mon.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:43:11 compute-0 ceph-mon[74715]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: pidfile_write: ignore empty --pid-file
Dec 13 03:43:11 compute-0 ceph-mon[74715]: load: jerasure load: lrc 
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Git sha 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: DB SUMMARY
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: DB Session ID:  EN97B35DDW06KPB4LWI5
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                                     Options.env: 0x560c267ad440
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                                Options.info_log: 0x560c282873e0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                                 Options.wal_dir: 
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                    Options.write_buffer_manager: 0x560c28206140
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                               Options.row_cache: None
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                              Options.wal_filter: None
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.wal_compression: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.max_background_jobs: 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Compression algorithms supported:
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kZSTD supported: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:           Options.merge_operator: 
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:        Options.compaction_filter: None
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x560c28212600)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x560c281f78d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:          Options.compression: NoCompression
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.num_levels: 7
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597391371431, "job": 1, "event": "recovery_started", "wal_files": [4]}
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597391435498, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "EN97B35DDW06KPB4LWI5", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597391435682, "job": 1, "event": "recovery_finished"}
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Dec 13 03:43:11 compute-0 podman[74722]: 2025-12-13 03:43:11.45113443 +0000 UTC m=+0.069649288 container create 812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a (image=quay.io/ceph/ceph:v20, name=trusting_hellman, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x560c28224e00
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: DB pointer 0x560c28370000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:43:11 compute-0 ceph-mon[74715]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.064       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x560c281f78d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 9.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 03:43:11 compute-0 ceph-mon[74715]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@-1(???) e0 preinit fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(probing) e0 win_standalone_election
Dec 13 03:43:11 compute-0 ceph-mon[74715]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Dec 13 03:43:11 compute-0 systemd[1]: Started libpod-conmon-812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a.scope.
Dec 13 03:43:11 compute-0 podman[74722]: 2025-12-13 03:43:11.406492378 +0000 UTC m=+0.025007236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0719288481b0b94b46976ca64d8c8867ce05bb56d2a929c22433c68b1f196a66/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0719288481b0b94b46976ca64d8c8867ce05bb56d2a929c22433c68b1f196a66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0719288481b0b94b46976ca64d8c8867ce05bb56d2a929c22433c68b1f196a66/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:43:11 compute-0 podman[74722]: 2025-12-13 03:43:11.641656656 +0000 UTC m=+0.260171534 container init 812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a (image=quay.io/ceph/ceph:v20, name=trusting_hellman, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 03:43:11 compute-0 ceph-mon[74715]: paxos.0).electionLogic(2) init, last seen epoch 2
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 03:43:11 compute-0 podman[74722]: 2025-12-13 03:43:11.651157116 +0000 UTC m=+0.269671964 container start 812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a (image=quay.io/ceph/ceph:v20, name=trusting_hellman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : created 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2025-12-13T03:43:09.019370Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,os=Linux}
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Dec 13 03:43:11 compute-0 podman[74722]: 2025-12-13 03:43:11.655900226 +0000 UTC m=+0.274415094 container attach 812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a (image=quay.io/ceph/ceph:v20, name=trusting_hellman, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).mds e1 new map
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2025-12-13T03:43:11:652968+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mkfs 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 03:43:11 compute-0 ceph-mon[74715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4262554290' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:   cluster:
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     id:     437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     health: HEALTH_OK
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:  
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:   services:
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     mon: 1 daemons, quorum compute-0 (age 0.189643s) [leader: compute-0]
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     mgr: no daemons active
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     osd: 0 osds: 0 up, 0 in
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:  
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:   data:
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     pools:   0 pools, 0 pgs
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     objects: 0 objects, 0 B
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     usage:   0 B used, 0 B / 0 B avail
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:     pgs:     
Dec 13 03:43:11 compute-0 trusting_hellman[74770]:  
Dec 13 03:43:11 compute-0 systemd[1]: libpod-812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a.scope: Deactivated successfully.
Dec 13 03:43:11 compute-0 podman[74722]: 2025-12-13 03:43:11.856077776 +0000 UTC m=+0.474592654 container died 812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a (image=quay.io/ceph/ceph:v20, name=trusting_hellman, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-0719288481b0b94b46976ca64d8c8867ce05bb56d2a929c22433c68b1f196a66-merged.mount: Deactivated successfully.
Dec 13 03:43:12 compute-0 podman[74722]: 2025-12-13 03:43:12.086973887 +0000 UTC m=+0.705488735 container remove 812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a (image=quay.io/ceph/ceph:v20, name=trusting_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:43:12 compute-0 systemd[1]: libpod-conmon-812b656b23921f35e5e9d6dfcaad6cd0a4fc92449a6c8859d39ff2dffe6ad17a.scope: Deactivated successfully.
Dec 13 03:43:12 compute-0 podman[74809]: 2025-12-13 03:43:12.172474268 +0000 UTC m=+0.063901221 container create 9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd (image=quay.io/ceph/ceph:v20, name=fervent_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:43:12 compute-0 systemd[1]: Started libpod-conmon-9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd.scope.
Dec 13 03:43:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d0d8d9d6356273cd91d766ca73554046018c6034e4905a872a47ed7859581/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d0d8d9d6356273cd91d766ca73554046018c6034e4905a872a47ed7859581/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d0d8d9d6356273cd91d766ca73554046018c6034e4905a872a47ed7859581/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d28d0d8d9d6356273cd91d766ca73554046018c6034e4905a872a47ed7859581/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 podman[74809]: 2025-12-13 03:43:12.132298298 +0000 UTC m=+0.023725271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:12 compute-0 podman[74809]: 2025-12-13 03:43:12.28656499 +0000 UTC m=+0.177991964 container init 9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd (image=quay.io/ceph/ceph:v20, name=fervent_antonelli, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:12 compute-0 podman[74809]: 2025-12-13 03:43:12.296366719 +0000 UTC m=+0.187793662 container start 9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd (image=quay.io/ceph/ceph:v20, name=fervent_antonelli, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:43:12 compute-0 podman[74809]: 2025-12-13 03:43:12.312957564 +0000 UTC m=+0.204384537 container attach 9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd (image=quay.io/ceph/ceph:v20, name=fervent_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:12 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 03:43:12 compute-0 ceph-mon[74715]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1608711975' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 03:43:12 compute-0 ceph-mon[74715]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1608711975' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 03:43:12 compute-0 fervent_antonelli[74826]: 
Dec 13 03:43:12 compute-0 fervent_antonelli[74826]: [global]
Dec 13 03:43:12 compute-0 fervent_antonelli[74826]:         fsid = 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:12 compute-0 fervent_antonelli[74826]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 03:43:12 compute-0 fervent_antonelli[74826]:         osd_crush_chooseleaf_type = 0
Dec 13 03:43:12 compute-0 systemd[1]: libpod-9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd.scope: Deactivated successfully.
Dec 13 03:43:12 compute-0 podman[74809]: 2025-12-13 03:43:12.534826897 +0000 UTC m=+0.426253840 container died 9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd (image=quay.io/ceph/ceph:v20, name=fervent_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:43:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-d28d0d8d9d6356273cd91d766ca73554046018c6034e4905a872a47ed7859581-merged.mount: Deactivated successfully.
Dec 13 03:43:12 compute-0 podman[74809]: 2025-12-13 03:43:12.655112791 +0000 UTC m=+0.546539744 container remove 9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd (image=quay.io/ceph/ceph:v20, name=fervent_antonelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:12 compute-0 systemd[1]: libpod-conmon-9d00d51ef8d37ca65f64f820761e999d3f0e71ddc93c00e9fafc4f39127f33fd.scope: Deactivated successfully.
Dec 13 03:43:12 compute-0 ceph-mon[74715]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 03:43:12 compute-0 ceph-mon[74715]: monmap epoch 1
Dec 13 03:43:12 compute-0 ceph-mon[74715]: fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:12 compute-0 ceph-mon[74715]: last_changed 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:12 compute-0 ceph-mon[74715]: created 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:12 compute-0 ceph-mon[74715]: min_mon_release 20 (tentacle)
Dec 13 03:43:12 compute-0 ceph-mon[74715]: election_strategy: 1
Dec 13 03:43:12 compute-0 ceph-mon[74715]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 03:43:12 compute-0 ceph-mon[74715]: fsmap 
Dec 13 03:43:12 compute-0 ceph-mon[74715]: osdmap e1: 0 total, 0 up, 0 in
Dec 13 03:43:12 compute-0 ceph-mon[74715]: mgrmap e1: no daemons active
Dec 13 03:43:12 compute-0 ceph-mon[74715]: from='client.? 192.168.122.100:0/4262554290' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 03:43:12 compute-0 ceph-mon[74715]: from='client.? 192.168.122.100:0/1608711975' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 03:43:12 compute-0 ceph-mon[74715]: from='client.? 192.168.122.100:0/1608711975' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 03:43:12 compute-0 podman[74865]: 2025-12-13 03:43:12.726233808 +0000 UTC m=+0.051657675 container create e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7 (image=quay.io/ceph/ceph:v20, name=boring_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:12 compute-0 podman[74865]: 2025-12-13 03:43:12.696094543 +0000 UTC m=+0.021518440 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:12 compute-0 systemd[1]: Started libpod-conmon-e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7.scope.
Dec 13 03:43:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d354fc505f848a94307f576e706d5bf6c6e776f9aa1619b136082207e3c895f4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d354fc505f848a94307f576e706d5bf6c6e776f9aa1619b136082207e3c895f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d354fc505f848a94307f576e706d5bf6c6e776f9aa1619b136082207e3c895f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d354fc505f848a94307f576e706d5bf6c6e776f9aa1619b136082207e3c895f4/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:12 compute-0 podman[74865]: 2025-12-13 03:43:12.961967892 +0000 UTC m=+0.287391779 container init e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7 (image=quay.io/ceph/ceph:v20, name=boring_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:43:12 compute-0 podman[74865]: 2025-12-13 03:43:12.96957922 +0000 UTC m=+0.295003087 container start e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7 (image=quay.io/ceph/ceph:v20, name=boring_jackson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:12 compute-0 podman[74865]: 2025-12-13 03:43:12.997072652 +0000 UTC m=+0.322496519 container attach e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7 (image=quay.io/ceph/ceph:v20, name=boring_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:43:13 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:43:13 compute-0 ceph-mon[74715]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2320051619' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:43:13 compute-0 systemd[1]: libpod-e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7.scope: Deactivated successfully.
Dec 13 03:43:13 compute-0 podman[74865]: 2025-12-13 03:43:13.160064055 +0000 UTC m=+0.485487932 container died e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7 (image=quay.io/ceph/ceph:v20, name=boring_jackson, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:43:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d354fc505f848a94307f576e706d5bf6c6e776f9aa1619b136082207e3c895f4-merged.mount: Deactivated successfully.
Dec 13 03:43:13 compute-0 podman[74865]: 2025-12-13 03:43:13.32862088 +0000 UTC m=+0.654044747 container remove e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7 (image=quay.io/ceph/ceph:v20, name=boring_jackson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:43:13 compute-0 systemd[1]: libpod-conmon-e1ba620289aece8406472989992301947c6bd78b64367816efd2e8558f89fcf7.scope: Deactivated successfully.
Dec 13 03:43:13 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:43:13 compute-0 ceph-mon[74715]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 03:43:13 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 03:43:13 compute-0 ceph-mon[74715]: mon.compute-0@0(leader) e1 shutdown
Dec 13 03:43:13 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0[74711]: 2025-12-13T03:43:13.673+0000 7f884eb00640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Dec 13 03:43:13 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0[74711]: 2025-12-13T03:43:13.673+0000 7f884eb00640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Dec 13 03:43:13 compute-0 ceph-mon[74715]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 03:43:13 compute-0 ceph-mon[74715]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 03:43:13 compute-0 podman[74951]: 2025-12-13 03:43:13.858601379 +0000 UTC m=+0.388691722 container died f46386d3d0917815316dc1ae15e3bc70549e023592741330ffaf2c75bfc11474 (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a20c5826a6260e04cf3c42b1509c8d00a54d12d39b3a1532e01cf975f5cc1b9d-merged.mount: Deactivated successfully.
Dec 13 03:43:14 compute-0 podman[74951]: 2025-12-13 03:43:14.369199358 +0000 UTC m=+0.899289701 container remove f46386d3d0917815316dc1ae15e3bc70549e023592741330ffaf2c75bfc11474 (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:43:14 compute-0 bash[74951]: ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0
Dec 13 03:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:14 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec 13 03:43:14 compute-0 systemd[1]: ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@mon.compute-0.service: Deactivated successfully.
Dec 13 03:43:14 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:43:14 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:43:14 compute-0 podman[75051]: 2025-12-13 03:43:14.721992106 +0000 UTC m=+0.048471698 container create 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:14 compute-0 podman[75051]: 2025-12-13 03:43:14.69438556 +0000 UTC m=+0.020865182 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd8fc5f68fdd86c9909803e31de2c7ee012ffd75aa158953730ca89a9baf70b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd8fc5f68fdd86c9909803e31de2c7ee012ffd75aa158953730ca89a9baf70b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd8fc5f68fdd86c9909803e31de2c7ee012ffd75aa158953730ca89a9baf70b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dcd8fc5f68fdd86c9909803e31de2c7ee012ffd75aa158953730ca89a9baf70b/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:14 compute-0 podman[75051]: 2025-12-13 03:43:14.886702346 +0000 UTC m=+0.213181998 container init 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:14 compute-0 podman[75051]: 2025-12-13 03:43:14.8948905 +0000 UTC m=+0.221370082 container start 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:14 compute-0 bash[75051]: 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe
Dec 13 03:43:14 compute-0 systemd[1]: Started Ceph mon.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:43:14 compute-0 ceph-mon[75071]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:43:14 compute-0 ceph-mon[75071]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: pidfile_write: ignore empty --pid-file
Dec 13 03:43:14 compute-0 ceph-mon[75071]: load: jerasure load: lrc 
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Git sha 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: DB SUMMARY
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: DB Session ID:  20WVHNV90XXY2OOY7BGG
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60237 ; 
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                                     Options.env: 0x556f7b877440
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                                      Options.fs: PosixFileSystem
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                                Options.info_log: 0x556f7ce1be80
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                                 Options.wal_dir: 
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                    Options.write_buffer_manager: 0x556f7ce66140
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                               Options.row_cache: None
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                              Options.wal_filter: None
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.wal_compression: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.max_background_jobs: 2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.max_total_wal_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:       Options.compaction_readahead_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Compression algorithms supported:
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kZSTD supported: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:           Options.merge_operator: 
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:        Options.compaction_filter: None
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556f7ce72a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x556f7ce578d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:        Options.write_buffer_size: 33554432
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:  Options.max_write_buffer_number: 2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:          Options.compression: NoCompression
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.num_levels: 7
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597394943671, "job": 1, "event": "recovery_started", "wal_files": [9]}
Dec 13 03:43:14 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597394999651, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59958, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58436, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55788, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597394, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597394999904, "job": 1, "event": "recovery_finished"}
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Dec 13 03:43:15 compute-0 podman[75072]: 2025-12-13 03:43:15.009343303 +0000 UTC m=+0.079469757 container create cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17 (image=quay.io/ceph/ceph:v20, name=cool_cannon, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:15 compute-0 podman[75072]: 2025-12-13 03:43:14.961146854 +0000 UTC m=+0.031273328 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556f7ce84e00
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: DB pointer 0x556f7cfce000
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:43:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.2 total, 0.2 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.29 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.29 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556f7ce578d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 03:43:15 compute-0 ceph-mon[75071]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???) e1 preinit fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???).mds e1 new map
Dec 13 03:43:15 compute-0 systemd[1]: Started libpod-conmon-cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17.scope.
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2025-12-13T03:43:11.652968+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@0(probing) e1 win_standalone_election
Dec 13 03:43:15 compute-0 ceph-mon[75071]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : monmap epoch 1
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : last_changed 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : created 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : election_strategy: 1
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : fsmap 
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Dec 13 03:43:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Dec 13 03:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97f5c998d85f2f77ce5acee8a7b3fb4d3e8441d07341510ae00580aeaf7f4ea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97f5c998d85f2f77ce5acee8a7b3fb4d3e8441d07341510ae00580aeaf7f4ea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b97f5c998d85f2f77ce5acee8a7b3fb4d3e8441d07341510ae00580aeaf7f4ea/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:15 compute-0 podman[75072]: 2025-12-13 03:43:15.212853924 +0000 UTC m=+0.282980398 container init cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17 (image=quay.io/ceph/ceph:v20, name=cool_cannon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:43:15 compute-0 podman[75072]: 2025-12-13 03:43:15.219019404 +0000 UTC m=+0.289145858 container start cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17 (image=quay.io/ceph/ceph:v20, name=cool_cannon, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Dec 13 03:43:15 compute-0 ceph-mon[75071]: monmap epoch 1
Dec 13 03:43:15 compute-0 ceph-mon[75071]: fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:15 compute-0 ceph-mon[75071]: last_changed 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:15 compute-0 ceph-mon[75071]: created 2025-12-13T03:43:08.228709+0000
Dec 13 03:43:15 compute-0 ceph-mon[75071]: min_mon_release 20 (tentacle)
Dec 13 03:43:15 compute-0 ceph-mon[75071]: election_strategy: 1
Dec 13 03:43:15 compute-0 ceph-mon[75071]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Dec 13 03:43:15 compute-0 ceph-mon[75071]: fsmap 
Dec 13 03:43:15 compute-0 ceph-mon[75071]: osdmap e1: 0 total, 0 up, 0 in
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mgrmap e1: no daemons active
Dec 13 03:43:15 compute-0 podman[75072]: 2025-12-13 03:43:15.341956118 +0000 UTC m=+0.412082572 container attach cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17 (image=quay.io/ceph/ceph:v20, name=cool_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:43:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Dec 13 03:43:15 compute-0 systemd[1]: libpod-cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17.scope: Deactivated successfully.
Dec 13 03:43:15 compute-0 podman[75072]: 2025-12-13 03:43:15.471148026 +0000 UTC m=+0.541274580 container died cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17 (image=quay.io/ceph/ceph:v20, name=cool_cannon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b97f5c998d85f2f77ce5acee8a7b3fb4d3e8441d07341510ae00580aeaf7f4ea-merged.mount: Deactivated successfully.
Dec 13 03:43:15 compute-0 podman[75072]: 2025-12-13 03:43:15.693811862 +0000 UTC m=+0.763938326 container remove cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17 (image=quay.io/ceph/ceph:v20, name=cool_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 13 03:43:15 compute-0 systemd[1]: libpod-conmon-cf7246c97de180f87510327f060dcbb2fc43ecc6b899e090b198d03be4387d17.scope: Deactivated successfully.
Dec 13 03:43:15 compute-0 podman[75164]: 2025-12-13 03:43:15.815505265 +0000 UTC m=+0.096085523 container create da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1 (image=quay.io/ceph/ceph:v20, name=wonderful_wilbur, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:15 compute-0 podman[75164]: 2025-12-13 03:43:15.746633768 +0000 UTC m=+0.027214026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:15 compute-0 systemd[1]: Started libpod-conmon-da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1.scope.
Dec 13 03:43:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2c2bbf0e29b01b719a64877a793e89d2967279e4cfe85f85837901d57d14bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2c2bbf0e29b01b719a64877a793e89d2967279e4cfe85f85837901d57d14bc/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed2c2bbf0e29b01b719a64877a793e89d2967279e4cfe85f85837901d57d14bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:15 compute-0 podman[75164]: 2025-12-13 03:43:15.901248271 +0000 UTC m=+0.181828539 container init da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1 (image=quay.io/ceph/ceph:v20, name=wonderful_wilbur, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:15 compute-0 podman[75164]: 2025-12-13 03:43:15.908544871 +0000 UTC m=+0.189125139 container start da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1 (image=quay.io/ceph/ceph:v20, name=wonderful_wilbur, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:15 compute-0 podman[75164]: 2025-12-13 03:43:15.912192791 +0000 UTC m=+0.192773049 container attach da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1 (image=quay.io/ceph/ceph:v20, name=wonderful_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:43:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Dec 13 03:43:16 compute-0 systemd[1]: libpod-da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1.scope: Deactivated successfully.
Dec 13 03:43:16 compute-0 podman[75164]: 2025-12-13 03:43:16.128756829 +0000 UTC m=+0.409337087 container died da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1 (image=quay.io/ceph/ceph:v20, name=wonderful_wilbur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed2c2bbf0e29b01b719a64877a793e89d2967279e4cfe85f85837901d57d14bc-merged.mount: Deactivated successfully.
Dec 13 03:43:16 compute-0 podman[75164]: 2025-12-13 03:43:16.286873298 +0000 UTC m=+0.567453586 container remove da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1 (image=quay.io/ceph/ceph:v20, name=wonderful_wilbur, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:16 compute-0 systemd[1]: libpod-conmon-da8ba20e521008a3d9e28370b1dba2b0d128282ed3daaef22f44f8021bd0a7e1.scope: Deactivated successfully.
Dec 13 03:43:16 compute-0 systemd[1]: Reloading.
Dec 13 03:43:16 compute-0 systemd-rc-local-generator[75245]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:43:16 compute-0 systemd-sysv-generator[75248]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:43:16 compute-0 systemd[1]: Reloading.
Dec 13 03:43:16 compute-0 systemd-rc-local-generator[75285]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:43:16 compute-0 systemd-sysv-generator[75288]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:43:17 compute-0 systemd[1]: Starting Ceph mgr.compute-0.gsxkyu for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:43:17 compute-0 podman[75341]: 2025-12-13 03:43:17.376412266 +0000 UTC m=+0.043038135 container create d213c0a518897cf272ba29cc4b1ed6d1ae1480bd182192db0dde4304937fe76a (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f607541e4ae0ebe775a1ecf2df4d1aa7fbb4f317f67a2abf56325f1ff6957a07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f607541e4ae0ebe775a1ecf2df4d1aa7fbb4f317f67a2abf56325f1ff6957a07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f607541e4ae0ebe775a1ecf2df4d1aa7fbb4f317f67a2abf56325f1ff6957a07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f607541e4ae0ebe775a1ecf2df4d1aa7fbb4f317f67a2abf56325f1ff6957a07/merged/var/lib/ceph/mgr/ceph-compute-0.gsxkyu supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:17 compute-0 podman[75341]: 2025-12-13 03:43:17.443500693 +0000 UTC m=+0.110126562 container init d213c0a518897cf272ba29cc4b1ed6d1ae1480bd182192db0dde4304937fe76a (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:43:17 compute-0 podman[75341]: 2025-12-13 03:43:17.447857166 +0000 UTC m=+0.114483035 container start d213c0a518897cf272ba29cc4b1ed6d1ae1480bd182192db0dde4304937fe76a (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:43:17 compute-0 podman[75341]: 2025-12-13 03:43:17.355787899 +0000 UTC m=+0.022413788 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:17 compute-0 bash[75341]: d213c0a518897cf272ba29cc4b1ed6d1ae1480bd182192db0dde4304937fe76a
Dec 13 03:43:17 compute-0 systemd[1]: Started Ceph mgr.compute-0.gsxkyu for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:43:17 compute-0 ceph-mgr[75360]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:43:17 compute-0 ceph-mgr[75360]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 03:43:17 compute-0 ceph-mgr[75360]: pidfile_write: ignore empty --pid-file
Dec 13 03:43:17 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'alerts'
Dec 13 03:43:17 compute-0 podman[75361]: 2025-12-13 03:43:17.552499962 +0000 UTC m=+0.061030297 container create f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c (image=quay.io/ceph/ceph:v20, name=wizardly_banach, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:17 compute-0 podman[75361]: 2025-12-13 03:43:17.524414053 +0000 UTC m=+0.032944408 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:17 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'balancer'
Dec 13 03:43:17 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'cephadm'
Dec 13 03:43:18 compute-0 systemd[1]: Started libpod-conmon-f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c.scope.
Dec 13 03:43:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891609554bd298a3f13dac1dd214f76a1e3555c31efb46d4a185800da353616d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891609554bd298a3f13dac1dd214f76a1e3555c31efb46d4a185800da353616d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/891609554bd298a3f13dac1dd214f76a1e3555c31efb46d4a185800da353616d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:18 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'crash'
Dec 13 03:43:18 compute-0 podman[75361]: 2025-12-13 03:43:18.735441559 +0000 UTC m=+1.243971944 container init f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c (image=quay.io/ceph/ceph:v20, name=wizardly_banach, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:43:18 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'dashboard'
Dec 13 03:43:18 compute-0 podman[75361]: 2025-12-13 03:43:18.742927201 +0000 UTC m=+1.251457526 container start f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c (image=quay.io/ceph/ceph:v20, name=wizardly_banach, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:43:18 compute-0 podman[75361]: 2025-12-13 03:43:18.770212667 +0000 UTC m=+1.278743002 container attach f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c (image=quay.io/ceph/ceph:v20, name=wizardly_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:43:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 03:43:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1077129725' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:18 compute-0 wizardly_banach[75397]: 
Dec 13 03:43:18 compute-0 wizardly_banach[75397]: {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "health": {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "status": "HEALTH_OK",
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "checks": {},
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "mutes": []
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     },
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "election_epoch": 5,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "quorum": [
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         0
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     ],
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "quorum_names": [
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "compute-0"
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     ],
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "quorum_age": 3,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "monmap": {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "epoch": 1,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "min_mon_release_name": "tentacle",
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_mons": 1
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     },
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "osdmap": {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "epoch": 1,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_osds": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_up_osds": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "osd_up_since": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_in_osds": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "osd_in_since": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_remapped_pgs": 0
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     },
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "pgmap": {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "pgs_by_state": [],
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_pgs": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_pools": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_objects": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "data_bytes": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "bytes_used": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "bytes_avail": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "bytes_total": 0
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     },
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "fsmap": {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "epoch": 1,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "btime": "2025-12-13T03:43:11:652968+0000",
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "by_rank": [],
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "up:standby": 0
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     },
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "mgrmap": {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "available": false,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "num_standbys": 0,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "modules": [
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:             "iostat",
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:             "nfs"
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         ],
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "services": {}
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     },
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "servicemap": {
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "epoch": 1,
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "modified": "2025-12-13T03:43:11.655212+0000",
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:         "services": {}
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     },
Dec 13 03:43:18 compute-0 wizardly_banach[75397]:     "progress_events": {}
Dec 13 03:43:18 compute-0 wizardly_banach[75397]: }
Dec 13 03:43:18 compute-0 systemd[1]: libpod-f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c.scope: Deactivated successfully.
Dec 13 03:43:19 compute-0 podman[75361]: 2025-12-13 03:43:19.000192585 +0000 UTC m=+1.508722940 container died f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c (image=quay.io/ceph/ceph:v20, name=wizardly_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 03:43:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-891609554bd298a3f13dac1dd214f76a1e3555c31efb46d4a185800da353616d-merged.mount: Deactivated successfully.
Dec 13 03:43:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1077129725' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:19 compute-0 podman[75361]: 2025-12-13 03:43:19.041285153 +0000 UTC m=+1.549815488 container remove f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c (image=quay.io/ceph/ceph:v20, name=wizardly_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:19 compute-0 systemd[1]: libpod-conmon-f3a22d8fd03ed6d7970b6625c5f18a2e037b458524e20c0123802b3ebd594b4c.scope: Deactivated successfully.
Dec 13 03:43:19 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'devicehealth'
Dec 13 03:43:19 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 03:43:19 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 03:43:19 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 03:43:19 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]:   from numpy import show_config as show_numpy_config
Dec 13 03:43:19 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'influx'
Dec 13 03:43:19 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'insights'
Dec 13 03:43:20 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'iostat'
Dec 13 03:43:20 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'k8sevents'
Dec 13 03:43:20 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'localpool'
Dec 13 03:43:20 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 03:43:20 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'mirroring'
Dec 13 03:43:21 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'nfs'
Dec 13 03:43:21 compute-0 podman[75446]: 2025-12-13 03:43:21.096755475 +0000 UTC m=+0.026589597 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:21 compute-0 podman[75446]: 2025-12-13 03:43:21.350034045 +0000 UTC m=+0.279868147 container create 6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31 (image=quay.io/ceph/ceph:v20, name=great_pasteur, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:43:21 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'orchestrator'
Dec 13 03:43:21 compute-0 systemd[1]: Started libpod-conmon-6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31.scope.
Dec 13 03:43:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767040b225bc888211211110ba6b641f30fd45ba7ff2a8de09d89abbd9c52ed9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767040b225bc888211211110ba6b641f30fd45ba7ff2a8de09d89abbd9c52ed9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/767040b225bc888211211110ba6b641f30fd45ba7ff2a8de09d89abbd9c52ed9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:21 compute-0 podman[75446]: 2025-12-13 03:43:21.462130662 +0000 UTC m=+0.391964794 container init 6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31 (image=quay.io/ceph/ceph:v20, name=great_pasteur, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:43:21 compute-0 podman[75446]: 2025-12-13 03:43:21.469352587 +0000 UTC m=+0.399186689 container start 6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31 (image=quay.io/ceph/ceph:v20, name=great_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:43:21 compute-0 podman[75446]: 2025-12-13 03:43:21.474131863 +0000 UTC m=+0.403965965 container attach 6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31 (image=quay.io/ceph/ceph:v20, name=great_pasteur, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default)
Dec 13 03:43:21 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 03:43:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 03:43:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3659799776' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:21 compute-0 great_pasteur[75462]: 
Dec 13 03:43:21 compute-0 great_pasteur[75462]: {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "health": {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "status": "HEALTH_OK",
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "checks": {},
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "mutes": []
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     },
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "election_epoch": 5,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "quorum": [
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         0
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     ],
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "quorum_names": [
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "compute-0"
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     ],
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "quorum_age": 6,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "monmap": {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "epoch": 1,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "min_mon_release_name": "tentacle",
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_mons": 1
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     },
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "osdmap": {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "epoch": 1,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_osds": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_up_osds": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "osd_up_since": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_in_osds": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "osd_in_since": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_remapped_pgs": 0
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     },
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "pgmap": {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "pgs_by_state": [],
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_pgs": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_pools": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_objects": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "data_bytes": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "bytes_used": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "bytes_avail": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "bytes_total": 0
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     },
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "fsmap": {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "epoch": 1,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "btime": "2025-12-13T03:43:11:652968+0000",
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "by_rank": [],
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "up:standby": 0
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     },
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "mgrmap": {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "available": false,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "num_standbys": 0,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "modules": [
Dec 13 03:43:21 compute-0 great_pasteur[75462]:             "iostat",
Dec 13 03:43:21 compute-0 great_pasteur[75462]:             "nfs"
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         ],
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "services": {}
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     },
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "servicemap": {
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "epoch": 1,
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "modified": "2025-12-13T03:43:11.655212+0000",
Dec 13 03:43:21 compute-0 great_pasteur[75462]:         "services": {}
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     },
Dec 13 03:43:21 compute-0 great_pasteur[75462]:     "progress_events": {}
Dec 13 03:43:21 compute-0 great_pasteur[75462]: }
Dec 13 03:43:21 compute-0 systemd[1]: libpod-6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31.scope: Deactivated successfully.
Dec 13 03:43:21 compute-0 podman[75446]: 2025-12-13 03:43:21.682725343 +0000 UTC m=+0.612559475 container died 6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31 (image=quay.io/ceph/ceph:v20, name=great_pasteur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:43:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3659799776' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:21 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'osd_support'
Dec 13 03:43:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-767040b225bc888211211110ba6b641f30fd45ba7ff2a8de09d89abbd9c52ed9-merged.mount: Deactivated successfully.
Dec 13 03:43:21 compute-0 podman[75446]: 2025-12-13 03:43:21.774926954 +0000 UTC m=+0.704761056 container remove 6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31 (image=quay.io/ceph/ceph:v20, name=great_pasteur, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:43:21 compute-0 systemd[1]: libpod-conmon-6420585aa8bf5fece39b5cdf30ff2a6c9ee3824e4f42a966a5112223bd826c31.scope: Deactivated successfully.
Dec 13 03:43:21 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 03:43:21 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'progress'
Dec 13 03:43:21 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'prometheus'
Dec 13 03:43:22 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'rbd_support'
Dec 13 03:43:22 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'rgw'
Dec 13 03:43:22 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'rook'
Dec 13 03:43:23 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'selftest'
Dec 13 03:43:23 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'smb'
Dec 13 03:43:23 compute-0 podman[75500]: 2025-12-13 03:43:23.855470579 +0000 UTC m=+0.053519113 container create c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2 (image=quay.io/ceph/ceph:v20, name=keen_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:43:23 compute-0 systemd[1]: Started libpod-conmon-c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2.scope.
Dec 13 03:43:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0737b1a2598343b22ad1dd101a498b210b46af1600164f3197d948115cd094/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0737b1a2598343b22ad1dd101a498b210b46af1600164f3197d948115cd094/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b0737b1a2598343b22ad1dd101a498b210b46af1600164f3197d948115cd094/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:23 compute-0 podman[75500]: 2025-12-13 03:43:23.833291538 +0000 UTC m=+0.031340092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:23 compute-0 podman[75500]: 2025-12-13 03:43:23.938711174 +0000 UTC m=+0.136759728 container init c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2 (image=quay.io/ceph/ceph:v20, name=keen_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:43:23 compute-0 podman[75500]: 2025-12-13 03:43:23.943642415 +0000 UTC m=+0.141690949 container start c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2 (image=quay.io/ceph/ceph:v20, name=keen_northcutt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:43:23 compute-0 podman[75500]: 2025-12-13 03:43:23.946494386 +0000 UTC m=+0.144542940 container attach c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2 (image=quay.io/ceph/ceph:v20, name=keen_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:43:23 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'snap_schedule'
Dec 13 03:43:24 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'stats'
Dec 13 03:43:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 03:43:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764071504' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:24 compute-0 keen_northcutt[75517]: 
Dec 13 03:43:24 compute-0 keen_northcutt[75517]: {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "health": {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "status": "HEALTH_OK",
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "checks": {},
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "mutes": []
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     },
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "election_epoch": 5,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "quorum": [
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         0
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     ],
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "quorum_names": [
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "compute-0"
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     ],
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "quorum_age": 8,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "monmap": {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "epoch": 1,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "min_mon_release_name": "tentacle",
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_mons": 1
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     },
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "osdmap": {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "epoch": 1,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_osds": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_up_osds": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "osd_up_since": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_in_osds": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "osd_in_since": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_remapped_pgs": 0
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     },
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "pgmap": {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "pgs_by_state": [],
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_pgs": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_pools": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_objects": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "data_bytes": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "bytes_used": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "bytes_avail": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "bytes_total": 0
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     },
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "fsmap": {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "epoch": 1,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "btime": "2025-12-13T03:43:11.652968+0000",
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "by_rank": [],
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "up:standby": 0
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     },
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "mgrmap": {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "available": false,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "num_standbys": 0,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "modules": [
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:             "iostat",
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:             "nfs"
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         ],
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "services": {}
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     },
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "servicemap": {
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "epoch": 1,
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "modified": "2025-12-13T03:43:11.655212+0000",
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:         "services": {}
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     },
Dec 13 03:43:24 compute-0 keen_northcutt[75517]:     "progress_events": {}
Dec 13 03:43:24 compute-0 keen_northcutt[75517]: }
Dec 13 03:43:24 compute-0 systemd[1]: libpod-c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2.scope: Deactivated successfully.
Dec 13 03:43:24 compute-0 podman[75500]: 2025-12-13 03:43:24.142476577 +0000 UTC m=+0.340525111 container died c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2 (image=quay.io/ceph/ceph:v20, name=keen_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:43:24 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'status'
Dec 13 03:43:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b0737b1a2598343b22ad1dd101a498b210b46af1600164f3197d948115cd094-merged.mount: Deactivated successfully.
Dec 13 03:43:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2764071504' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:24 compute-0 podman[75500]: 2025-12-13 03:43:24.183690449 +0000 UTC m=+0.381738983 container remove c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2 (image=quay.io/ceph/ceph:v20, name=keen_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:24 compute-0 systemd[1]: libpod-conmon-c512e3eec105ae7f62292a697282487fbf6122da28b130a5e5729dbd3b7d88b2.scope: Deactivated successfully.
Dec 13 03:43:24 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'telegraf'
Dec 13 03:43:24 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'telemetry'
Dec 13 03:43:24 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 03:43:24 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'volumes'
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: ms_deliver_dispatch: unhandled message 0x55a995afd860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gsxkyu
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr handle_mgr_map Activating!
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.gsxkyu(active, starting, since 0.0121997s)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr handle_mgr_map I am now activating
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gsxkyu", "id": "compute-0.gsxkyu"} v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr metadata", "who": "compute-0.gsxkyu", "id": "compute-0.gsxkyu"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: balancer
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: crash
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Manager daemon compute-0.gsxkyu is now available
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [balancer INFO root] Starting
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:43:25
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [balancer INFO root] No pools available
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: devicehealth
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: iostat
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Starting
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: nfs
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: orchestrator
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: pg_autoscaler
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: progress
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [progress INFO root] Loading...
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [progress INFO root] No stored events to load
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [progress INFO root] Loaded [] historic events
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] recovery thread starting
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] starting setup
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: rbd_support
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: status
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: telemetry
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/mirror_snapshot_schedule"} v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/mirror_snapshot_schedule"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] PerfHandler: starting
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TaskHandler: starting
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/trash_purge_schedule"} v 0)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/trash_purge_schedule"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: [rbd_support INFO root] setup complete
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Dec 13 03:43:25 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: volumes
Dec 13 03:43:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:25 compute-0 ceph-mon[75071]: Activating manager daemon compute-0.gsxkyu
Dec 13 03:43:25 compute-0 ceph-mon[75071]: mgrmap e2: compute-0.gsxkyu(active, starting, since 0.0121997s)
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr metadata", "who": "compute-0.gsxkyu", "id": "compute-0.gsxkyu"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: Manager daemon compute-0.gsxkyu is now available
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/mirror_snapshot_schedule"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/trash_purge_schedule"} : dispatch
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:25 compute-0 ceph-mon[75071]: from='mgr.14102 192.168.122.100:0/718423875' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.gsxkyu(active, since 1.02569s)
Dec 13 03:43:26 compute-0 podman[75632]: 2025-12-13 03:43:26.245179592 +0000 UTC m=+0.039394862 container create 9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272 (image=quay.io/ceph/ceph:v20, name=compassionate_saha, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:43:26 compute-0 systemd[1]: Started libpod-conmon-9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272.scope.
Dec 13 03:43:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e52ebe98476b16f3d3f3ed7ad3fab91ea9988d1a0fd6d764ab5998e5677bdc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e52ebe98476b16f3d3f3ed7ad3fab91ea9988d1a0fd6d764ab5998e5677bdc4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e52ebe98476b16f3d3f3ed7ad3fab91ea9988d1a0fd6d764ab5998e5677bdc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:26 compute-0 podman[75632]: 2025-12-13 03:43:26.311915468 +0000 UTC m=+0.106130748 container init 9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272 (image=quay.io/ceph/ceph:v20, name=compassionate_saha, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:43:26 compute-0 podman[75632]: 2025-12-13 03:43:26.317175278 +0000 UTC m=+0.111390548 container start 9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272 (image=quay.io/ceph/ceph:v20, name=compassionate_saha, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:43:26 compute-0 podman[75632]: 2025-12-13 03:43:26.321867331 +0000 UTC m=+0.116082601 container attach 9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272 (image=quay.io/ceph/ceph:v20, name=compassionate_saha, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:43:26 compute-0 podman[75632]: 2025-12-13 03:43:26.229140075 +0000 UTC m=+0.023355365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Dec 13 03:43:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2636761812' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:26 compute-0 compassionate_saha[75648]: 
Dec 13 03:43:26 compute-0 compassionate_saha[75648]: {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "health": {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "status": "HEALTH_OK",
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "checks": {},
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "mutes": []
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     },
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "election_epoch": 5,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "quorum": [
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         0
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     ],
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "quorum_names": [
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "compute-0"
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     ],
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "quorum_age": 11,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "monmap": {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "epoch": 1,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "min_mon_release_name": "tentacle",
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_mons": 1
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     },
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "osdmap": {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "epoch": 1,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_osds": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_up_osds": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "osd_up_since": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_in_osds": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "osd_in_since": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_remapped_pgs": 0
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     },
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "pgmap": {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "pgs_by_state": [],
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_pgs": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_pools": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_objects": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "data_bytes": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "bytes_used": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "bytes_avail": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "bytes_total": 0
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     },
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "fsmap": {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "epoch": 1,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "btime": "2025-12-13T03:43:11.652968+0000",
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "by_rank": [],
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "up:standby": 0
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     },
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "mgrmap": {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "available": true,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "num_standbys": 0,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "modules": [
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:             "iostat",
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:             "nfs"
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         ],
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "services": {}
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     },
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "servicemap": {
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "epoch": 1,
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "modified": "2025-12-13T03:43:11.655212+0000",
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:         "services": {}
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     },
Dec 13 03:43:26 compute-0 compassionate_saha[75648]:     "progress_events": {}
Dec 13 03:43:26 compute-0 compassionate_saha[75648]: }
Dec 13 03:43:26 compute-0 systemd[1]: libpod-9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272.scope: Deactivated successfully.
Dec 13 03:43:26 compute-0 podman[75632]: 2025-12-13 03:43:26.832513577 +0000 UTC m=+0.626728847 container died 9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272 (image=quay.io/ceph/ceph:v20, name=compassionate_saha, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:43:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-7e52ebe98476b16f3d3f3ed7ad3fab91ea9988d1a0fd6d764ab5998e5677bdc4-merged.mount: Deactivated successfully.
Dec 13 03:43:26 compute-0 podman[75632]: 2025-12-13 03:43:26.877760023 +0000 UTC m=+0.671975303 container remove 9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272 (image=quay.io/ceph/ceph:v20, name=compassionate_saha, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:26 compute-0 systemd[1]: libpod-conmon-9c39a1f4b8622125bdeee6506e27872004cb846d5691604b06b6b188a78bb272.scope: Deactivated successfully.
Dec 13 03:43:26 compute-0 podman[75686]: 2025-12-13 03:43:26.93638456 +0000 UTC m=+0.038494625 container create 630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c (image=quay.io/ceph/ceph:v20, name=compassionate_poitras, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:26 compute-0 systemd[1]: Started libpod-conmon-630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c.scope.
Dec 13 03:43:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeedda424034f65c9dba5216e656f24a9d60803f3d14ce190f4643c43302fcb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeedda424034f65c9dba5216e656f24a9d60803f3d14ce190f4643c43302fcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeedda424034f65c9dba5216e656f24a9d60803f3d14ce190f4643c43302fcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aeedda424034f65c9dba5216e656f24a9d60803f3d14ce190f4643c43302fcb/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:27 compute-0 podman[75686]: 2025-12-13 03:43:27.001515442 +0000 UTC m=+0.103625537 container init 630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c (image=quay.io/ceph/ceph:v20, name=compassionate_poitras, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:27 compute-0 podman[75686]: 2025-12-13 03:43:27.007396109 +0000 UTC m=+0.109506194 container start 630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c (image=quay.io/ceph/ceph:v20, name=compassionate_poitras, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:43:27 compute-0 podman[75686]: 2025-12-13 03:43:27.0113062 +0000 UTC m=+0.113416295 container attach 630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c (image=quay.io/ceph/ceph:v20, name=compassionate_poitras, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:27 compute-0 podman[75686]: 2025-12-13 03:43:26.919930252 +0000 UTC m=+0.022040337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:27 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:27 compute-0 ceph-mon[75071]: mgrmap e3: compute-0.gsxkyu(active, since 1.02569s)
Dec 13 03:43:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2636761812' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Dec 13 03:43:27 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.gsxkyu(active, since 2s)
Dec 13 03:43:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 03:43:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2171073386' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 03:43:27 compute-0 compassionate_poitras[75702]: 
Dec 13 03:43:27 compute-0 compassionate_poitras[75702]: [global]
Dec 13 03:43:27 compute-0 compassionate_poitras[75702]:         fsid = 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:27 compute-0 compassionate_poitras[75702]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Dec 13 03:43:27 compute-0 compassionate_poitras[75702]:         osd_crush_chooseleaf_type = 0
Dec 13 03:43:27 compute-0 systemd[1]: libpod-630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c.scope: Deactivated successfully.
Dec 13 03:43:27 compute-0 conmon[75702]: conmon 630c931a504e4da68d85 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c.scope/container/memory.events
Dec 13 03:43:27 compute-0 podman[75686]: 2025-12-13 03:43:27.424438285 +0000 UTC m=+0.526548370 container died 630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c (image=quay.io/ceph/ceph:v20, name=compassionate_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3aeedda424034f65c9dba5216e656f24a9d60803f3d14ce190f4643c43302fcb-merged.mount: Deactivated successfully.
Dec 13 03:43:27 compute-0 podman[75686]: 2025-12-13 03:43:27.457177955 +0000 UTC m=+0.559288020 container remove 630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c (image=quay.io/ceph/ceph:v20, name=compassionate_poitras, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:27 compute-0 systemd[1]: libpod-conmon-630c931a504e4da68d85d7ffc9fb7a10785180faf1801a2acb5afce7e38ec93c.scope: Deactivated successfully.
Dec 13 03:43:27 compute-0 podman[75739]: 2025-12-13 03:43:27.512644282 +0000 UTC m=+0.037187238 container create c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d (image=quay.io/ceph/ceph:v20, name=clever_meitner, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:43:27 compute-0 systemd[1]: Started libpod-conmon-c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d.scope.
Dec 13 03:43:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5c7cd17db62d0ea3fd3fb8034ff98a0c06ab7e5fced6b93b341bce2afabe78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5c7cd17db62d0ea3fd3fb8034ff98a0c06ab7e5fced6b93b341bce2afabe78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6b5c7cd17db62d0ea3fd3fb8034ff98a0c06ab7e5fced6b93b341bce2afabe78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:27 compute-0 podman[75739]: 2025-12-13 03:43:27.572524214 +0000 UTC m=+0.097067190 container init c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d (image=quay.io/ceph/ceph:v20, name=clever_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:43:27 compute-0 podman[75739]: 2025-12-13 03:43:27.577720172 +0000 UTC m=+0.102263128 container start c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d (image=quay.io/ceph/ceph:v20, name=clever_meitner, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:43:27 compute-0 podman[75739]: 2025-12-13 03:43:27.583363422 +0000 UTC m=+0.107906388 container attach c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d (image=quay.io/ceph/ceph:v20, name=clever_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:43:27 compute-0 podman[75739]: 2025-12-13 03:43:27.496286707 +0000 UTC m=+0.020829693 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Dec 13 03:43:28 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1638701264' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 03:43:28 compute-0 ceph-mon[75071]: mgrmap e4: compute-0.gsxkyu(active, since 2s)
Dec 13 03:43:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2171073386' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 03:43:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1638701264' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Dec 13 03:43:29 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:29 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1638701264' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr handle_mgr_map respawning because set of enabled modules changed!
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  e: '/usr/bin/ceph-mgr'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  0: '/usr/bin/ceph-mgr'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  1: '-n'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  2: 'mgr.compute-0.gsxkyu'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  3: '-f'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  4: '--setuser'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  5: 'ceph'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  6: '--setgroup'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  7: 'ceph'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  8: '--default-log-to-file=false'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  9: '--default-log-to-journald=true'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  10: '--default-log-to-stderr=false'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr respawn  exe_path /proc/self/exe
Dec 13 03:43:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.gsxkyu(active, since 5s)
Dec 13 03:43:30 compute-0 systemd[1]: libpod-c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d.scope: Deactivated successfully.
Dec 13 03:43:30 compute-0 podman[75739]: 2025-12-13 03:43:30.120398724 +0000 UTC m=+2.644941690 container died c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d (image=quay.io/ceph/ceph:v20, name=clever_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-6b5c7cd17db62d0ea3fd3fb8034ff98a0c06ab7e5fced6b93b341bce2afabe78-merged.mount: Deactivated successfully.
Dec 13 03:43:30 compute-0 podman[75739]: 2025-12-13 03:43:30.164083065 +0000 UTC m=+2.688626031 container remove c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d (image=quay.io/ceph/ceph:v20, name=clever_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:43:30 compute-0 systemd[1]: libpod-conmon-c193448ab0a1838b319f868ff4a9a26e11dd70d61fbd5db16d8fce833d0ae29d.scope: Deactivated successfully.
Dec 13 03:43:30 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: ignoring --setuser ceph since I am not root
Dec 13 03:43:30 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: ignoring --setgroup ceph since I am not root
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: pidfile_write: ignore empty --pid-file
Dec 13 03:43:30 compute-0 podman[75798]: 2025-12-13 03:43:30.222427504 +0000 UTC m=+0.039404331 container create e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc (image=quay.io/ceph/ceph:v20, name=priceless_hermann, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'alerts'
Dec 13 03:43:30 compute-0 systemd[1]: Started libpod-conmon-e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc.scope.
Dec 13 03:43:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11f3a6957c5a1585a08d9d89786d720e2961fa621f992bb39bb3b15b2db5e1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11f3a6957c5a1585a08d9d89786d720e2961fa621f992bb39bb3b15b2db5e1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a11f3a6957c5a1585a08d9d89786d720e2961fa621f992bb39bb3b15b2db5e1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:30 compute-0 podman[75798]: 2025-12-13 03:43:30.292518196 +0000 UTC m=+0.109495043 container init e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc (image=quay.io/ceph/ceph:v20, name=priceless_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:30 compute-0 podman[75798]: 2025-12-13 03:43:30.298190657 +0000 UTC m=+0.115167484 container start e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc (image=quay.io/ceph/ceph:v20, name=priceless_hermann, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:43:30 compute-0 podman[75798]: 2025-12-13 03:43:30.204789113 +0000 UTC m=+0.021765960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:30 compute-0 podman[75798]: 2025-12-13 03:43:30.313617627 +0000 UTC m=+0.130594474 container attach e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc (image=quay.io/ceph/ceph:v20, name=priceless_hermann, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'balancer'
Dec 13 03:43:30 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'cephadm'
Dec 13 03:43:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 03:43:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1343384608' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 03:43:30 compute-0 priceless_hermann[75832]: {
Dec 13 03:43:30 compute-0 priceless_hermann[75832]:     "epoch": 5,
Dec 13 03:43:30 compute-0 priceless_hermann[75832]:     "available": true,
Dec 13 03:43:30 compute-0 priceless_hermann[75832]:     "active_name": "compute-0.gsxkyu",
Dec 13 03:43:30 compute-0 priceless_hermann[75832]:     "num_standby": 0
Dec 13 03:43:30 compute-0 priceless_hermann[75832]: }
Dec 13 03:43:30 compute-0 systemd[1]: libpod-e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc.scope: Deactivated successfully.
Dec 13 03:43:30 compute-0 podman[75798]: 2025-12-13 03:43:30.7967327 +0000 UTC m=+0.613709527 container died e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc (image=quay.io/ceph/ceph:v20, name=priceless_hermann, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:43:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a11f3a6957c5a1585a08d9d89786d720e2961fa621f992bb39bb3b15b2db5e1-merged.mount: Deactivated successfully.
Dec 13 03:43:30 compute-0 podman[75798]: 2025-12-13 03:43:30.835813391 +0000 UTC m=+0.652790218 container remove e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc (image=quay.io/ceph/ceph:v20, name=priceless_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:43:30 compute-0 systemd[1]: libpod-conmon-e1f1b81d2afcc0fe7669c9bd85a220fb4324ad334c62b57b4295b0922be4ffdc.scope: Deactivated successfully.
Dec 13 03:43:30 compute-0 podman[75874]: 2025-12-13 03:43:30.900817259 +0000 UTC m=+0.045200346 container create 7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101 (image=quay.io/ceph/ceph:v20, name=pedantic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:30 compute-0 systemd[1]: Started libpod-conmon-7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101.scope.
Dec 13 03:43:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f9cbd1dfda18d741e1808b30490557ec7b1e4b99c626a07b2d4859507ee681/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f9cbd1dfda18d741e1808b30490557ec7b1e4b99c626a07b2d4859507ee681/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f9cbd1dfda18d741e1808b30490557ec7b1e4b99c626a07b2d4859507ee681/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:30 compute-0 podman[75874]: 2025-12-13 03:43:30.878228606 +0000 UTC m=+0.022611723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:30 compute-0 podman[75874]: 2025-12-13 03:43:30.978571679 +0000 UTC m=+0.122954786 container init 7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101 (image=quay.io/ceph/ceph:v20, name=pedantic_mclaren, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:30 compute-0 podman[75874]: 2025-12-13 03:43:30.983339215 +0000 UTC m=+0.127722292 container start 7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101 (image=quay.io/ceph/ceph:v20, name=pedantic_mclaren, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:30 compute-0 podman[75874]: 2025-12-13 03:43:30.988464001 +0000 UTC m=+0.132847088 container attach 7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101 (image=quay.io/ceph/ceph:v20, name=pedantic_mclaren, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1638701264' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Dec 13 03:43:31 compute-0 ceph-mon[75071]: mgrmap e5: compute-0.gsxkyu(active, since 5s)
Dec 13 03:43:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1343384608' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 03:43:31 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'crash'
Dec 13 03:43:31 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'dashboard'
Dec 13 03:43:32 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'devicehealth'
Dec 13 03:43:32 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 03:43:32 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 03:43:32 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 03:43:32 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]:   from numpy import show_config as show_numpy_config
Dec 13 03:43:32 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'influx'
Dec 13 03:43:32 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'insights'
Dec 13 03:43:32 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'iostat'
Dec 13 03:43:32 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'k8sevents'
Dec 13 03:43:33 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'localpool'
Dec 13 03:43:33 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 03:43:33 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'mirroring'
Dec 13 03:43:33 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'nfs'
Dec 13 03:43:33 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'orchestrator'
Dec 13 03:43:34 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 03:43:34 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'osd_support'
Dec 13 03:43:34 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 03:43:34 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'progress'
Dec 13 03:43:34 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'prometheus'
Dec 13 03:43:34 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'rbd_support'
Dec 13 03:43:34 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'rgw'
Dec 13 03:43:35 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'rook'
Dec 13 03:43:35 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'selftest'
Dec 13 03:43:35 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'smb'
Dec 13 03:43:36 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'snap_schedule'
Dec 13 03:43:36 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'stats'
Dec 13 03:43:36 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'status'
Dec 13 03:43:36 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'telegraf'
Dec 13 03:43:36 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'telemetry'
Dec 13 03:43:36 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'test_orchestrator'
Dec 13 03:43:37 compute-0 ceph-mgr[75360]: mgr[py] Loading python module 'volumes'
Dec 13 03:43:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Active manager daemon compute-0.gsxkyu restarted
Dec 13 03:43:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Dec 13 03:43:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:43:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.gsxkyu
Dec 13 03:43:37 compute-0 ceph-mgr[75360]: ms_deliver_dispatch: unhandled message 0x55af297e2000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: mgr handle_mgr_map Activating!
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: mgr handle_mgr_map I am now activating
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.gsxkyu(active, starting, since 3s)
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e1 all = 1
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.gsxkyu", "id": "compute-0.gsxkyu"} v 0)
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr metadata", "who": "compute-0.gsxkyu", "id": "compute-0.gsxkyu"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: balancer
Dec 13 03:43:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Manager daemon compute-0.gsxkyu is now available
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Starting
Dec 13 03:43:40 compute-0 ceph-mon[75071]: Active manager daemon compute-0.gsxkyu restarted
Dec 13 03:43:40 compute-0 ceph-mon[75071]: Activating manager daemon compute-0.gsxkyu
Dec 13 03:43:40 compute-0 ceph-mon[75071]: osdmap e2: 0 total, 0 up, 0 in
Dec 13 03:43:40 compute-0 ceph-mon[75071]: mgrmap e6: compute-0.gsxkyu(active, starting, since 3s)
Dec 13 03:43:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mds metadata"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr metadata", "who": "compute-0.gsxkyu", "id": "compute-0.gsxkyu"} : dispatch
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:43:40
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:43:40 compute-0 ceph-mgr[75360]: [balancer INFO root] No pools available
Dec 13 03:43:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Dec 13 03:43:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.gsxkyu(active, since 4s)
Dec 13 03:43:41 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 13 03:43:41 compute-0 ceph-mon[75071]: Manager daemon compute-0.gsxkyu is now available
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Dec 13 03:43:42 compute-0 pedantic_mclaren[75896]: {
Dec 13 03:43:42 compute-0 pedantic_mclaren[75896]:     "mgrmap_epoch": 7,
Dec 13 03:43:42 compute-0 pedantic_mclaren[75896]:     "initialized": true
Dec 13 03:43:42 compute-0 pedantic_mclaren[75896]: }
Dec 13 03:43:42 compute-0 systemd[1]: libpod-7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101.scope: Deactivated successfully.
Dec 13 03:43:42 compute-0 podman[75874]: 2025-12-13 03:43:42.04886268 +0000 UTC m=+11.193245787 container died 7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101 (image=quay.io/ceph/ceph:v20, name=pedantic_mclaren, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Dec 13 03:43:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f5f9cbd1dfda18d741e1808b30490557ec7b1e4b99c626a07b2d4859507ee681-merged.mount: Deactivated successfully.
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: cephadm
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: crash
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: devicehealth
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Starting
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: iostat
Dec 13 03:43:42 compute-0 podman[75874]: 2025-12-13 03:43:42.252178199 +0000 UTC m=+11.396561286 container remove 7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101 (image=quay.io/ceph/ceph:v20, name=pedantic_mclaren, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: nfs
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: orchestrator
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: pg_autoscaler
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:42 compute-0 systemd[1]: libpod-conmon-7cbe792f72573ff149dd1b035cfc790690eafb188877b04edf8c8585aa389101.scope: Deactivated successfully.
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: progress
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [progress INFO root] Loading...
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [progress INFO root] No stored events to load
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [progress INFO root] Loaded [] historic events
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [progress INFO root] Loaded OSDMap, ready.
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] recovery thread starting
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] starting setup
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/mirror_snapshot_schedule"} v 0)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/mirror_snapshot_schedule"} : dispatch
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: rbd_support
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: status
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: telemetry
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] PerfHandler: starting
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TaskHandler: starting
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/trash_purge_schedule"} v 0)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/trash_purge_schedule"} : dispatch
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] setup complete
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr load Constructed class from module: volumes
Dec 13 03:43:42 compute-0 podman[75994]: 2025-12-13 03:43:42.340819839 +0000 UTC m=+0.055755726 container create 9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9 (image=quay.io/ceph/ceph:v20, name=dazzling_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:43:42 compute-0 systemd[1]: Started libpod-conmon-9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9.scope.
Dec 13 03:43:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357ca01400e20720286a4320621f169189f6768a531461ed04023766ec6793a/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357ca01400e20720286a4320621f169189f6768a531461ed04023766ec6793a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5357ca01400e20720286a4320621f169189f6768a531461ed04023766ec6793a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:42 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:42 compute-0 podman[75994]: 2025-12-13 03:43:42.320450721 +0000 UTC m=+0.035386628 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:42 compute-0 podman[75994]: 2025-12-13 03:43:42.4228178 +0000 UTC m=+0.137753707 container init 9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9 (image=quay.io/ceph/ceph:v20, name=dazzling_wescoff, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:43:42 compute-0 podman[75994]: 2025-12-13 03:43:42.428526252 +0000 UTC m=+0.143462139 container start 9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9 (image=quay.io/ceph/ceph:v20, name=dazzling_wescoff, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:43:42 compute-0 podman[75994]: 2025-12-13 03:43:42.433117963 +0000 UTC m=+0.148053850 container attach 9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9 (image=quay.io/ceph/ceph:v20, name=dazzling_wescoff, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Dec 13 03:43:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1750433974' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: [cephadm INFO cherrypy.error] [13/Dec/2025:03:43:43] ENGINE Bus STARTING
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : [13/Dec/2025:03:43:43] ENGINE Bus STARTING
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: [cephadm INFO cherrypy.error] [13/Dec/2025:03:43:43] ENGINE Serving on https://192.168.122.100:7150
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : [13/Dec/2025:03:43:43] ENGINE Serving on https://192.168.122.100:7150
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: [cephadm INFO cherrypy.error] [13/Dec/2025:03:43:43] ENGINE Client ('192.168.122.100', 49678) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : [13/Dec/2025:03:43:43] ENGINE Client ('192.168.122.100', 49678) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: [cephadm INFO cherrypy.error] [13/Dec/2025:03:43:43] ENGINE Serving on http://192.168.122.100:8765
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : [13/Dec/2025:03:43:43] ENGINE Serving on http://192.168.122.100:8765
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: [cephadm INFO cherrypy.error] [13/Dec/2025:03:43:43] ENGINE Bus STARTED
Dec 13 03:43:43 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : [13/Dec/2025:03:43:43] ENGINE Bus STARTED
Dec 13 03:43:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 03:43:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: mgrmap e7: compute-0.gsxkyu(active, since 4s)
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/mirror_snapshot_schedule"} : dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.gsxkyu/trash_purge_schedule"} : dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1750433974' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Dec 13 03:43:43 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1750433974' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 03:43:43 compute-0 dazzling_wescoff[76059]: module 'orchestrator' is already enabled (always-on)
Dec 13 03:43:43 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.gsxkyu(active, since 6s)
Dec 13 03:43:43 compute-0 systemd[1]: libpod-9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9.scope: Deactivated successfully.
Dec 13 03:43:43 compute-0 podman[75994]: 2025-12-13 03:43:43.533856775 +0000 UTC m=+1.248792662 container died 9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9 (image=quay.io/ceph/ceph:v20, name=dazzling_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-5357ca01400e20720286a4320621f169189f6768a531461ed04023766ec6793a-merged.mount: Deactivated successfully.
Dec 13 03:43:43 compute-0 podman[75994]: 2025-12-13 03:43:43.576205199 +0000 UTC m=+1.291141086 container remove 9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9 (image=quay.io/ceph/ceph:v20, name=dazzling_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:43:43 compute-0 systemd[1]: libpod-conmon-9ae7816ea21b67b45a82b9bb6161002589bec1fba351b9bf9acccebaf2ec68c9.scope: Deactivated successfully.
Dec 13 03:43:43 compute-0 podman[76121]: 2025-12-13 03:43:43.657640603 +0000 UTC m=+0.058454672 container create 01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d (image=quay.io/ceph/ceph:v20, name=amazing_fermat, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:43 compute-0 podman[76121]: 2025-12-13 03:43:43.628033082 +0000 UTC m=+0.028847171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:43 compute-0 systemd[1]: Started libpod-conmon-01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d.scope.
Dec 13 03:43:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ca597fcb83f9067384d3d5c59ae73ad270c7f31009423258649c3d6dc27c4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ca597fcb83f9067384d3d5c59ae73ad270c7f31009423258649c3d6dc27c4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ca597fcb83f9067384d3d5c59ae73ad270c7f31009423258649c3d6dc27c4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:43 compute-0 podman[76121]: 2025-12-13 03:43:43.799703213 +0000 UTC m=+0.200517302 container init 01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d (image=quay.io/ceph/ceph:v20, name=amazing_fermat, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:43:43 compute-0 podman[76121]: 2025-12-13 03:43:43.80454114 +0000 UTC m=+0.205355209 container start 01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d (image=quay.io/ceph/ceph:v20, name=amazing_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:43 compute-0 podman[76121]: 2025-12-13 03:43:43.809692246 +0000 UTC m=+0.210506365 container attach 01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d (image=quay.io/ceph/ceph:v20, name=amazing_fermat, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Dec 13 03:43:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 03:43:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:44 compute-0 systemd[1]: libpod-01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d.scope: Deactivated successfully.
Dec 13 03:43:44 compute-0 podman[76121]: 2025-12-13 03:43:44.258474704 +0000 UTC m=+0.659288773 container died 01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d (image=quay.io/ceph/ceph:v20, name=amazing_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-72ca597fcb83f9067384d3d5c59ae73ad270c7f31009423258649c3d6dc27c4b-merged.mount: Deactivated successfully.
Dec 13 03:43:44 compute-0 podman[76121]: 2025-12-13 03:43:44.295435195 +0000 UTC m=+0.696249264 container remove 01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d (image=quay.io/ceph/ceph:v20, name=amazing_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:43:44 compute-0 systemd[1]: libpod-conmon-01fbc3902f91a05b3e9ab74691cd70e8b37339e79d0667570eb7cc4016e2816d.scope: Deactivated successfully.
Dec 13 03:43:44 compute-0 podman[76176]: 2025-12-13 03:43:44.354195415 +0000 UTC m=+0.039958486 container create 95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934 (image=quay.io/ceph/ceph:v20, name=interesting_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:44 compute-0 systemd[1]: Started libpod-conmon-95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934.scope.
Dec 13 03:43:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a5cac567122596e8c38fdd3b302403c1980d9bb2a01ba72ea743b061841f41/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a5cac567122596e8c38fdd3b302403c1980d9bb2a01ba72ea743b061841f41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/92a5cac567122596e8c38fdd3b302403c1980d9bb2a01ba72ea743b061841f41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:44 compute-0 podman[76176]: 2025-12-13 03:43:44.41979392 +0000 UTC m=+0.105556991 container init 95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934 (image=quay.io/ceph/ceph:v20, name=interesting_fermi, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:43:44 compute-0 podman[76176]: 2025-12-13 03:43:44.425384359 +0000 UTC m=+0.111147420 container start 95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934 (image=quay.io/ceph/ceph:v20, name=interesting_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:43:44 compute-0 podman[76176]: 2025-12-13 03:43:44.428199729 +0000 UTC m=+0.113962820 container attach 95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934 (image=quay.io/ceph/ceph:v20, name=interesting_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Dec 13 03:43:44 compute-0 podman[76176]: 2025-12-13 03:43:44.33610172 +0000 UTC m=+0.021864811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:44 compute-0 ceph-mon[75071]: Found migration_current of "None". Setting to last migration.
Dec 13 03:43:44 compute-0 ceph-mon[75071]: [13/Dec/2025:03:43:43] ENGINE Bus STARTING
Dec 13 03:43:44 compute-0 ceph-mon[75071]: [13/Dec/2025:03:43:43] ENGINE Serving on https://192.168.122.100:7150
Dec 13 03:43:44 compute-0 ceph-mon[75071]: [13/Dec/2025:03:43:43] ENGINE Client ('192.168.122.100', 49678) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Dec 13 03:43:44 compute-0 ceph-mon[75071]: [13/Dec/2025:03:43:43] ENGINE Serving on http://192.168.122.100:8765
Dec 13 03:43:44 compute-0 ceph-mon[75071]: [13/Dec/2025:03:43:43] ENGINE Bus STARTED
Dec 13 03:43:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1750433974' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Dec 13 03:43:44 compute-0 ceph-mon[75071]: mgrmap e8: compute-0.gsxkyu(active, since 6s)
Dec 13 03:43:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.gsxkyu(active, since 7s)
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Dec 13 03:43:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: [cephadm INFO root] Set ssh ssh_user
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Dec 13 03:43:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Dec 13 03:43:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: [cephadm INFO root] Set ssh ssh_config
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Dec 13 03:43:44 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Dec 13 03:43:44 compute-0 interesting_fermi[76193]: ssh user set to ceph-admin. sudo will be used
Dec 13 03:43:44 compute-0 systemd[1]: libpod-95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934.scope: Deactivated successfully.
Dec 13 03:43:44 compute-0 podman[76176]: 2025-12-13 03:43:44.906326231 +0000 UTC m=+0.592089292 container died 95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934 (image=quay.io/ceph/ceph:v20, name=interesting_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:43:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-92a5cac567122596e8c38fdd3b302403c1980d9bb2a01ba72ea743b061841f41-merged.mount: Deactivated successfully.
Dec 13 03:43:44 compute-0 podman[76176]: 2025-12-13 03:43:44.944745693 +0000 UTC m=+0.630508754 container remove 95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934 (image=quay.io/ceph/ceph:v20, name=interesting_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:43:44 compute-0 systemd[1]: libpod-conmon-95fee9ef8be3be76352a022e6eb3f860cce9288105e0ee4e3e04b07ee8739934.scope: Deactivated successfully.
Dec 13 03:43:45 compute-0 podman[76230]: 2025-12-13 03:43:45.006707285 +0000 UTC m=+0.042838990 container create 3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664 (image=quay.io/ceph/ceph:v20, name=focused_cartwright, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:45 compute-0 systemd[1]: Started libpod-conmon-3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664.scope.
Dec 13 03:43:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1e7d9d6f6436831b62ecb4e5d5ca5bd5ada646c6f1cba5ef4d24d660f3392/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1e7d9d6f6436831b62ecb4e5d5ca5bd5ada646c6f1cba5ef4d24d660f3392/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1e7d9d6f6436831b62ecb4e5d5ca5bd5ada646c6f1cba5ef4d24d660f3392/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1e7d9d6f6436831b62ecb4e5d5ca5bd5ada646c6f1cba5ef4d24d660f3392/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea1e7d9d6f6436831b62ecb4e5d5ca5bd5ada646c6f1cba5ef4d24d660f3392/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 podman[76230]: 2025-12-13 03:43:45.070281651 +0000 UTC m=+0.106413376 container init 3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664 (image=quay.io/ceph/ceph:v20, name=focused_cartwright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:43:45 compute-0 podman[76230]: 2025-12-13 03:43:45.076772967 +0000 UTC m=+0.112904682 container start 3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664 (image=quay.io/ceph/ceph:v20, name=focused_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:45 compute-0 podman[76230]: 2025-12-13 03:43:45.080511032 +0000 UTC m=+0.116642747 container attach 3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664 (image=quay.io/ceph/ceph:v20, name=focused_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:45 compute-0 podman[76230]: 2025-12-13 03:43:44.986558642 +0000 UTC m=+0.022690407 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019898488 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:43:45 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Dec 13 03:43:45 compute-0 ceph-mon[75071]: from='client.14136 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:45 compute-0 ceph-mon[75071]: mgrmap e9: compute-0.gsxkyu(active, since 7s)
Dec 13 03:43:45 compute-0 ceph-mon[75071]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:45 compute-0 ceph-mon[75071]: Set ssh ssh_user
Dec 13 03:43:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:45 compute-0 ceph-mon[75071]: Set ssh ssh_config
Dec 13 03:43:45 compute-0 ceph-mon[75071]: ssh user set to ceph-admin. sudo will be used
Dec 13 03:43:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:45 compute-0 ceph-mgr[75360]: [cephadm INFO root] Set ssh ssh_identity_key
Dec 13 03:43:45 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Dec 13 03:43:45 compute-0 ceph-mgr[75360]: [cephadm INFO root] Set ssh private key
Dec 13 03:43:45 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Set ssh private key
Dec 13 03:43:45 compute-0 systemd[1]: libpod-3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664.scope: Deactivated successfully.
Dec 13 03:43:45 compute-0 podman[76230]: 2025-12-13 03:43:45.583905703 +0000 UTC m=+0.620037408 container died 3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664 (image=quay.io/ceph/ceph:v20, name=focused_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:43:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ea1e7d9d6f6436831b62ecb4e5d5ca5bd5ada646c6f1cba5ef4d24d660f3392-merged.mount: Deactivated successfully.
Dec 13 03:43:45 compute-0 podman[76230]: 2025-12-13 03:43:45.616615842 +0000 UTC m=+0.652747547 container remove 3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664 (image=quay.io/ceph/ceph:v20, name=focused_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:43:45 compute-0 systemd[1]: libpod-conmon-3b72b852cbc88839a7d9b12b686ecb2d11e7071ded9c182ce66db23a01060664.scope: Deactivated successfully.
Dec 13 03:43:45 compute-0 podman[76282]: 2025-12-13 03:43:45.67069309 +0000 UTC m=+0.036017145 container create 2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372 (image=quay.io/ceph/ceph:v20, name=pedantic_ardinghelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:43:45 compute-0 systemd[1]: Started libpod-conmon-2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372.scope.
Dec 13 03:43:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84007a31f81248b395a86c726d3dd6d0e1e643aba63c40661d60a90e9bddcfc3/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84007a31f81248b395a86c726d3dd6d0e1e643aba63c40661d60a90e9bddcfc3/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84007a31f81248b395a86c726d3dd6d0e1e643aba63c40661d60a90e9bddcfc3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84007a31f81248b395a86c726d3dd6d0e1e643aba63c40661d60a90e9bddcfc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84007a31f81248b395a86c726d3dd6d0e1e643aba63c40661d60a90e9bddcfc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:45 compute-0 podman[76282]: 2025-12-13 03:43:45.72591918 +0000 UTC m=+0.091243245 container init 2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372 (image=quay.io/ceph/ceph:v20, name=pedantic_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:45 compute-0 podman[76282]: 2025-12-13 03:43:45.734790932 +0000 UTC m=+0.100114967 container start 2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372 (image=quay.io/ceph/ceph:v20, name=pedantic_ardinghelli, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:43:45 compute-0 podman[76282]: 2025-12-13 03:43:45.738462886 +0000 UTC m=+0.103786921 container attach 2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372 (image=quay.io/ceph/ceph:v20, name=pedantic_ardinghelli, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:43:45 compute-0 podman[76282]: 2025-12-13 03:43:45.654478149 +0000 UTC m=+0.019802204 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:46 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Dec 13 03:43:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:46 compute-0 ceph-mgr[75360]: [cephadm INFO root] Set ssh ssh_identity_pub
Dec 13 03:43:46 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Dec 13 03:43:46 compute-0 systemd[1]: libpod-2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372.scope: Deactivated successfully.
Dec 13 03:43:46 compute-0 podman[76282]: 2025-12-13 03:43:46.135537314 +0000 UTC m=+0.500861349 container died 2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372 (image=quay.io/ceph/ceph:v20, name=pedantic_ardinghelli, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:43:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-84007a31f81248b395a86c726d3dd6d0e1e643aba63c40661d60a90e9bddcfc3-merged.mount: Deactivated successfully.
Dec 13 03:43:46 compute-0 podman[76282]: 2025-12-13 03:43:46.176092297 +0000 UTC m=+0.541416332 container remove 2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372 (image=quay.io/ceph/ceph:v20, name=pedantic_ardinghelli, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:43:46 compute-0 systemd[1]: libpod-conmon-2d51347aec7cd9c6ed1265e062daeb9321d1bd574bf456ad8c2774f2bcbc7372.scope: Deactivated successfully.
Dec 13 03:43:46 compute-0 podman[76336]: 2025-12-13 03:43:46.233379046 +0000 UTC m=+0.036978473 container create 715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765 (image=quay.io/ceph/ceph:v20, name=funny_shirley, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:43:46 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:46 compute-0 systemd[1]: Started libpod-conmon-715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765.scope.
Dec 13 03:43:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1974e1d176a3d0a656ec9c2be6afc93a3da2072bff903a9db4dbda3b607fcbd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1974e1d176a3d0a656ec9c2be6afc93a3da2072bff903a9db4dbda3b607fcbd0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1974e1d176a3d0a656ec9c2be6afc93a3da2072bff903a9db4dbda3b607fcbd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:46 compute-0 podman[76336]: 2025-12-13 03:43:46.305048973 +0000 UTC m=+0.108648420 container init 715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765 (image=quay.io/ceph/ceph:v20, name=funny_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:46 compute-0 podman[76336]: 2025-12-13 03:43:46.310833027 +0000 UTC m=+0.114432454 container start 715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765 (image=quay.io/ceph/ceph:v20, name=funny_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:43:46 compute-0 podman[76336]: 2025-12-13 03:43:46.218103381 +0000 UTC m=+0.021702838 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:46 compute-0 podman[76336]: 2025-12-13 03:43:46.314413189 +0000 UTC m=+0.118012646 container attach 715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765 (image=quay.io/ceph/ceph:v20, name=funny_shirley, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:46 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:46 compute-0 ceph-mon[75071]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:46 compute-0 ceph-mon[75071]: Set ssh ssh_identity_key
Dec 13 03:43:46 compute-0 ceph-mon[75071]: Set ssh private key
Dec 13 03:43:46 compute-0 ceph-mon[75071]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:46 compute-0 ceph-mon[75071]: Set ssh ssh_identity_pub
Dec 13 03:43:46 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:46 compute-0 funny_shirley[76353]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCrbmKlS0uF4Nuzl2xZW5eIQisO0fC6w3A9H6aWqk7T7xmO9Nbfw3kaf19zyK4BFEhkU1o3PNZJ8p/mQGVnqJBFn/p3LJKYN9gTzuLnvxA7qyAZ4OBqvrQjZsGjD6zolTpHTCkUN88DWWT348V6jeJUJAw2i+9hzbCG2EO1K+21gyukz9Xy9cY8vO+aft5fIIJ0whGPq7sVf1467J/aFk+vCUBjDC9+Nh52hWweV9WSUgKmbw6DnvZPnPIdLJxB8eApICdpu8B0iX6T00m/MdC8oCB9fx2r54w+aPqWu8Ec5pO7QcGDxD3BJHn8mvLb1B7FfPAG4q16JgbheqU7f1RlzLJD3iJ7NatyAxFpsRIPqDo0XddoTwpcMCzQgj0+TAXaYzbtpFxpv2nkqNCohAkddRzN0yssZ6ektVKXfl1or053MN39okVZ4gMc1YaTZ4ht9S5wv8laWw5MtwPywJbILpRkHGbLmqJAkRzj/KtQ0SvebBJ69aZl+6AMLjgxL7s= zuul@controller
Dec 13 03:43:46 compute-0 systemd[1]: libpod-715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765.scope: Deactivated successfully.
Dec 13 03:43:46 compute-0 conmon[76353]: conmon 715a49afee32ea384d50 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765.scope/container/memory.events
Dec 13 03:43:46 compute-0 podman[76336]: 2025-12-13 03:43:46.744944038 +0000 UTC m=+0.548543475 container died 715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765 (image=quay.io/ceph/ceph:v20, name=funny_shirley, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:43:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-1974e1d176a3d0a656ec9c2be6afc93a3da2072bff903a9db4dbda3b607fcbd0-merged.mount: Deactivated successfully.
Dec 13 03:43:46 compute-0 podman[76336]: 2025-12-13 03:43:46.786964572 +0000 UTC m=+0.590563999 container remove 715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765 (image=quay.io/ceph/ceph:v20, name=funny_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:46 compute-0 systemd[1]: libpod-conmon-715a49afee32ea384d50afb4a3a7b8c1e99f02f6d0f9719b0b58bd3cdfb49765.scope: Deactivated successfully.
Dec 13 03:43:46 compute-0 podman[76392]: 2025-12-13 03:43:46.856660384 +0000 UTC m=+0.048183311 container create b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a (image=quay.io/ceph/ceph:v20, name=gifted_hugle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:43:46 compute-0 systemd[1]: Started libpod-conmon-b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a.scope.
Dec 13 03:43:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6983612eba75788ece1ed60b4cb83eafd21a9b04853cbea76e34568286154055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6983612eba75788ece1ed60b4cb83eafd21a9b04853cbea76e34568286154055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6983612eba75788ece1ed60b4cb83eafd21a9b04853cbea76e34568286154055/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:46 compute-0 podman[76392]: 2025-12-13 03:43:46.836442369 +0000 UTC m=+0.027965316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:46 compute-0 podman[76392]: 2025-12-13 03:43:46.934502557 +0000 UTC m=+0.126025504 container init b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a (image=quay.io/ceph/ceph:v20, name=gifted_hugle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:43:46 compute-0 podman[76392]: 2025-12-13 03:43:46.941504485 +0000 UTC m=+0.133027422 container start b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a (image=quay.io/ceph/ceph:v20, name=gifted_hugle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:46 compute-0 podman[76392]: 2025-12-13 03:43:46.946005784 +0000 UTC m=+0.137528741 container attach b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a (image=quay.io/ceph/ceph:v20, name=gifted_hugle, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:43:47 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:47 compute-0 ceph-mon[75071]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:47 compute-0 sshd-session[76434]: Accepted publickey for ceph-admin from 192.168.122.100 port 42346 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:47 compute-0 systemd-logind[796]: New session 21 of user ceph-admin.
Dec 13 03:43:47 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Dec 13 03:43:47 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Dec 13 03:43:47 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Dec 13 03:43:47 compute-0 systemd[1]: Starting User Manager for UID 42477...
Dec 13 03:43:47 compute-0 systemd[76438]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:47 compute-0 systemd[76438]: Queued start job for default target Main User Target.
Dec 13 03:43:47 compute-0 sshd-session[76451]: Accepted publickey for ceph-admin from 192.168.122.100 port 42350 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:47 compute-0 systemd[76438]: Created slice User Application Slice.
Dec 13 03:43:47 compute-0 systemd[76438]: Started Mark boot as successful after the user session has run 2 minutes.
Dec 13 03:43:47 compute-0 systemd[76438]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 03:43:47 compute-0 systemd[76438]: Reached target Paths.
Dec 13 03:43:47 compute-0 systemd[76438]: Reached target Timers.
Dec 13 03:43:47 compute-0 systemd[76438]: Starting D-Bus User Message Bus Socket...
Dec 13 03:43:47 compute-0 systemd[76438]: Starting Create User's Volatile Files and Directories...
Dec 13 03:43:47 compute-0 systemd-logind[796]: New session 23 of user ceph-admin.
Dec 13 03:43:47 compute-0 systemd[76438]: Listening on D-Bus User Message Bus Socket.
Dec 13 03:43:47 compute-0 systemd[76438]: Finished Create User's Volatile Files and Directories.
Dec 13 03:43:47 compute-0 systemd[76438]: Reached target Sockets.
Dec 13 03:43:47 compute-0 systemd[76438]: Reached target Basic System.
Dec 13 03:43:47 compute-0 systemd[76438]: Reached target Main User Target.
Dec 13 03:43:47 compute-0 systemd[76438]: Startup finished in 122ms.
Dec 13 03:43:47 compute-0 systemd[1]: Started User Manager for UID 42477.
Dec 13 03:43:47 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Dec 13 03:43:47 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Dec 13 03:43:47 compute-0 sshd-session[76434]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:47 compute-0 sshd-session[76451]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:47 compute-0 sudo[76458]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:43:47 compute-0 sudo[76458]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:47 compute-0 sudo[76458]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:48 compute-0 sshd-session[76483]: Accepted publickey for ceph-admin from 192.168.122.100 port 42352 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:48 compute-0 systemd-logind[796]: New session 24 of user ceph-admin.
Dec 13 03:43:48 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Dec 13 03:43:48 compute-0 sshd-session[76483]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:48 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:48 compute-0 sudo[76487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 13 03:43:48 compute-0 sudo[76487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:48 compute-0 sudo[76487]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:48 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:48 compute-0 sshd-session[76512]: Accepted publickey for ceph-admin from 192.168.122.100 port 42356 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:48 compute-0 systemd-logind[796]: New session 25 of user ceph-admin.
Dec 13 03:43:48 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Dec 13 03:43:48 compute-0 sshd-session[76512]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:48 compute-0 sudo[76516]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 13 03:43:48 compute-0 sudo[76516]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:48 compute-0 sudo[76516]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:48 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Dec 13 03:43:48 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Dec 13 03:43:48 compute-0 ceph-mon[75071]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:48 compute-0 sshd-session[76541]: Accepted publickey for ceph-admin from 192.168.122.100 port 42368 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:48 compute-0 systemd-logind[796]: New session 26 of user ceph-admin.
Dec 13 03:43:48 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Dec 13 03:43:48 compute-0 sshd-session[76541]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:48 compute-0 sudo[76545]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:48 compute-0 sudo[76545]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:48 compute-0 sudo[76545]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:49 compute-0 sshd-session[76570]: Accepted publickey for ceph-admin from 192.168.122.100 port 42384 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:49 compute-0 systemd-logind[796]: New session 27 of user ceph-admin.
Dec 13 03:43:49 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Dec 13 03:43:49 compute-0 sshd-session[76570]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:49 compute-0 sudo[76574]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:49 compute-0 sudo[76574]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:49 compute-0 sudo[76574]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:49 compute-0 sshd-session[76599]: Accepted publickey for ceph-admin from 192.168.122.100 port 42396 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:49 compute-0 systemd-logind[796]: New session 28 of user ceph-admin.
Dec 13 03:43:49 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Dec 13 03:43:49 compute-0 sshd-session[76599]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:49 compute-0 sudo[76603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 13 03:43:49 compute-0 sudo[76603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:49 compute-0 sudo[76603]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:49 compute-0 ceph-mon[75071]: Deploying cephadm binary to compute-0
Dec 13 03:43:49 compute-0 sshd-session[76628]: Accepted publickey for ceph-admin from 192.168.122.100 port 42398 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:49 compute-0 systemd-logind[796]: New session 29 of user ceph-admin.
Dec 13 03:43:49 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Dec 13 03:43:49 compute-0 sshd-session[76628]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:49 compute-0 sudo[76632]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:43:49 compute-0 sudo[76632]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:49 compute-0 sudo[76632]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:50 compute-0 sshd-session[76657]: Accepted publickey for ceph-admin from 192.168.122.100 port 42406 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:50 compute-0 systemd-logind[796]: New session 30 of user ceph-admin.
Dec 13 03:43:50 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Dec 13 03:43:50 compute-0 sshd-session[76657]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:50 compute-0 sudo[76661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Dec 13 03:43:50 compute-0 sudo[76661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:50 compute-0 sudo[76661]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:50 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052529 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:43:50 compute-0 sshd-session[76686]: Accepted publickey for ceph-admin from 192.168.122.100 port 42416 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:50 compute-0 systemd-logind[796]: New session 31 of user ceph-admin.
Dec 13 03:43:50 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Dec 13 03:43:50 compute-0 sshd-session[76686]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:50 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:51 compute-0 sshd-session[76713]: Accepted publickey for ceph-admin from 192.168.122.100 port 42420 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:51 compute-0 systemd-logind[796]: New session 32 of user ceph-admin.
Dec 13 03:43:51 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Dec 13 03:43:51 compute-0 sshd-session[76713]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:51 compute-0 sudo[76717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Dec 13 03:43:51 compute-0 sudo[76717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:51 compute-0 sudo[76717]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:52 compute-0 sshd-session[76742]: Accepted publickey for ceph-admin from 192.168.122.100 port 42436 ssh2: RSA SHA256:t1oFLqcRkr4wIaeTPuDf19IAV8eWhJIz+a0mil31Dh0
Dec 13 03:43:52 compute-0 systemd-logind[796]: New session 33 of user ceph-admin.
Dec 13 03:43:52 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Dec 13 03:43:52 compute-0 sshd-session[76742]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Dec 13 03:43:52 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:52 compute-0 sudo[76746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 13 03:43:52 compute-0 sudo[76746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:52 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:52 compute-0 sudo[76746]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 03:43:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:52 compute-0 ceph-mgr[75360]: [cephadm INFO root] Added host compute-0
Dec 13 03:43:52 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 03:43:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 03:43:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:52 compute-0 gifted_hugle[76408]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 03:43:52 compute-0 systemd[1]: libpod-b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a.scope: Deactivated successfully.
Dec 13 03:43:52 compute-0 podman[76392]: 2025-12-13 03:43:52.656368304 +0000 UTC m=+5.847891231 container died b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a (image=quay.io/ceph/ceph:v20, name=gifted_hugle, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:43:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-6983612eba75788ece1ed60b4cb83eafd21a9b04853cbea76e34568286154055-merged.mount: Deactivated successfully.
Dec 13 03:43:52 compute-0 sudo[76791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:43:52 compute-0 sudo[76791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:52 compute-0 sudo[76791]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:52 compute-0 podman[76392]: 2025-12-13 03:43:52.700651773 +0000 UTC m=+5.892174700 container remove b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a (image=quay.io/ceph/ceph:v20, name=gifted_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:43:52 compute-0 systemd[1]: libpod-conmon-b230217b34f6cdaf70c3d37469e8b3a18b8d098608f6f8f2e29f87d71819b61a.scope: Deactivated successfully.
Dec 13 03:43:52 compute-0 sudo[76828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Dec 13 03:43:52 compute-0 sudo[76828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:52 compute-0 podman[76833]: 2025-12-13 03:43:52.76527148 +0000 UTC m=+0.043293051 container create d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2 (image=quay.io/ceph/ceph:v20, name=confident_jemison, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:52 compute-0 systemd[1]: Started libpod-conmon-d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2.scope.
Dec 13 03:43:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b3d3663885d8777fe80043d8afef1c2649b6d13e215ee0d2bb1ffc52dab039/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:52 compute-0 podman[76833]: 2025-12-13 03:43:52.745777166 +0000 UTC m=+0.023798767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b3d3663885d8777fe80043d8afef1c2649b6d13e215ee0d2bb1ffc52dab039/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9b3d3663885d8777fe80043d8afef1c2649b6d13e215ee0d2bb1ffc52dab039/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:52 compute-0 podman[76833]: 2025-12-13 03:43:52.858702337 +0000 UTC m=+0.136723928 container init d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2 (image=quay.io/ceph/ceph:v20, name=confident_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:43:52 compute-0 podman[76833]: 2025-12-13 03:43:52.865816908 +0000 UTC m=+0.143838479 container start d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2 (image=quay.io/ceph/ceph:v20, name=confident_jemison, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:43:52 compute-0 podman[76833]: 2025-12-13 03:43:52.868792493 +0000 UTC m=+0.146814064 container attach d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2 (image=quay.io/ceph/ceph:v20, name=confident_jemison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:53 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:53 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service mon spec with placement count:5
Dec 13 03:43:53 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Dec 13 03:43:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 03:43:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:53 compute-0 confident_jemison[76869]: Scheduled mon update...
Dec 13 03:43:53 compute-0 systemd[1]: libpod-d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2.scope: Deactivated successfully.
Dec 13 03:43:53 compute-0 conmon[76869]: conmon d768b6b6dd40fa597785 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2.scope/container/memory.events
Dec 13 03:43:53 compute-0 podman[76833]: 2025-12-13 03:43:53.336480428 +0000 UTC m=+0.614501999 container died d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2 (image=quay.io/ceph/ceph:v20, name=confident_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:43:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-d9b3d3663885d8777fe80043d8afef1c2649b6d13e215ee0d2bb1ffc52dab039-merged.mount: Deactivated successfully.
Dec 13 03:43:53 compute-0 podman[76833]: 2025-12-13 03:43:53.370512215 +0000 UTC m=+0.648533786 container remove d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2 (image=quay.io/ceph/ceph:v20, name=confident_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:43:53 compute-0 systemd[1]: libpod-conmon-d768b6b6dd40fa597785d963ed23c8a49b9346d19864524904892ba96d33eed2.scope: Deactivated successfully.
Dec 13 03:43:53 compute-0 podman[76932]: 2025-12-13 03:43:53.412157749 +0000 UTC m=+0.020639687 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:54 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:54 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:56 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:56 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:57 compute-0 podman[76932]: 2025-12-13 03:43:57.542206457 +0000 UTC m=+4.150688395 container create 3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f (image=quay.io/ceph/ceph:v20, name=fervent_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:43:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054700 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:43:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:57 compute-0 ceph-mon[75071]: Added host compute-0
Dec 13 03:43:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:43:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:57 compute-0 podman[76888]: 2025-12-13 03:43:57.600575755 +0000 UTC m=+4.621177448 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:57 compute-0 systemd[1]: Started libpod-conmon-3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f.scope.
Dec 13 03:43:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c918b32fc9c8b27cafcd1a903954d53b7797123426aa76966f1327ac4e694b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c918b32fc9c8b27cafcd1a903954d53b7797123426aa76966f1327ac4e694b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c918b32fc9c8b27cafcd1a903954d53b7797123426aa76966f1327ac4e694b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:58 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:43:58 compute-0 podman[76932]: 2025-12-13 03:43:58.296509269 +0000 UTC m=+4.904991237 container init 3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f (image=quay.io/ceph/ceph:v20, name=fervent_cartwright, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:43:58 compute-0 podman[76932]: 2025-12-13 03:43:58.303487997 +0000 UTC m=+4.911969935 container start 3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f (image=quay.io/ceph/ceph:v20, name=fervent_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:58 compute-0 ceph-mgr[75360]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Dec 13 03:43:58 compute-0 podman[76932]: 2025-12-13 03:43:58.615144257 +0000 UTC m=+5.223626205 container attach 3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f (image=quay.io/ceph/ceph:v20, name=fervent_cartwright, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:43:58 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:58 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service mgr spec with placement count:2
Dec 13 03:43:58 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Dec 13 03:43:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 03:43:58 compute-0 ceph-mon[75071]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:58 compute-0 ceph-mon[75071]: Saving service mon spec with placement count:5
Dec 13 03:43:58 compute-0 podman[76966]: 2025-12-13 03:43:58.789602246 +0000 UTC m=+1.121636286 container create 93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068 (image=quay.io/ceph/ceph:v20, name=nostalgic_chebyshev, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:43:58 compute-0 podman[76966]: 2025-12-13 03:43:58.703236541 +0000 UTC m=+1.035270601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:58 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:58 compute-0 fervent_cartwright[76956]: Scheduled mgr update...
Dec 13 03:43:58 compute-0 podman[76932]: 2025-12-13 03:43:58.872698618 +0000 UTC m=+5.481180556 container died 3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f (image=quay.io/ceph/ceph:v20, name=fervent_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle)
Dec 13 03:43:58 compute-0 systemd[1]: Started libpod-conmon-93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068.scope.
Dec 13 03:43:58 compute-0 systemd[1]: libpod-3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f.scope: Deactivated successfully.
Dec 13 03:43:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6c918b32fc9c8b27cafcd1a903954d53b7797123426aa76966f1327ac4e694b-merged.mount: Deactivated successfully.
Dec 13 03:43:58 compute-0 podman[76966]: 2025-12-13 03:43:58.923168413 +0000 UTC m=+1.255202453 container init 93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068 (image=quay.io/ceph/ceph:v20, name=nostalgic_chebyshev, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:43:58 compute-0 podman[76932]: 2025-12-13 03:43:58.927718752 +0000 UTC m=+5.536200690 container remove 3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f (image=quay.io/ceph/ceph:v20, name=fervent_cartwright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:43:58 compute-0 podman[76966]: 2025-12-13 03:43:58.929914815 +0000 UTC m=+1.261948855 container start 93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068 (image=quay.io/ceph/ceph:v20, name=nostalgic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:43:58 compute-0 podman[76966]: 2025-12-13 03:43:58.934018501 +0000 UTC m=+1.266052531 container attach 93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068 (image=quay.io/ceph/ceph:v20, name=nostalgic_chebyshev, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:43:58 compute-0 systemd[1]: libpod-conmon-3128b1182d21dbfa801deb209c1b40a5e7c52eb4b327da2f5b8646ddacae813f.scope: Deactivated successfully.
Dec 13 03:43:58 compute-0 podman[77022]: 2025-12-13 03:43:58.991106195 +0000 UTC m=+0.044053264 container create c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d (image=quay.io/ceph/ceph:v20, name=laughing_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:43:59 compute-0 nostalgic_chebyshev[77006]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Dec 13 03:43:59 compute-0 podman[76966]: 2025-12-13 03:43:59.024534925 +0000 UTC m=+1.356568965 container died 93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068 (image=quay.io/ceph/ceph:v20, name=nostalgic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:43:59 compute-0 systemd[1]: Started libpod-conmon-c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d.scope.
Dec 13 03:43:59 compute-0 systemd[1]: libpod-93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068.scope: Deactivated successfully.
Dec 13 03:43:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bbaed902de498d3e483180f43bb148a60927acd68d2af2eb826ea60b9b39c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bbaed902de498d3e483180f43bb148a60927acd68d2af2eb826ea60b9b39c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49bbaed902de498d3e483180f43bb148a60927acd68d2af2eb826ea60b9b39c4/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:43:59 compute-0 podman[77022]: 2025-12-13 03:43:58.970002854 +0000 UTC m=+0.022949943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:43:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-280ae18e6f4ebe29028cd2b9664f09ba6f33ca3814464a8f04f851f8554263f6-merged.mount: Deactivated successfully.
Dec 13 03:43:59 compute-0 podman[76966]: 2025-12-13 03:43:59.637071307 +0000 UTC m=+1.969105347 container remove 93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068 (image=quay.io/ceph/ceph:v20, name=nostalgic_chebyshev, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:43:59 compute-0 sudo[76828]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Dec 13 03:43:59 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:59 compute-0 systemd[1]: libpod-conmon-93bdd4a5883ab1b4157c4ef361aa435631fb47cf400e21e06b45aac201128068.scope: Deactivated successfully.
Dec 13 03:43:59 compute-0 podman[77022]: 2025-12-13 03:43:59.697694981 +0000 UTC m=+0.750642050 container init c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d (image=quay.io/ceph/ceph:v20, name=laughing_faraday, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:43:59 compute-0 podman[77022]: 2025-12-13 03:43:59.704133304 +0000 UTC m=+0.757080373 container start c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d (image=quay.io/ceph/ceph:v20, name=laughing_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:43:59 compute-0 podman[77022]: 2025-12-13 03:43:59.707729206 +0000 UTC m=+0.760676275 container attach c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d (image=quay.io/ceph/ceph:v20, name=laughing_faraday, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:43:59 compute-0 sudo[77055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:43:59 compute-0 sudo[77055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:59 compute-0 sudo[77055]: pam_unix(sudo:session): session closed for user root
Dec 13 03:43:59 compute-0 sudo[77080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 03:43:59 compute-0 sudo[77080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:43:59 compute-0 ceph-mon[75071]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:43:59 compute-0 ceph-mon[75071]: Saving service mgr spec with placement count:2
Dec 13 03:43:59 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:43:59 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:00 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:00 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service crash spec with placement *
Dec 13 03:44:00 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Dec 13 03:44:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 03:44:00 compute-0 sudo[77080]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:00 compute-0 laughing_faraday[77044]: Scheduled crash update...
Dec 13 03:44:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:00 compute-0 podman[77022]: 2025-12-13 03:44:00.149986388 +0000 UTC m=+1.202933457 container died c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d (image=quay.io/ceph/ceph:v20, name=laughing_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:44:00 compute-0 systemd[1]: libpod-c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d.scope: Deactivated successfully.
Dec 13 03:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-49bbaed902de498d3e483180f43bb148a60927acd68d2af2eb826ea60b9b39c4-merged.mount: Deactivated successfully.
Dec 13 03:44:00 compute-0 podman[77022]: 2025-12-13 03:44:00.186996011 +0000 UTC m=+1.239943080 container remove c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d (image=quay.io/ceph/ceph:v20, name=laughing_faraday, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:00 compute-0 sudo[77146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:00 compute-0 systemd[1]: libpod-conmon-c9884ba50ed03f5df0b96859440213d73fda981b77d8c61547350b001b7ebb1d.scope: Deactivated successfully.
Dec 13 03:44:00 compute-0 sudo[77146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:00 compute-0 sudo[77146]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:00 compute-0 sudo[77184]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:44:00 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:00 compute-0 sudo[77184]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:00 compute-0 podman[77181]: 2025-12-13 03:44:00.266274914 +0000 UTC m=+0.055251892 container create 2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709 (image=quay.io/ceph/ceph:v20, name=elegant_pascal, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:00 compute-0 systemd[1]: Started libpod-conmon-2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709.scope.
Dec 13 03:44:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04cfda457c3ee6f758ffee52011d242535fc1d0772aec7dd940f72f984ea43/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04cfda457c3ee6f758ffee52011d242535fc1d0772aec7dd940f72f984ea43/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c04cfda457c3ee6f758ffee52011d242535fc1d0772aec7dd940f72f984ea43/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:00 compute-0 podman[77181]: 2025-12-13 03:44:00.247071078 +0000 UTC m=+0.036048076 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:00 compute-0 podman[77181]: 2025-12-13 03:44:00.343383756 +0000 UTC m=+0.132360764 container init 2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709 (image=quay.io/ceph/ceph:v20, name=elegant_pascal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:44:00 compute-0 podman[77181]: 2025-12-13 03:44:00.349801259 +0000 UTC m=+0.138778237 container start 2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709 (image=quay.io/ceph/ceph:v20, name=elegant_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:00 compute-0 podman[77181]: 2025-12-13 03:44:00.356072037 +0000 UTC m=+0.145049095 container attach 2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709 (image=quay.io/ceph/ceph:v20, name=elegant_pascal, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:44:00 compute-0 ceph-mgr[75360]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Dec 13 03:44:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 03:44:00 compute-0 podman[77287]: 2025-12-13 03:44:00.65235715 +0000 UTC m=+0.055142299 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Dec 13 03:44:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2370690712' entity='client.admin' 
Dec 13 03:44:00 compute-0 systemd[1]: libpod-2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709.scope: Deactivated successfully.
Dec 13 03:44:00 compute-0 podman[77181]: 2025-12-13 03:44:00.754910794 +0000 UTC m=+0.543887772 container died 2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709 (image=quay.io/ceph/ceph:v20, name=elegant_pascal, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:00 compute-0 podman[77287]: 2025-12-13 03:44:00.770448517 +0000 UTC m=+0.173233656 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c04cfda457c3ee6f758ffee52011d242535fc1d0772aec7dd940f72f984ea43-merged.mount: Deactivated successfully.
Dec 13 03:44:00 compute-0 podman[77181]: 2025-12-13 03:44:00.823193876 +0000 UTC m=+0.612170854 container remove 2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709 (image=quay.io/ceph/ceph:v20, name=elegant_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:00 compute-0 systemd[1]: libpod-conmon-2a6bc25ff88b91a1b6fea4eee0b1d57247602380ca3b06c32db6c601534c7709.scope: Deactivated successfully.
Dec 13 03:44:00 compute-0 podman[77346]: 2025-12-13 03:44:00.882617375 +0000 UTC m=+0.039561395 container create 9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c (image=quay.io/ceph/ceph:v20, name=compassionate_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:00 compute-0 systemd[1]: Started libpod-conmon-9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c.scope.
Dec 13 03:44:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7570b2d32303ddc5fa840a4825aacfda51959dfada6f045844efe9fb80c7d217/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7570b2d32303ddc5fa840a4825aacfda51959dfada6f045844efe9fb80c7d217/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7570b2d32303ddc5fa840a4825aacfda51959dfada6f045844efe9fb80c7d217/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:00 compute-0 podman[77346]: 2025-12-13 03:44:00.864419447 +0000 UTC m=+0.021363487 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:00 compute-0 podman[77346]: 2025-12-13 03:44:00.963077833 +0000 UTC m=+0.120021883 container init 9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c (image=quay.io/ceph/ceph:v20, name=compassionate_northcutt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:00 compute-0 podman[77346]: 2025-12-13 03:44:00.968310132 +0000 UTC m=+0.125254142 container start 9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c (image=quay.io/ceph/ceph:v20, name=compassionate_northcutt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:00 compute-0 podman[77346]: 2025-12-13 03:44:00.976490713 +0000 UTC m=+0.133434733 container attach 9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c (image=quay.io/ceph/ceph:v20, name=compassionate_northcutt, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:44:01 compute-0 sudo[77184]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:01 compute-0 sudo[77399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:01 compute-0 sudo[77399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:01 compute-0 sudo[77399]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:01 compute-0 ceph-mon[75071]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:01 compute-0 ceph-mon[75071]: Saving service crash spec with placement *
Dec 13 03:44:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:01 compute-0 ceph-mon[75071]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Dec 13 03:44:01 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2370690712' entity='client.admin' 
Dec 13 03:44:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:01 compute-0 sudo[77443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:44:01 compute-0 sudo[77443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:01 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77479 (sysctl)
Dec 13 03:44:01 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec 13 03:44:01 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Dec 13 03:44:01 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec 13 03:44:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:01 compute-0 systemd[1]: libpod-9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c.scope: Deactivated successfully.
Dec 13 03:44:01 compute-0 podman[77346]: 2025-12-13 03:44:01.688277459 +0000 UTC m=+0.845221479 container died 9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c (image=quay.io/ceph/ceph:v20, name=compassionate_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-7570b2d32303ddc5fa840a4825aacfda51959dfada6f045844efe9fb80c7d217-merged.mount: Deactivated successfully.
Dec 13 03:44:01 compute-0 podman[77346]: 2025-12-13 03:44:01.737863907 +0000 UTC m=+0.894807927 container remove 9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c (image=quay.io/ceph/ceph:v20, name=compassionate_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Dec 13 03:44:01 compute-0 systemd[1]: libpod-conmon-9a32a7628b45bb7334063737a723fc12f1af426ebf8b536573121d684a0f134c.scope: Deactivated successfully.
Dec 13 03:44:01 compute-0 sudo[77443]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:01 compute-0 podman[77514]: 2025-12-13 03:44:01.798644476 +0000 UTC m=+0.038645020 container create b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479 (image=quay.io/ceph/ceph:v20, name=beautiful_swartz, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:01 compute-0 sudo[77521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:01 compute-0 sudo[77521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:01 compute-0 sudo[77521]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:01 compute-0 systemd[1]: Started libpod-conmon-b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479.scope.
Dec 13 03:44:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46902f98d4c2587479a603bb4723ddf58891a428805624ef9908638acaeb9994/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:01 compute-0 podman[77514]: 2025-12-13 03:44:01.782105225 +0000 UTC m=+0.022105789 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46902f98d4c2587479a603bb4723ddf58891a428805624ef9908638acaeb9994/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46902f98d4c2587479a603bb4723ddf58891a428805624ef9908638acaeb9994/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:01 compute-0 sudo[77555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 13 03:44:01 compute-0 podman[77514]: 2025-12-13 03:44:01.892464152 +0000 UTC m=+0.132464726 container init b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479 (image=quay.io/ceph/ceph:v20, name=beautiful_swartz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:44:01 compute-0 sudo[77555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:01 compute-0 podman[77514]: 2025-12-13 03:44:01.900978214 +0000 UTC m=+0.140978748 container start b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479 (image=quay.io/ceph/ceph:v20, name=beautiful_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:44:01 compute-0 podman[77514]: 2025-12-13 03:44:01.905116872 +0000 UTC m=+0.145117416 container attach b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479 (image=quay.io/ceph/ceph:v20, name=beautiful_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:02 compute-0 ceph-mon[75071]: pgmap v2: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:02 compute-0 sudo[77555]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:02 compute-0 sudo[77623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:02 compute-0 sudo[77623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:02 compute-0 sudo[77623]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:02 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:02 compute-0 sudo[77648]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- inventory --format=json-pretty --filter-for-batch
Dec 13 03:44:02 compute-0 sudo[77648]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:02 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 03:44:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:02 compute-0 ceph-mgr[75360]: [cephadm INFO root] Added label _admin to host compute-0
Dec 13 03:44:02 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Dec 13 03:44:02 compute-0 beautiful_swartz[77561]: Added label _admin to host compute-0
Dec 13 03:44:02 compute-0 systemd[1]: libpod-b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479.scope: Deactivated successfully.
Dec 13 03:44:02 compute-0 podman[77514]: 2025-12-13 03:44:02.361335902 +0000 UTC m=+0.601336446 container died b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479 (image=quay.io/ceph/ceph:v20, name=beautiful_swartz, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:44:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-46902f98d4c2587479a603bb4723ddf58891a428805624ef9908638acaeb9994-merged.mount: Deactivated successfully.
Dec 13 03:44:02 compute-0 podman[77514]: 2025-12-13 03:44:02.399671271 +0000 UTC m=+0.639671815 container remove b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479 (image=quay.io/ceph/ceph:v20, name=beautiful_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:02 compute-0 systemd[1]: libpod-conmon-b605a241fe03ffe32e4771fb7de7e6f4d420582462fc58c12053aba98f62d479.scope: Deactivated successfully.
Dec 13 03:44:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:02 compute-0 podman[77688]: 2025-12-13 03:44:02.502414362 +0000 UTC m=+0.062871588 container create 6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d (image=quay.io/ceph/ceph:v20, name=hopeful_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:44:02 compute-0 systemd[1]: Started libpod-conmon-6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d.scope.
Dec 13 03:44:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:02 compute-0 podman[77688]: 2025-12-13 03:44:02.470709071 +0000 UTC m=+0.031166307 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54e750e882e343502d93ee96101bdaa35550e4dcb457b7b5d690daa925b22bb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54e750e882e343502d93ee96101bdaa35550e4dcb457b7b5d690daa925b22bb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54e750e882e343502d93ee96101bdaa35550e4dcb457b7b5d690daa925b22bb7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:02 compute-0 podman[77688]: 2025-12-13 03:44:02.720238354 +0000 UTC m=+0.280695580 container init 6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d (image=quay.io/ceph/ceph:v20, name=hopeful_johnson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:44:02 compute-0 podman[77688]: 2025-12-13 03:44:02.724953949 +0000 UTC m=+0.285411155 container start 6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d (image=quay.io/ceph/ceph:v20, name=hopeful_johnson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:44:02 compute-0 podman[77688]: 2025-12-13 03:44:02.963415327 +0000 UTC m=+0.523872573 container attach 6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d (image=quay.io/ceph/ceph:v20, name=hopeful_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:44:03 compute-0 podman[77718]: 2025-12-13 03:44:03.011056682 +0000 UTC m=+0.459119863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:04 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Dec 13 03:44:04 compute-0 ceph-mon[75071]: from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:04 compute-0 podman[77718]: 2025-12-13 03:44:04.778417943 +0000 UTC m=+2.226481094 container create 0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:05 compute-0 systemd[1]: Started libpod-conmon-0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae.scope.
Dec 13 03:44:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:05 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1191602661' entity='client.admin' 
Dec 13 03:44:05 compute-0 hopeful_johnson[77716]: set mgr/dashboard/cluster/status
Dec 13 03:44:05 compute-0 systemd[1]: libpod-6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d.scope: Deactivated successfully.
Dec 13 03:44:06 compute-0 podman[77718]: 2025-12-13 03:44:06.013497092 +0000 UTC m=+3.461560273 container init 0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:44:06 compute-0 podman[77718]: 2025-12-13 03:44:06.019218786 +0000 UTC m=+3.467281937 container start 0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:06 compute-0 magical_zhukovsky[77757]: 167 167
Dec 13 03:44:06 compute-0 systemd[1]: libpod-0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae.scope: Deactivated successfully.
Dec 13 03:44:06 compute-0 ceph-mon[75071]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:06 compute-0 ceph-mon[75071]: Added label _admin to host compute-0
Dec 13 03:44:06 compute-0 ceph-mon[75071]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:06 compute-0 ceph-mon[75071]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:06 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:06 compute-0 podman[77718]: 2025-12-13 03:44:06.277858048 +0000 UTC m=+3.725921239 container attach 0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:06 compute-0 podman[77718]: 2025-12-13 03:44:06.278876397 +0000 UTC m=+3.726939558 container died 0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:44:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-8b68f8c6ef33966efa28a7cb59b4e83c012111bfb77e121bc8f2275ff3a6c49f-merged.mount: Deactivated successfully.
Dec 13 03:44:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1191602661' entity='client.admin' 
Dec 13 03:44:07 compute-0 podman[77718]: 2025-12-13 03:44:07.344296075 +0000 UTC m=+4.792359226 container remove 0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_zhukovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:44:07 compute-0 systemd[1]: libpod-conmon-0c9931cabc0ea117535daad560b2d25202195a2bc98bd7adc691b0ca3601d7ae.scope: Deactivated successfully.
Dec 13 03:44:07 compute-0 podman[77688]: 2025-12-13 03:44:07.402768556 +0000 UTC m=+4.963225772 container died 6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d (image=quay.io/ceph/ceph:v20, name=hopeful_johnson, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-54e750e882e343502d93ee96101bdaa35550e4dcb457b7b5d690daa925b22bb7-merged.mount: Deactivated successfully.
Dec 13 03:44:07 compute-0 podman[77761]: 2025-12-13 03:44:07.534380257 +0000 UTC m=+1.546574016 container remove 6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d (image=quay.io/ceph/ceph:v20, name=hopeful_johnson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:44:07 compute-0 systemd[1]: libpod-conmon-6464f6c6da3086c2bbf39e2964bccda2b515b8bb55391166ddde0eb937d7a76d.scope: Deactivated successfully.
Dec 13 03:44:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:07 compute-0 systemd[1]: Reloading.
Dec 13 03:44:07 compute-0 systemd-rc-local-generator[77821]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:07 compute-0 systemd-sysv-generator[77825]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:07 compute-0 sudo[74005]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:08 compute-0 podman[77839]: 2025-12-13 03:44:07.985536283 +0000 UTC m=+0.025809045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:08 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:08 compute-0 sudo[77876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obrlxujezldbjbykrsumetbzvpnjflid ; /usr/bin/python3'
Dec 13 03:44:08 compute-0 sudo[77876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:08 compute-0 python3[77878]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:08 compute-0 podman[77839]: 2025-12-13 03:44:08.58240434 +0000 UTC m=+0.622677072 container create 5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:08 compute-0 ceph-mon[75071]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:08 compute-0 systemd[1]: Started libpod-conmon-5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59.scope.
Dec 13 03:44:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2533764fa5a6e8c09c7bdbde463529d85df3b4111bab0d3a3dfc75eb30e326e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2533764fa5a6e8c09c7bdbde463529d85df3b4111bab0d3a3dfc75eb30e326e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2533764fa5a6e8c09c7bdbde463529d85df3b4111bab0d3a3dfc75eb30e326e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2533764fa5a6e8c09c7bdbde463529d85df3b4111bab0d3a3dfc75eb30e326e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:08 compute-0 podman[77839]: 2025-12-13 03:44:08.816188477 +0000 UTC m=+0.856461209 container init 5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:44:08 compute-0 podman[77839]: 2025-12-13 03:44:08.823653649 +0000 UTC m=+0.863926381 container start 5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:08 compute-0 podman[77879]: 2025-12-13 03:44:08.823757552 +0000 UTC m=+0.372415158 container create 9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f (image=quay.io/ceph/ceph:v20, name=goofy_zhukovsky, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:44:08 compute-0 podman[77839]: 2025-12-13 03:44:08.826758377 +0000 UTC m=+0.867031129 container attach 5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:08 compute-0 systemd[1]: Started libpod-conmon-9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f.scope.
Dec 13 03:44:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512cf59ab378c61c19db785ae1fd2bc716df100b3694e92fca786bd901e36105/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512cf59ab378c61c19db785ae1fd2bc716df100b3694e92fca786bd901e36105/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:08 compute-0 podman[77879]: 2025-12-13 03:44:08.802434846 +0000 UTC m=+0.351092462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:09 compute-0 podman[77879]: 2025-12-13 03:44:09.254821226 +0000 UTC m=+0.803478862 container init 9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f (image=quay.io/ceph/ceph:v20, name=goofy_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:44:09 compute-0 podman[77879]: 2025-12-13 03:44:09.262895165 +0000 UTC m=+0.811552781 container start 9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f (image=quay.io/ceph/ceph:v20, name=goofy_zhukovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:44:09 compute-0 podman[77879]: 2025-12-13 03:44:09.363730481 +0000 UTC m=+0.912388217 container attach 9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f (image=quay.io/ceph/ceph:v20, name=goofy_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]: [
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:     {
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "available": false,
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "being_replaced": false,
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "ceph_device_lvm": false,
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "lsm_data": {},
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "lvs": [],
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "path": "/dev/sr0",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "rejected_reasons": [
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "Insufficient space (<5GB)",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "Has a FileSystem"
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         ],
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         "sys_api": {
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "actuators": null,
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "device_nodes": [
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:                 "sr0"
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             ],
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "devname": "sr0",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "human_readable_size": "482.00 KB",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "id_bus": "ata",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "model": "QEMU DVD-ROM",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "nr_requests": "2",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "parent": "/dev/sr0",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "partitions": {},
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "path": "/dev/sr0",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "removable": "1",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "rev": "2.5+",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "ro": "0",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "rotational": "1",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "sas_address": "",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "sas_device_handle": "",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "scheduler_mode": "mq-deadline",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "sectors": 0,
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "sectorsize": "2048",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "size": 493568.0,
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "support_discard": "2048",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "type": "disk",
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:             "vendor": "QEMU"
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:         }
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]:     }
Dec 13 03:44:09 compute-0 vigilant_babbage[77893]: ]
Dec 13 03:44:09 compute-0 systemd[1]: libpod-5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59.scope: Deactivated successfully.
Dec 13 03:44:09 compute-0 podman[77839]: 2025-12-13 03:44:09.462936962 +0000 UTC m=+1.503209704 container died 5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Dec 13 03:44:10 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2486448376' entity='client.admin' 
Dec 13 03:44:10 compute-0 systemd[1]: libpod-9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f.scope: Deactivated successfully.
Dec 13 03:44:10 compute-0 ceph-mon[75071]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:10 compute-0 podman[77879]: 2025-12-13 03:44:10.789673578 +0000 UTC m=+2.338331214 container died 9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f (image=quay.io/ceph/ceph:v20, name=goofy_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-2533764fa5a6e8c09c7bdbde463529d85df3b4111bab0d3a3dfc75eb30e326e9-merged.mount: Deactivated successfully.
Dec 13 03:44:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-512cf59ab378c61c19db785ae1fd2bc716df100b3694e92fca786bd901e36105-merged.mount: Deactivated successfully.
Dec 13 03:44:10 compute-0 podman[77839]: 2025-12-13 03:44:10.828295956 +0000 UTC m=+2.868568688 container remove 5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_babbage, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:10 compute-0 podman[77879]: 2025-12-13 03:44:10.841914813 +0000 UTC m=+2.390572429 container remove 9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f (image=quay.io/ceph/ceph:v20, name=goofy_zhukovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:44:10 compute-0 systemd[1]: libpod-conmon-5763397dd46bc96d34f6d86177fd788100cc83dc9424ef68ec96ea85a9991b59.scope: Deactivated successfully.
Dec 13 03:44:10 compute-0 sudo[77876]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:10 compute-0 systemd[1]: libpod-conmon-9dac418dabc5111c8fe4de691e47341adcd8359000bdabc7c878a1785a34a90f.scope: Deactivated successfully.
Dec 13 03:44:10 compute-0 sudo[77648]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:44:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:10 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Dec 13 03:44:10 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Dec 13 03:44:10 compute-0 sudo[78646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 13 03:44:10 compute-0 sudo[78646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:10 compute-0 sudo[78646]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph
Dec 13 03:44:11 compute-0 sudo[78671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78671]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[78696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78696]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:11 compute-0 sudo[78721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78721]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[78746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78746]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[78817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78817]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[78871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78871]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78919]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Dec 13 03:44:11 compute-0 sudo[78919]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78919]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf
Dec 13 03:44:11 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf
Dec 13 03:44:11 compute-0 sudo[78944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config
Dec 13 03:44:11 compute-0 sudo[78944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78944]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config
Dec 13 03:44:11 compute-0 sudo[78969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78969]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[78994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[78994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[78994]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[79042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:11 compute-0 sudo[79042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[79042]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[79091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[79091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[79091]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[79140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydzthuzgktghvikrgvisdboikcewzcds ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765597451.2277699-36369-152265567055939/async_wrapper.py j884660036718 30 /home/zuul/.ansible/tmp/ansible-tmp-1765597451.2277699-36369-152265567055939/AnsiballZ_command.py _'
Dec 13 03:44:11 compute-0 sudo[79140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:11 compute-0 sudo[79167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[79167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[79167]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 ceph-mon[75071]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2486448376' entity='client.admin' 
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:11 compute-0 ceph-mon[75071]: Updating compute-0:/etc/ceph/ceph.conf
Dec 13 03:44:11 compute-0 sudo[79192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf.new
Dec 13 03:44:11 compute-0 sudo[79192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[79192]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 ansible-async_wrapper.py[79144]: Invoked with j884660036718 30 /home/zuul/.ansible/tmp/ansible-tmp-1765597451.2277699-36369-152265567055939/AnsiballZ_command.py _
Dec 13 03:44:11 compute-0 ansible-async_wrapper.py[79236]: Starting module and watcher
Dec 13 03:44:11 compute-0 ansible-async_wrapper.py[79236]: Start watching 79240 (30)
Dec 13 03:44:11 compute-0 ansible-async_wrapper.py[79240]: Start module (79240)
Dec 13 03:44:11 compute-0 ansible-async_wrapper.py[79144]: Return async_wrapper task started.
Dec 13 03:44:11 compute-0 sudo[79217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf.new /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf
Dec 13 03:44:11 compute-0 sudo[79140]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 sudo[79217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[79217]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:11 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 03:44:11 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 03:44:11 compute-0 sudo[79247]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Dec 13 03:44:11 compute-0 sudo[79247]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:11 compute-0 sudo[79247]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 python3[79242]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:12 compute-0 sudo[79272]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph
Dec 13 03:44:12 compute-0 sudo[79272]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79272]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 podman[79295]: 2025-12-13 03:44:12.060673749 +0000 UTC m=+0.039195035 container create 06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b (image=quay.io/ceph/ceph:v20, name=awesome_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:12 compute-0 systemd[1]: Started libpod-conmon-06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b.scope.
Dec 13 03:44:12 compute-0 sudo[79306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79306]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d4b79052df19a04605f8fb17bf815f4b884def447ec09a8dcb7816d92a5e17/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9d4b79052df19a04605f8fb17bf815f4b884def447ec09a8dcb7816d92a5e17/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:12 compute-0 podman[79295]: 2025-12-13 03:44:12.044775137 +0000 UTC m=+0.023296453 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:12 compute-0 podman[79295]: 2025-12-13 03:44:12.150920444 +0000 UTC m=+0.129441770 container init 06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b (image=quay.io/ceph/ceph:v20, name=awesome_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:44:12 compute-0 podman[79295]: 2025-12-13 03:44:12.15812365 +0000 UTC m=+0.136644956 container start 06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b (image=quay.io/ceph/ceph:v20, name=awesome_davinci, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:44:12 compute-0 podman[79295]: 2025-12-13 03:44:12.161541987 +0000 UTC m=+0.140063303 container attach 06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b (image=quay.io/ceph/ceph:v20, name=awesome_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:44:12 compute-0 sudo[79341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:12 compute-0 sudo[79341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79341]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 sudo[79367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79367]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:12 compute-0 sudo[79434]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79434]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79434]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 sudo[79459]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79459]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79459]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:12 compute-0 sudo[79484]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Dec 13 03:44:12 compute-0 sudo[79484]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79484]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring
Dec 13 03:44:12 compute-0 sudo[79509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config
Dec 13 03:44:12 compute-0 sudo[79509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79509]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:12 compute-0 sudo[79534]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config
Dec 13 03:44:12 compute-0 sudo[79534]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79534]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 sudo[79559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79559]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:44:12 compute-0 awesome_davinci[79337]: 
Dec 13 03:44:12 compute-0 awesome_davinci[79337]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 03:44:12 compute-0 systemd[1]: libpod-06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b.scope: Deactivated successfully.
Dec 13 03:44:12 compute-0 podman[79295]: 2025-12-13 03:44:12.648254703 +0000 UTC m=+0.626776259 container died 06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b (image=quay.io/ceph/ceph:v20, name=awesome_davinci, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:44:12 compute-0 sudo[79584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:12 compute-0 sudo[79584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79584]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9d4b79052df19a04605f8fb17bf815f4b884def447ec09a8dcb7816d92a5e17-merged.mount: Deactivated successfully.
Dec 13 03:44:12 compute-0 podman[79295]: 2025-12-13 03:44:12.686616904 +0000 UTC m=+0.665138200 container remove 06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b (image=quay.io/ceph/ceph:v20, name=awesome_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:44:12 compute-0 systemd[1]: libpod-conmon-06c84c6122923ded7fa1d5528d640c290c860e8ded83d46dde50e9dbe7d99b2b.scope: Deactivated successfully.
Dec 13 03:44:12 compute-0 sudo[79618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 ansible-async_wrapper.py[79240]: Module complete (79240)
Dec 13 03:44:12 compute-0 sudo[79618]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 ceph-mon[75071]: Updating compute-0:/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.conf
Dec 13 03:44:12 compute-0 ceph-mon[75071]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Dec 13 03:44:12 compute-0 sudo[79670]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79670]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79670]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 sudo[79695]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring.new
Dec 13 03:44:12 compute-0 sudo[79695]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79695]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 sudo[79720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-437a9f04-06b7-56e3-8a4b-f52a1199dd32/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring.new /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring
Dec 13 03:44:12 compute-0 sudo[79720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:12 compute-0 sudo[79720]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:13 compute-0 sudo[79791]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdxpnwtdtaemnwrlqzaqctutgrckymro ; /usr/bin/python3'
Dec 13 03:44:13 compute-0 sudo[79791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:13 compute-0 python3[79793]: ansible-ansible.legacy.async_status Invoked with jid=j884660036718.79144 mode=status _async_dir=/root/.ansible_async
Dec 13 03:44:13 compute-0 sudo[79791]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:13 compute-0 sudo[79840]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szydpffjxhuzespqiqxvpbmbvkaqevkw ; /usr/bin/python3'
Dec 13 03:44:13 compute-0 sudo[79840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:44:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:13 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 983ca1ce-7949-455a-9139-b0429e0ff2f5 (Updating crash deployment (+1 -> 1))
Dec 13 03:44:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Dec 13 03:44:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 03:44:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 03:44:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:13 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Dec 13 03:44:13 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Dec 13 03:44:13 compute-0 sudo[79843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:13 compute-0 sudo[79843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:13 compute-0 python3[79842]: ansible-ansible.legacy.async_status Invoked with jid=j884660036718.79144 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 03:44:13 compute-0 sudo[79843]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:13 compute-0 sudo[79840]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:13 compute-0 sudo[79868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:13 compute-0 sudo[79868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:13 compute-0 ceph-mon[75071]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:13 compute-0 ceph-mon[75071]: Updating compute-0:/var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/config/ceph.client.admin.keyring
Dec 13 03:44:13 compute-0 ceph-mon[75071]: from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:44:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Dec 13 03:44:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Dec 13 03:44:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:13 compute-0 sudo[79933]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewqjsbrurnrvbrqdytinhqaiuhfkjosy ; /usr/bin/python3'
Dec 13 03:44:13 compute-0 sudo[79933]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:14 compute-0 podman[79959]: 2025-12-13 03:44:14.097839411 +0000 UTC m=+0.042464128 container create c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_robinson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:14 compute-0 systemd[1]: Started libpod-conmon-c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0.scope.
Dec 13 03:44:14 compute-0 python3[79941]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec 13 03:44:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:14 compute-0 sudo[79933]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:14 compute-0 podman[79959]: 2025-12-13 03:44:14.175381365 +0000 UTC m=+0.120006072 container init c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_robinson, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:14 compute-0 podman[79959]: 2025-12-13 03:44:14.080163219 +0000 UTC m=+0.024787956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:14 compute-0 podman[79959]: 2025-12-13 03:44:14.182990332 +0000 UTC m=+0.127615049 container start c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_robinson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:44:14 compute-0 podman[79959]: 2025-12-13 03:44:14.185497243 +0000 UTC m=+0.130121980 container attach c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:44:14 compute-0 focused_robinson[79976]: 167 167
Dec 13 03:44:14 compute-0 systemd[1]: libpod-c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0.scope: Deactivated successfully.
Dec 13 03:44:14 compute-0 podman[79959]: 2025-12-13 03:44:14.187729026 +0000 UTC m=+0.132353743 container died c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_robinson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:44:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-a3e5a1bc1d5ea5c3145c610e494c5ff13f77dac6fd184232105cd7f1d63238e0-merged.mount: Deactivated successfully.
Dec 13 03:44:14 compute-0 podman[79959]: 2025-12-13 03:44:14.223862613 +0000 UTC m=+0.168487330 container remove c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_robinson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:14 compute-0 systemd[1]: libpod-conmon-c7992437498cbc830f54ffbfd8920d736bb47750f15b033c0ab3ccea0773a0e0.scope: Deactivated successfully.
Dec 13 03:44:14 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:14 compute-0 systemd[1]: Reloading.
Dec 13 03:44:14 compute-0 systemd-rc-local-generator[80022]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:14 compute-0 systemd-sysv-generator[80026]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:14 compute-0 sudo[80054]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phkminwaemqpgnsbhlsdrdvpcvekhjpl ; /usr/bin/python3'
Dec 13 03:44:14 compute-0 sudo[80054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:14 compute-0 systemd[1]: Reloading.
Dec 13 03:44:14 compute-0 systemd-sysv-generator[80093]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:14 compute-0 systemd-rc-local-generator[80089]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:14 compute-0 python3[80058]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:14 compute-0 podman[80097]: 2025-12-13 03:44:14.723682152 +0000 UTC m=+0.044866186 container create 9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1 (image=quay.io/ceph/ceph:v20, name=hardcore_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:14 compute-0 podman[80097]: 2025-12-13 03:44:14.700752331 +0000 UTC m=+0.021936365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:14 compute-0 ceph-mon[75071]: Deploying daemon crash.compute-0 on compute-0
Dec 13 03:44:14 compute-0 systemd[1]: Started libpod-conmon-9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1.scope.
Dec 13 03:44:14 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:44:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfe4bab1b6030690d23b1e763813c668632c9588791b52544bf2920e851d339/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfe4bab1b6030690d23b1e763813c668632c9588791b52544bf2920e851d339/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fcfe4bab1b6030690d23b1e763813c668632c9588791b52544bf2920e851d339/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:14 compute-0 podman[80097]: 2025-12-13 03:44:14.875637012 +0000 UTC m=+0.196821046 container init 9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1 (image=quay.io/ceph/ceph:v20, name=hardcore_heyrovsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:44:14 compute-0 podman[80097]: 2025-12-13 03:44:14.882243469 +0000 UTC m=+0.203427503 container start 9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1 (image=quay.io/ceph/ceph:v20, name=hardcore_heyrovsky, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:44:14 compute-0 podman[80097]: 2025-12-13 03:44:14.88610337 +0000 UTC m=+0.207287404 container attach 9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1 (image=quay.io/ceph/ceph:v20, name=hardcore_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:15 compute-0 podman[80172]: 2025-12-13 03:44:15.04334064 +0000 UTC m=+0.044581879 container create 6b718116b43afc23c8b7a3fedd11dc28c2c13aa6abf1bedffb17349eecc2301c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39425d5ce65e8514294d1e6acd33fa4fc1fedd9f4983ca6367c588e374a8ef75/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39425d5ce65e8514294d1e6acd33fa4fc1fedd9f4983ca6367c588e374a8ef75/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39425d5ce65e8514294d1e6acd33fa4fc1fedd9f4983ca6367c588e374a8ef75/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39425d5ce65e8514294d1e6acd33fa4fc1fedd9f4983ca6367c588e374a8ef75/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:15 compute-0 podman[80172]: 2025-12-13 03:44:15.100911406 +0000 UTC m=+0.102152655 container init 6b718116b43afc23c8b7a3fedd11dc28c2c13aa6abf1bedffb17349eecc2301c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:15 compute-0 podman[80172]: 2025-12-13 03:44:15.105356352 +0000 UTC m=+0.106597601 container start 6b718116b43afc23c8b7a3fedd11dc28c2c13aa6abf1bedffb17349eecc2301c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:44:15 compute-0 bash[80172]: 6b718116b43afc23c8b7a3fedd11dc28c2c13aa6abf1bedffb17349eecc2301c
Dec 13 03:44:15 compute-0 podman[80172]: 2025-12-13 03:44:15.023464935 +0000 UTC m=+0.024706214 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:15 compute-0 systemd[1]: Started Ceph crash.compute-0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: INFO:ceph-crash:pinging cluster to exercise our key
Dec 13 03:44:15 compute-0 sudo[79868]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:15 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 983ca1ce-7949-455a-9139-b0429e0ff2f5 (Updating crash deployment (+1 -> 1))
Dec 13 03:44:15 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 983ca1ce-7949-455a-9139-b0429e0ff2f5 (Updating crash deployment (+1 -> 1)) in 2 seconds
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:15 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 3fe48a81-5329-41bb-bb2a-3978bb4c01a7 (Updating mgr deployment (+1 -> 2))
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ckyycl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ckyycl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ckyycl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr services"} : dispatch
Dec 13 03:44:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:15 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.ckyycl on compute-0
Dec 13 03:44:15 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.ckyycl on compute-0
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: 2025-12-13T03:44:15.258+0000 7fa9ac9dd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: 2025-12-13T03:44:15.258+0000 7fa9ac9dd640 -1 AuthRegistry(0x7fa9a4052d90) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: 2025-12-13T03:44:15.259+0000 7fa9ac9dd640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: 2025-12-13T03:44:15.259+0000 7fa9ac9dd640 -1 AuthRegistry(0x7fa9ac9dbfe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: 2025-12-13T03:44:15.266+0000 7fa9aa752640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: 2025-12-13T03:44:15.266+0000 7fa9ac9dd640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: [errno 13] RADOS permission denied (error connecting to the cluster)
Dec 13 03:44:15 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-crash-compute-0[80195]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Dec 13 03:44:15 compute-0 sudo[80202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:15 compute-0 sudo[80202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:15 compute-0 sudo[80202]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:15 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:44:15 compute-0 hardcore_heyrovsky[80114]: 
Dec 13 03:44:15 compute-0 hardcore_heyrovsky[80114]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 03:44:15 compute-0 systemd[1]: libpod-9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1.scope: Deactivated successfully.
Dec 13 03:44:15 compute-0 podman[80097]: 2025-12-13 03:44:15.330471301 +0000 UTC m=+0.651655345 container died 9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1 (image=quay.io/ceph/ceph:v20, name=hardcore_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:15 compute-0 sudo[80237]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:15 compute-0 sudo[80237]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-fcfe4bab1b6030690d23b1e763813c668632c9588791b52544bf2920e851d339-merged.mount: Deactivated successfully.
Dec 13 03:44:15 compute-0 podman[80097]: 2025-12-13 03:44:15.371161399 +0000 UTC m=+0.692345433 container remove 9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1 (image=quay.io/ceph/ceph:v20, name=hardcore_heyrovsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:44:15 compute-0 systemd[1]: libpod-conmon-9211f4e8c2afc26400a1ac62c9336cd1ee016c1179d77e384312dc688ae93bc1.scope: Deactivated successfully.
Dec 13 03:44:15 compute-0 sudo[80054]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:15 compute-0 sudo[80348]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehcqyjlyvdxfwuuzyalxhvobdtenodoz ; /usr/bin/python3'
Dec 13 03:44:15 compute-0 sudo[80348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:15 compute-0 podman[80325]: 2025-12-13 03:44:15.694003186 +0000 UTC m=+0.020501933 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:15 compute-0 podman[80325]: 2025-12-13 03:44:15.803363175 +0000 UTC m=+0.129861902 container create d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:44:15 compute-0 python3[80356]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:15 compute-0 systemd[1]: Started libpod-conmon-d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd.scope.
Dec 13 03:44:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:15 compute-0 podman[80357]: 2025-12-13 03:44:15.919122575 +0000 UTC m=+0.067866710 container create e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d (image=quay.io/ceph/ceph:v20, name=angry_mccarthy, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:15 compute-0 podman[80325]: 2025-12-13 03:44:15.923219372 +0000 UTC m=+0.249718119 container init d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:15 compute-0 podman[80325]: 2025-12-13 03:44:15.932719352 +0000 UTC m=+0.259218079 container start d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:15 compute-0 podman[80325]: 2025-12-13 03:44:15.936309945 +0000 UTC m=+0.262808672 container attach d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:44:15 compute-0 nervous_northcutt[80371]: 167 167
Dec 13 03:44:15 compute-0 systemd[1]: libpod-d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd.scope: Deactivated successfully.
Dec 13 03:44:15 compute-0 podman[80325]: 2025-12-13 03:44:15.938150456 +0000 UTC m=+0.264649183 container died d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:44:15 compute-0 systemd[1]: Started libpod-conmon-e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d.scope.
Dec 13 03:44:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-5403aef95d632051664ff26733a20d0a03b63c9b95f6e8bc6f69a1c5d3e650bb-merged.mount: Deactivated successfully.
Dec 13 03:44:15 compute-0 podman[80325]: 2025-12-13 03:44:15.972634737 +0000 UTC m=+0.299133464 container remove d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:44:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:15 compute-0 podman[80357]: 2025-12-13 03:44:15.880847578 +0000 UTC m=+0.029591713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:15 compute-0 systemd[1]: libpod-conmon-d92e5b65b5b64546a3bc69e8bde2127b6bbd8609bb96784f66b57ea043f637fd.scope: Deactivated successfully.
Dec 13 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3699c01b3f3e5f5c61cb8d63a8eaa42bbe0ab0aab14679210feff241224eb60/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3699c01b3f3e5f5c61cb8d63a8eaa42bbe0ab0aab14679210feff241224eb60/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3699c01b3f3e5f5c61cb8d63a8eaa42bbe0ab0aab14679210feff241224eb60/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:15 compute-0 podman[80357]: 2025-12-13 03:44:15.991169234 +0000 UTC m=+0.139913369 container init e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d (image=quay.io/ceph/ceph:v20, name=angry_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:15 compute-0 podman[80357]: 2025-12-13 03:44:15.999684416 +0000 UTC m=+0.148428541 container start e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d (image=quay.io/ceph/ceph:v20, name=angry_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:16 compute-0 podman[80357]: 2025-12-13 03:44:16.003735051 +0000 UTC m=+0.152479176 container attach e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d (image=quay.io/ceph/ceph:v20, name=angry_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:16 compute-0 systemd[1]: Reloading.
Dec 13 03:44:16 compute-0 systemd-rc-local-generator[80424]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:16 compute-0 systemd-sysv-generator[80427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:16 compute-0 ceph-mon[75071]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.ckyycl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ckyycl", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr services"} : dispatch
Dec 13 03:44:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:16 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:16 compute-0 systemd[1]: Reloading.
Dec 13 03:44:16 compute-0 systemd-rc-local-generator[80482]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:16 compute-0 systemd-sysv-generator[80487]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Dec 13 03:44:16 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3952828346' entity='client.admin' 
Dec 13 03:44:16 compute-0 podman[80357]: 2025-12-13 03:44:16.468685359 +0000 UTC m=+0.617429504 container died e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d (image=quay.io/ceph/ceph:v20, name=angry_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:44:16 compute-0 systemd[1]: libpod-e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d.scope: Deactivated successfully.
Dec 13 03:44:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3699c01b3f3e5f5c61cb8d63a8eaa42bbe0ab0aab14679210feff241224eb60-merged.mount: Deactivated successfully.
Dec 13 03:44:16 compute-0 systemd[1]: Starting Ceph mgr.compute-0.ckyycl for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:44:16 compute-0 podman[80357]: 2025-12-13 03:44:16.578292114 +0000 UTC m=+0.727036229 container remove e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d (image=quay.io/ceph/ceph:v20, name=angry_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:16 compute-0 systemd[1]: libpod-conmon-e5373b5a750bcbb169d6a8440562bddaea14e9a15cff805d5f2d18ebf4f8479d.scope: Deactivated successfully.
Dec 13 03:44:16 compute-0 sudo[80348]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:16 compute-0 sudo[80577]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqpwhcjengptmjudejjumggulzlpkxsa ; /usr/bin/python3'
Dec 13 03:44:16 compute-0 sudo[80577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:16 compute-0 podman[80578]: 2025-12-13 03:44:16.748796692 +0000 UTC m=+0.021732619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:16 compute-0 ansible-async_wrapper.py[79236]: Done in kid B.
Dec 13 03:44:16 compute-0 podman[80578]: 2025-12-13 03:44:16.898441595 +0000 UTC m=+0.171377502 container create 6d792b8ae482b2be74d33d3c32641bed7176735cc44c52ccc0a4303147c49e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9fd06d2fc480bb740a3edd2506b828544bdc7aef4fd63a141772e8deb1ec6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9fd06d2fc480bb740a3edd2506b828544bdc7aef4fd63a141772e8deb1ec6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9fd06d2fc480bb740a3edd2506b828544bdc7aef4fd63a141772e8deb1ec6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac9fd06d2fc480bb740a3edd2506b828544bdc7aef4fd63a141772e8deb1ec6a/merged/var/lib/ceph/mgr/ceph-compute-0.ckyycl supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:16 compute-0 podman[80578]: 2025-12-13 03:44:16.957992108 +0000 UTC m=+0.230928055 container init 6d792b8ae482b2be74d33d3c32641bed7176735cc44c52ccc0a4303147c49e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:44:16 compute-0 podman[80578]: 2025-12-13 03:44:16.965355557 +0000 UTC m=+0.238291464 container start 6d792b8ae482b2be74d33d3c32641bed7176735cc44c52ccc0a4303147c49e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:44:16 compute-0 bash[80578]: 6d792b8ae482b2be74d33d3c32641bed7176735cc44c52ccc0a4303147c49e4b
Dec 13 03:44:16 compute-0 systemd[1]: Started Ceph mgr.compute-0.ckyycl for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:44:16 compute-0 python3[80585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:17 compute-0 ceph-mgr[80599]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:44:17 compute-0 ceph-mgr[80599]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Dec 13 03:44:17 compute-0 ceph-mgr[80599]: pidfile_write: ignore empty --pid-file
Dec 13 03:44:17 compute-0 sudo[80237]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 03:44:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 3fe48a81-5329-41bb-bb2a-3978bb4c01a7 (Updating mgr deployment (+1 -> 2))
Dec 13 03:44:17 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 3fe48a81-5329-41bb-bb2a-3978bb4c01a7 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Dec 13 03:44:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 03:44:17 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'alerts'
Dec 13 03:44:17 compute-0 podman[80600]: 2025-12-13 03:44:17.116951127 +0000 UTC m=+0.111502701 container create 8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f (image=quay.io/ceph/ceph:v20, name=hopeful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:44:17 compute-0 podman[80600]: 2025-12-13 03:44:17.031703204 +0000 UTC m=+0.026254798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'balancer'
Dec 13 03:44:17 compute-0 sudo[80634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:44:17 compute-0 sudo[80634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:17 compute-0 sudo[80634]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:17 compute-0 ceph-mon[75071]: Deploying daemon mgr.compute-0.ckyycl on compute-0
Dec 13 03:44:17 compute-0 ceph-mon[75071]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:44:17 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3952828346' entity='client.admin' 
Dec 13 03:44:17 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 systemd[1]: Started libpod-conmon-8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f.scope.
Dec 13 03:44:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:17 compute-0 sudo[80659]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26cc90308b55eb733ba2a5e613201d7efa38e37c3811748f6306574a86f21444/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26cc90308b55eb733ba2a5e613201d7efa38e37c3811748f6306574a86f21444/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26cc90308b55eb733ba2a5e613201d7efa38e37c3811748f6306574a86f21444/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:17 compute-0 sudo[80659]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:17 compute-0 sudo[80659]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:17 compute-0 ceph-mgr[75360]: [progress INFO root] Writing back 2 completed events
Dec 13 03:44:17 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'cephadm'
Dec 13 03:44:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 03:44:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:17 compute-0 podman[80600]: 2025-12-13 03:44:17.283192562 +0000 UTC m=+0.277744166 container init 8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f (image=quay.io/ceph/ceph:v20, name=hopeful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:44:17 compute-0 podman[80600]: 2025-12-13 03:44:17.291047306 +0000 UTC m=+0.285598880 container start 8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f (image=quay.io/ceph/ceph:v20, name=hopeful_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:44:17 compute-0 podman[80600]: 2025-12-13 03:44:17.298069406 +0000 UTC m=+0.292620980 container attach 8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f (image=quay.io/ceph/ceph:v20, name=hopeful_curran, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 03:44:17 compute-0 sudo[80689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:44:17 compute-0 sudo[80689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Dec 13 03:44:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/360954839' entity='client.admin' 
Dec 13 03:44:17 compute-0 systemd[1]: libpod-8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f.scope: Deactivated successfully.
Dec 13 03:44:17 compute-0 podman[80600]: 2025-12-13 03:44:17.781706814 +0000 UTC m=+0.776258398 container died 8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f (image=quay.io/ceph/ceph:v20, name=hopeful_curran, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:44:17 compute-0 podman[80777]: 2025-12-13 03:44:17.785273236 +0000 UTC m=+0.095993151 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-26cc90308b55eb733ba2a5e613201d7efa38e37c3811748f6306574a86f21444-merged.mount: Deactivated successfully.
Dec 13 03:44:17 compute-0 podman[80600]: 2025-12-13 03:44:17.831409157 +0000 UTC m=+0.825960731 container remove 8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f (image=quay.io/ceph/ceph:v20, name=hopeful_curran, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:44:17 compute-0 sudo[80577]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:17 compute-0 systemd[1]: libpod-conmon-8bb964f78b942c5d46fb3e21de2083617682f1f22965eaec89198a3fab149a7f.scope: Deactivated successfully.
Dec 13 03:44:17 compute-0 podman[80777]: 2025-12-13 03:44:17.903164057 +0000 UTC m=+0.213884112 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:44:18 compute-0 sudo[80888]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pydwagkuhplwlkzxkecbhooeowqwfdvk ; /usr/bin/python3'
Dec 13 03:44:18 compute-0 sudo[80888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:18 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'crash'
Dec 13 03:44:18 compute-0 python3[80898]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:18 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'dashboard'
Dec 13 03:44:18 compute-0 ceph-mon[75071]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:18 compute-0 sudo[80689]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/360954839' entity='client.admin' 
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:18 compute-0 podman[80942]: 2025-12-13 03:44:18.259349012 +0000 UTC m=+0.036204360 container create c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064 (image=quay.io/ceph/ceph:v20, name=optimistic_jepsen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:18 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:18 compute-0 systemd[1]: Started libpod-conmon-c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064.scope.
Dec 13 03:44:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:18 compute-0 sudo[80957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b978552fc2a2f02eda506d5fa0524d08e77ea9a0424ece5ed8ae6919f859318e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b978552fc2a2f02eda506d5fa0524d08e77ea9a0424ece5ed8ae6919f859318e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b978552fc2a2f02eda506d5fa0524d08e77ea9a0424ece5ed8ae6919f859318e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:18 compute-0 sudo[80957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:18 compute-0 sudo[80957]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:18 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 03:44:18 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:18 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 03:44:18 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 03:44:18 compute-0 podman[80942]: 2025-12-13 03:44:18.241881276 +0000 UTC m=+0.018736654 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:18 compute-0 podman[80942]: 2025-12-13 03:44:18.340496729 +0000 UTC m=+0.117352107 container init c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064 (image=quay.io/ceph/ceph:v20, name=optimistic_jepsen, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:44:18 compute-0 podman[80942]: 2025-12-13 03:44:18.347746835 +0000 UTC m=+0.124602183 container start c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064 (image=quay.io/ceph/ceph:v20, name=optimistic_jepsen, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:44:18 compute-0 podman[80942]: 2025-12-13 03:44:18.350957187 +0000 UTC m=+0.127812535 container attach c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064 (image=quay.io/ceph/ceph:v20, name=optimistic_jepsen, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:18 compute-0 sudo[80986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:18 compute-0 sudo[80986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:18 compute-0 sudo[80986]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:18 compute-0 sudo[81011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:18 compute-0 sudo[81011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Dec 13 03:44:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/237883145' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 03:44:18 compute-0 podman[81071]: 2025-12-13 03:44:18.728243931 +0000 UTC m=+0.022141100 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:19 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'devicehealth'
Dec 13 03:44:19 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'diskprediction_local'
Dec 13 03:44:19 compute-0 podman[81071]: 2025-12-13 03:44:19.273234435 +0000 UTC m=+0.567131574 container create 5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707 (image=quay.io/ceph/ceph:v20, name=cranky_galois, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:19 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl[80595]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Dec 13 03:44:19 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl[80595]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Dec 13 03:44:19 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl[80595]:   from numpy import show_config as show_numpy_config
Dec 13 03:44:19 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'influx'
Dec 13 03:44:19 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'insights'
Dec 13 03:44:19 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'iostat'
Dec 13 03:44:19 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'k8sevents'
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:19 compute-0 ceph-mon[75071]: Reconfiguring mon.compute-0 (unknown last config time)...
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: Reconfiguring daemon mon.compute-0 on compute-0
Dec 13 03:44:19 compute-0 ceph-mon[75071]: pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/237883145' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/237883145' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Dec 13 03:44:19 compute-0 optimistic_jepsen[80970]: set require_min_compat_client to mimic
Dec 13 03:44:19 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Dec 13 03:44:19 compute-0 systemd[1]: Started libpod-conmon-5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707.scope.
Dec 13 03:44:19 compute-0 systemd[1]: libpod-c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064.scope: Deactivated successfully.
Dec 13 03:44:19 compute-0 podman[80942]: 2025-12-13 03:44:19.710554436 +0000 UTC m=+1.487409784 container died c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064 (image=quay.io/ceph/ceph:v20, name=optimistic_jepsen, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:44:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-b978552fc2a2f02eda506d5fa0524d08e77ea9a0424ece5ed8ae6919f859318e-merged.mount: Deactivated successfully.
Dec 13 03:44:19 compute-0 podman[80942]: 2025-12-13 03:44:19.84401492 +0000 UTC m=+1.620870308 container remove c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064 (image=quay.io/ceph/ceph:v20, name=optimistic_jepsen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:44:19 compute-0 podman[81071]: 2025-12-13 03:44:19.852356287 +0000 UTC m=+1.146253466 container init 5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707 (image=quay.io/ceph/ceph:v20, name=cranky_galois, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:19 compute-0 podman[81071]: 2025-12-13 03:44:19.857263387 +0000 UTC m=+1.151160536 container start 5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707 (image=quay.io/ceph/ceph:v20, name=cranky_galois, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:19 compute-0 podman[81071]: 2025-12-13 03:44:19.861048505 +0000 UTC m=+1.154945654 container attach 5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707 (image=quay.io/ceph/ceph:v20, name=cranky_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:44:19 compute-0 cranky_galois[81090]: 167 167
Dec 13 03:44:19 compute-0 systemd[1]: libpod-conmon-c883fcd6b962f38be27c3bb9434630c23b57c8173cd5e0ed2352ef6f2b8e6064.scope: Deactivated successfully.
Dec 13 03:44:19 compute-0 systemd[1]: libpod-5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707.scope: Deactivated successfully.
Dec 13 03:44:19 compute-0 podman[81071]: 2025-12-13 03:44:19.867790996 +0000 UTC m=+1.161688145 container died 5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707 (image=quay.io/ceph/ceph:v20, name=cranky_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:19 compute-0 sudo[80888]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-4894bf84be69bc793883aa365b88052181e5a2a3469776c7e6fe1b85fb647ef5-merged.mount: Deactivated successfully.
Dec 13 03:44:19 compute-0 podman[81071]: 2025-12-13 03:44:19.912262411 +0000 UTC m=+1.206159550 container remove 5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707 (image=quay.io/ceph/ceph:v20, name=cranky_galois, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:44:19 compute-0 systemd[1]: libpod-conmon-5e493a69a82b96a5d83f28f96529dd8a49f9175dd15861c814a643279d0d5707.scope: Deactivated successfully.
Dec 13 03:44:19 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'localpool'
Dec 13 03:44:19 compute-0 sudo[81011]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:19 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.gsxkyu (unknown last config time)...
Dec 13 03:44:19 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.gsxkyu (unknown last config time)...
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.gsxkyu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Dec 13 03:44:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.gsxkyu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 03:44:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr services"} : dispatch
Dec 13 03:44:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:19 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.gsxkyu on compute-0
Dec 13 03:44:19 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.gsxkyu on compute-0
Dec 13 03:44:20 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'mds_autoscaler'
Dec 13 03:44:20 compute-0 sudo[81119]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:20 compute-0 sudo[81119]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:20 compute-0 sudo[81119]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:20 compute-0 sudo[81144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:20 compute-0 sudo[81144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:20 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:20 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'mirroring'
Dec 13 03:44:20 compute-0 sudo[81207]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iekmvgwuzadlyryrbiwgzqbijsduuxxj ; /usr/bin/python3'
Dec 13 03:44:20 compute-0 sudo[81207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:20 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'nfs'
Dec 13 03:44:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:20 compute-0 podman[81208]: 2025-12-13 03:44:20.366257736 +0000 UTC m=+0.031284430 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:20 compute-0 python3[81215]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:20 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'orchestrator'
Dec 13 03:44:20 compute-0 podman[81208]: 2025-12-13 03:44:20.686025116 +0000 UTC m=+0.351051810 container create d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2 (image=quay.io/ceph/ceph:v20, name=nervous_spence, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/237883145' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Dec 13 03:44:20 compute-0 ceph-mon[75071]: osdmap e3: 0 total, 0 up, 0 in
Dec 13 03:44:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:20 compute-0 ceph-mon[75071]: Reconfiguring mgr.compute-0.gsxkyu (unknown last config time)...
Dec 13 03:44:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.gsxkyu", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Dec 13 03:44:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mgr services"} : dispatch
Dec 13 03:44:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:20 compute-0 ceph-mon[75071]: Reconfiguring daemon mgr.compute-0.gsxkyu on compute-0
Dec 13 03:44:20 compute-0 systemd[1]: Started libpod-conmon-d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2.scope.
Dec 13 03:44:20 compute-0 podman[81224]: 2025-12-13 03:44:20.75614395 +0000 UTC m=+0.283658985 container create 21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089 (image=quay.io/ceph/ceph:v20, name=wizardly_kalam, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:20 compute-0 podman[81208]: 2025-12-13 03:44:20.776807417 +0000 UTC m=+0.441834131 container init d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2 (image=quay.io/ceph/ceph:v20, name=nervous_spence, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:44:20 compute-0 podman[81208]: 2025-12-13 03:44:20.787669686 +0000 UTC m=+0.452696380 container start d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2 (image=quay.io/ceph/ceph:v20, name=nervous_spence, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:20 compute-0 nervous_spence[81237]: 167 167
Dec 13 03:44:20 compute-0 podman[81208]: 2025-12-13 03:44:20.792001089 +0000 UTC m=+0.457027803 container attach d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2 (image=quay.io/ceph/ceph:v20, name=nervous_spence, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:44:20 compute-0 podman[81208]: 2025-12-13 03:44:20.792512323 +0000 UTC m=+0.457539017 container died d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2 (image=quay.io/ceph/ceph:v20, name=nervous_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:20 compute-0 systemd[1]: Started libpod-conmon-21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089.scope.
Dec 13 03:44:20 compute-0 systemd[1]: libpod-d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2.scope: Deactivated successfully.
Dec 13 03:44:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-faa074243be6e141d7379da31ef07cdfe2cfbcddfc8c8c876719bde9350d4342-merged.mount: Deactivated successfully.
Dec 13 03:44:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:20 compute-0 podman[81224]: 2025-12-13 03:44:20.729681128 +0000 UTC m=+0.257196143 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ac6c0139fdf0ef373e50941c92fe6ab954a9c45e488daf99c36e89c60e1a4e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ac6c0139fdf0ef373e50941c92fe6ab954a9c45e488daf99c36e89c60e1a4e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72ac6c0139fdf0ef373e50941c92fe6ab954a9c45e488daf99c36e89c60e1a4e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:20 compute-0 podman[81208]: 2025-12-13 03:44:20.829079023 +0000 UTC m=+0.494105717 container remove d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2 (image=quay.io/ceph/ceph:v20, name=nervous_spence, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:44:20 compute-0 podman[81224]: 2025-12-13 03:44:20.838671616 +0000 UTC m=+0.366186641 container init 21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089 (image=quay.io/ceph/ceph:v20, name=wizardly_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:20 compute-0 systemd[1]: libpod-conmon-d685e80c01aae9ba56ed3dfdab6447ac6031388db439539a9b616f81f96975b2.scope: Deactivated successfully.
Dec 13 03:44:20 compute-0 podman[81224]: 2025-12-13 03:44:20.844166243 +0000 UTC m=+0.371681248 container start 21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089 (image=quay.io/ceph/ceph:v20, name=wizardly_kalam, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:20 compute-0 podman[81224]: 2025-12-13 03:44:20.847320962 +0000 UTC m=+0.374835987 container attach 21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089 (image=quay.io/ceph/ceph:v20, name=wizardly_kalam, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:44:20 compute-0 sudo[81144]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:20 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'osd_perf_query'
Dec 13 03:44:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:20 compute-0 sudo[81263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:20 compute-0 sudo[81263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:20 compute-0 sudo[81263]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:20 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'osd_support'
Dec 13 03:44:21 compute-0 sudo[81307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:44:21 compute-0 sudo[81307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:21 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'pg_autoscaler'
Dec 13 03:44:21 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'progress'
Dec 13 03:44:21 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'prometheus'
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:21 compute-0 sudo[81361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:21 compute-0 sudo[81361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:21 compute-0 sudo[81361]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:21 compute-0 sudo[81402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Dec 13 03:44:21 compute-0 sudo[81402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:21 compute-0 podman[81407]: 2025-12-13 03:44:21.413983831 +0000 UTC m=+0.056108086 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:44:21 compute-0 podman[81407]: 2025-12-13 03:44:21.502006202 +0000 UTC m=+0.144130447 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:21 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'rbd_support'
Dec 13 03:44:21 compute-0 sudo[81402]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 03:44:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 03:44:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 03:44:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Dec 13 03:44:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: [cephadm INFO root] Added host compute-0
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Added host compute-0
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service mon spec with placement compute-0
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Dec 13 03:44:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 03:44:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Dec 13 03:44:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 03:44:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:21 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'rgw'
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Dec 13 03:44:21 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Dec 13 03:44:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Dec 13 03:44:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:21 compute-0 wizardly_kalam[81254]: Added host 'compute-0' with addr '192.168.122.100'
Dec 13 03:44:21 compute-0 wizardly_kalam[81254]: Scheduled mon update...
Dec 13 03:44:21 compute-0 wizardly_kalam[81254]: Scheduled mgr update...
Dec 13 03:44:21 compute-0 wizardly_kalam[81254]: Scheduled osd.default_drive_group update...
Dec 13 03:44:21 compute-0 podman[81224]: 2025-12-13 03:44:21.780088401 +0000 UTC m=+1.307603406 container died 21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089 (image=quay.io/ceph/ceph:v20, name=wizardly_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True)
Dec 13 03:44:21 compute-0 systemd[1]: libpod-21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089.scope: Deactivated successfully.
Dec 13 03:44:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-72ac6c0139fdf0ef373e50941c92fe6ab954a9c45e488daf99c36e89c60e1a4e-merged.mount: Deactivated successfully.
Dec 13 03:44:21 compute-0 podman[81224]: 2025-12-13 03:44:21.830425089 +0000 UTC m=+1.357940094 container remove 21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089 (image=quay.io/ceph/ceph:v20, name=wizardly_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:44:21 compute-0 systemd[1]: libpod-conmon-21cf5de76054b8f9e77ee8f4de618d236673a39ec9879bc107b3748c4be0f089.scope: Deactivated successfully.
Dec 13 03:44:21 compute-0 sudo[81207]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:21 compute-0 sudo[81307]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:21 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'rook'
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:22 compute-0 sudo[81594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlkciivxdmjbyrttkzdiysiswfjilujf ; /usr/bin/python3'
Dec 13 03:44:22 compute-0 sudo[81594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:22 compute-0 python3[81596]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:22 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:22 compute-0 ceph-mon[75071]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 podman[81598]: 2025-12-13 03:44:22.289218719 +0000 UTC m=+0.048578361 container create ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:22 compute-0 systemd[1]: Started libpod-conmon-ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3.scope.
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29405e22e03bb887c168f629ee43fa0ea98abdfc676c93a87475bd77f38dc5a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29405e22e03bb887c168f629ee43fa0ea98abdfc676c93a87475bd77f38dc5a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29405e22e03bb887c168f629ee43fa0ea98abdfc676c93a87475bd77f38dc5a6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:22 compute-0 podman[81598]: 2025-12-13 03:44:22.261526051 +0000 UTC m=+0.020885703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:22 compute-0 podman[81598]: 2025-12-13 03:44:22.428759989 +0000 UTC m=+0.188119661 container init ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:44:22 compute-0 podman[81598]: 2025-12-13 03:44:22.437782966 +0000 UTC m=+0.197142608 container start ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 podman[81598]: 2025-12-13 03:44:22.441135829 +0000 UTC m=+0.200495491 container attach ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:22 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 3469d268-deb2-4f04-9261-a46807d9f6cd (Updating mgr deployment (-1 -> 1))
Dec 13 03:44:22 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.ckyycl from compute-0 -- ports [8765]
Dec 13 03:44:22 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.ckyycl from compute-0 -- ports [8765]
Dec 13 03:44:22 compute-0 sudo[81618]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:22 compute-0 sudo[81618]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:22 compute-0 sudo[81618]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:22 compute-0 sudo[81643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --name mgr.compute-0.ckyycl --force --tcp-ports 8765
Dec 13 03:44:22 compute-0 sudo[81643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:22 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'selftest'
Dec 13 03:44:22 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'smb'
Dec 13 03:44:22 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.ckyycl for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:44:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 03:44:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4201542295' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:44:22 compute-0 fervent_chaplygin[81614]: 
Dec 13 03:44:22 compute-0 fervent_chaplygin[81614]: {"fsid":"437a9f04-06b7-56e3-8a4b-f52a1199dd32","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":67,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2025-12-13T03:43:11:652968+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-12-13T03:43:11.655212+0000","services":{}},"progress_events":{}}
Dec 13 03:44:22 compute-0 systemd[1]: libpod-ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3.scope: Deactivated successfully.
Dec 13 03:44:22 compute-0 podman[81598]: 2025-12-13 03:44:22.943308896 +0000 UTC m=+0.702668538 container died ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec 13 03:44:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-29405e22e03bb887c168f629ee43fa0ea98abdfc676c93a87475bd77f38dc5a6-merged.mount: Deactivated successfully.
Dec 13 03:44:22 compute-0 podman[81598]: 2025-12-13 03:44:22.98507536 +0000 UTC m=+0.744435002 container remove ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3 (image=quay.io/ceph/ceph:v20, name=fervent_chaplygin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:44:23 compute-0 systemd[1]: libpod-conmon-ee3894dc4874aa0d49168adab59c86d0bae337fce7ac4afc5bd71169489c1dd3.scope: Deactivated successfully.
Dec 13 03:44:23 compute-0 sudo[81594]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:23 compute-0 ceph-mgr[80599]: mgr[py] Loading python module 'snap_schedule'
Dec 13 03:44:23 compute-0 podman[81731]: 2025-12-13 03:44:23.065841701 +0000 UTC m=+0.136407725 container died 6d792b8ae482b2be74d33d3c32641bed7176735cc44c52ccc0a4303147c49e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:44:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-ac9fd06d2fc480bb740a3edd2506b828544bdc7aef4fd63a141772e8deb1ec6a-merged.mount: Deactivated successfully.
Dec 13 03:44:23 compute-0 podman[81731]: 2025-12-13 03:44:23.110492794 +0000 UTC m=+0.181058818 container remove 6d792b8ae482b2be74d33d3c32641bed7176735cc44c52ccc0a4303147c49e4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:44:23 compute-0 bash[81731]: ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-ckyycl
Dec 13 03:44:23 compute-0 systemd[1]: ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@mgr.compute-0.ckyycl.service: Main process exited, code=exited, status=143/n/a
Dec 13 03:44:23 compute-0 systemd[1]: ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@mgr.compute-0.ckyycl.service: Failed with result 'exit-code'.
Dec 13 03:44:23 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.ckyycl for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:44:23 compute-0 systemd[1]: ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@mgr.compute-0.ckyycl.service: Consumed 6.842s CPU time, 386.2M memory peak, read 0B from disk, written 172.5K to disk.
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:44:23 compute-0 ceph-mon[75071]: Added host compute-0
Dec 13 03:44:23 compute-0 ceph-mon[75071]: Saving service mon spec with placement compute-0
Dec 13 03:44:23 compute-0 ceph-mon[75071]: Saving service mgr spec with placement compute-0
Dec 13 03:44:23 compute-0 ceph-mon[75071]: Marking host: compute-0 for OSDSpec preview refresh.
Dec 13 03:44:23 compute-0 ceph-mon[75071]: Saving service osd.default_drive_group spec with placement compute-0
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mon[75071]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mon[75071]: Removing daemon mgr.compute-0.ckyycl from compute-0 -- ports [8765]
Dec 13 03:44:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4201542295' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:44:23 compute-0 systemd[1]: Reloading.
Dec 13 03:44:23 compute-0 systemd-rc-local-generator[81830]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:23 compute-0 systemd-sysv-generator[81834]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:23 compute-0 sudo[81643]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:23 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.ckyycl
Dec 13 03:44:23 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.ckyycl
Dec 13 03:44:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.ckyycl"} v 0)
Dec 13 03:44:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ckyycl"} : dispatch
Dec 13 03:44:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ckyycl"}]': finished
Dec 13 03:44:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 03:44:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 3469d268-deb2-4f04-9261-a46807d9f6cd (Updating mgr deployment (-1 -> 1))
Dec 13 03:44:23 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 3469d268-deb2-4f04-9261-a46807d9f6cd (Updating mgr deployment (-1 -> 1)) in 1 seconds
Dec 13 03:44:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Dec 13 03:44:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:44:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:44:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:44:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:44:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:23 compute-0 sudo[81846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:23 compute-0 sudo[81846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:23 compute-0 sudo[81846]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:23 compute-0 sudo[81871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:44:23 compute-0 sudo[81871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:24 compute-0 podman[81909]: 2025-12-13 03:44:24.02682356 +0000 UTC m=+0.035298607 container create 2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_saha, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:24 compute-0 systemd[1]: Started libpod-conmon-2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0.scope.
Dec 13 03:44:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:24 compute-0 podman[81909]: 2025-12-13 03:44:24.101942136 +0000 UTC m=+0.110417183 container init 2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_saha, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:44:24 compute-0 podman[81909]: 2025-12-13 03:44:24.010799551 +0000 UTC m=+0.019274618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:24 compute-0 podman[81909]: 2025-12-13 03:44:24.110900442 +0000 UTC m=+0.119375489 container start 2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_saha, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:44:24 compute-0 podman[81909]: 2025-12-13 03:44:24.114174332 +0000 UTC m=+0.122649379 container attach 2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:24 compute-0 epic_saha[81925]: 167 167
Dec 13 03:44:24 compute-0 systemd[1]: libpod-2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0.scope: Deactivated successfully.
Dec 13 03:44:24 compute-0 conmon[81925]: conmon 2be42bddcf096a3155b9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0.scope/container/memory.events
Dec 13 03:44:24 compute-0 podman[81909]: 2025-12-13 03:44:24.117382479 +0000 UTC m=+0.125857526 container died 2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:44:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-dfb01bbdfd738d5ef6d4b7b95ec35810ae5172c1510d682149fb5baa0de49bfe-merged.mount: Deactivated successfully.
Dec 13 03:44:24 compute-0 podman[81909]: 2025-12-13 03:44:24.147951606 +0000 UTC m=+0.156426653 container remove 2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_saha, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:24 compute-0 systemd[1]: libpod-conmon-2be42bddcf096a3155b9d9d4090b0b17a74ada0660b2916491173da10840e5d0.scope: Deactivated successfully.
Dec 13 03:44:24 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:24 compute-0 podman[81949]: 2025-12-13 03:44:24.29421768 +0000 UTC m=+0.044606232 container create 7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:24 compute-0 ceph-mon[75071]: Removing key for mgr.compute-0.ckyycl
Dec 13 03:44:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.ckyycl"} : dispatch
Dec 13 03:44:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.ckyycl"}]': finished
Dec 13 03:44:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:44:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:44:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:24 compute-0 systemd[1]: Started libpod-conmon-7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668.scope.
Dec 13 03:44:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff33a86e9fdc6ac67ab7cf6e11d77f6e2316dc51c02243a39e347031bb99e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff33a86e9fdc6ac67ab7cf6e11d77f6e2316dc51c02243a39e347031bb99e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff33a86e9fdc6ac67ab7cf6e11d77f6e2316dc51c02243a39e347031bb99e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff33a86e9fdc6ac67ab7cf6e11d77f6e2316dc51c02243a39e347031bb99e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3ff33a86e9fdc6ac67ab7cf6e11d77f6e2316dc51c02243a39e347031bb99e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:24 compute-0 podman[81949]: 2025-12-13 03:44:24.271249792 +0000 UTC m=+0.021638374 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:24 compute-0 podman[81949]: 2025-12-13 03:44:24.373707006 +0000 UTC m=+0.124095538 container init 7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bhaskara, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:24 compute-0 podman[81949]: 2025-12-13 03:44:24.383565237 +0000 UTC m=+0.133953769 container start 7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:44:24 compute-0 podman[81949]: 2025-12-13 03:44:24.391646358 +0000 UTC m=+0.142034910 container attach 7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:44:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4d39c086-933d-4bdc-977c-ec02bb2f333b
Dec 13 03:44:25 compute-0 ceph-mon[75071]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4d39c086-933d-4bdc-977c-ec02bb2f333b"} v 0)
Dec 13 03:44:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1936977090' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4d39c086-933d-4bdc-977c-ec02bb2f333b"} : dispatch
Dec 13 03:44:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Dec 13 03:44:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1936977090' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4d39c086-933d-4bdc-977c-ec02bb2f333b"}]': finished
Dec 13 03:44:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Dec 13 03:44:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Dec 13 03:44:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:25 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:25 compute-0 lvm[82059]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:44:25 compute-0 lvm[82059]: VG ceph_vg0 finished
Dec 13 03:44:25 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Dec 13 03:44:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 03:44:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/765625300' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 03:44:26 compute-0 quizzical_bhaskara[81966]:  stderr: got monmap epoch 1
Dec 13 03:44:26 compute-0 quizzical_bhaskara[81966]: --> Creating keyring file for osd.0
Dec 13 03:44:26 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Dec 13 03:44:26 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Dec 13 03:44:26 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid 4d39c086-933d-4bdc-977c-ec02bb2f333b --setuser ceph --setgroup ceph
Dec 13 03:44:26 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1936977090' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "4d39c086-933d-4bdc-977c-ec02bb2f333b"} : dispatch
Dec 13 03:44:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1936977090' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4d39c086-933d-4bdc-977c-ec02bb2f333b"}]': finished
Dec 13 03:44:26 compute-0 ceph-mon[75071]: osdmap e4: 1 total, 0 up, 1 in
Dec 13 03:44:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/765625300' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 03:44:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]:  stderr: 2025-12-13T03:44:26.266+0000 7f8044df78c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]:  stderr: 2025-12-13T03:44:26.293+0000 7f8044df78c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 83e0191b-e5d2-4854-84b3-247b63096122
Dec 13 03:44:27 compute-0 ceph-mgr[75360]: [progress INFO root] Writing back 3 completed events
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 03:44:27 compute-0 ceph-mon[75071]: pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e4 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "83e0191b-e5d2-4854-84b3-247b63096122"} v 0)
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1078082625' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "83e0191b-e5d2-4854-84b3-247b63096122"} : dispatch
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1078082625' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "83e0191b-e5d2-4854-84b3-247b63096122"}]': finished
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:27 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:27 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:27 compute-0 lvm[83011]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:44:27 compute-0 lvm[83011]: VG ceph_vg1 finished
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:27 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Dec 13 03:44:28 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:28 compute-0 ceph-mon[75071]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Dec 13 03:44:28 compute-0 ceph-mon[75071]: Cluster is now healthy
Dec 13 03:44:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1078082625' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "83e0191b-e5d2-4854-84b3-247b63096122"} : dispatch
Dec 13 03:44:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1078082625' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "83e0191b-e5d2-4854-84b3-247b63096122"}]': finished
Dec 13 03:44:28 compute-0 ceph-mon[75071]: osdmap e5: 2 total, 0 up, 2 in
Dec 13 03:44:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 03:44:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1358816604' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 03:44:28 compute-0 quizzical_bhaskara[81966]:  stderr: got monmap epoch 1
Dec 13 03:44:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:28 compute-0 quizzical_bhaskara[81966]: --> Creating keyring file for osd.1
Dec 13 03:44:28 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Dec 13 03:44:28 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Dec 13 03:44:28 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 83e0191b-e5d2-4854-84b3-247b63096122 --setuser ceph --setgroup ceph
Dec 13 03:44:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1358816604' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 03:44:29 compute-0 ceph-mon[75071]: pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]:  stderr: 2025-12-13T03:44:28.511+0000 7fbf220c08c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]:  stderr: 2025-12-13T03:44:28.543+0000 7fbf220c08c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:29 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f6f41095-5d06-4c49-86a2-78e3159dd7dc
Dec 13 03:44:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc"} v 0)
Dec 13 03:44:29 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/908601413' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc"} : dispatch
Dec 13 03:44:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Dec 13 03:44:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:29 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/908601413' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc"}]': finished
Dec 13 03:44:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Dec 13 03:44:29 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Dec 13 03:44:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:29 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:29 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:29 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:30 compute-0 lvm[83966]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:30 compute-0 lvm[83966]: VG ceph_vg2 finished
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Dec 13 03:44:30 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/908601413' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc"} : dispatch
Dec 13 03:44:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/908601413' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc"}]': finished
Dec 13 03:44:30 compute-0 ceph-mon[75071]: osdmap e6: 3 total, 0 up, 3 in
Dec 13 03:44:30 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:30 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:30 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Dec 13 03:44:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882378984' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]:  stderr: got monmap epoch 1
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: --> Creating keyring file for osd.2
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Dec 13 03:44:30 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid f6f41095-5d06-4c49-86a2-78e3159dd7dc --setuser ceph --setgroup ceph
Dec 13 03:44:31 compute-0 ceph-mon[75071]: pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1882378984' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]:  stderr: 2025-12-13T03:44:30.646+0000 7fe4fb9f98c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]:  stderr: 2025-12-13T03:44:30.663+0000 7fe4fb9f98c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 03:44:31 compute-0 quizzical_bhaskara[81966]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Dec 13 03:44:31 compute-0 systemd[1]: libpod-7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668.scope: Deactivated successfully.
Dec 13 03:44:31 compute-0 systemd[1]: libpod-7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668.scope: Consumed 5.711s CPU time.
Dec 13 03:44:31 compute-0 podman[84891]: 2025-12-13 03:44:31.768573188 +0000 UTC m=+0.028115391 container died 7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:44:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3ff33a86e9fdc6ac67ab7cf6e11d77f6e2316dc51c02243a39e347031bb99e7-merged.mount: Deactivated successfully.
Dec 13 03:44:31 compute-0 podman[84891]: 2025-12-13 03:44:31.811485212 +0000 UTC m=+0.071027375 container remove 7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_bhaskara, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:44:31 compute-0 systemd[1]: libpod-conmon-7fcc81857266aac4d246e815b75e1648db53b03352535cbd381ce6b3bd23a668.scope: Deactivated successfully.
Dec 13 03:44:31 compute-0 sudo[81871]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:31 compute-0 sudo[84905]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:31 compute-0 sudo[84905]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:31 compute-0 sudo[84905]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:31 compute-0 sudo[84930]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:44:31 compute-0 sudo[84930]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:32 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:32 compute-0 podman[84967]: 2025-12-13 03:44:32.314062712 +0000 UTC m=+0.058483922 container create dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:44:32 compute-0 systemd[1]: Started libpod-conmon-dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2.scope.
Dec 13 03:44:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:32 compute-0 podman[84967]: 2025-12-13 03:44:32.28876786 +0000 UTC m=+0.033189160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:32 compute-0 podman[84967]: 2025-12-13 03:44:32.389912239 +0000 UTC m=+0.134333459 container init dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:32 compute-0 podman[84967]: 2025-12-13 03:44:32.398711489 +0000 UTC m=+0.143132699 container start dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 03:44:32 compute-0 podman[84967]: 2025-12-13 03:44:32.401932577 +0000 UTC m=+0.146353807 container attach dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ishizaka, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:32 compute-0 friendly_ishizaka[84984]: 167 167
Dec 13 03:44:32 compute-0 systemd[1]: libpod-dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2.scope: Deactivated successfully.
Dec 13 03:44:32 compute-0 podman[84967]: 2025-12-13 03:44:32.404416235 +0000 UTC m=+0.148837455 container died dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ishizaka, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:44:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-90c7bec7ee7b78a9c5768a0905add37cbe2adb3a75786896313d2e45d5854ada-merged.mount: Deactivated successfully.
Dec 13 03:44:32 compute-0 podman[84967]: 2025-12-13 03:44:32.441275034 +0000 UTC m=+0.185696244 container remove dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_ishizaka, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:44:32 compute-0 systemd[1]: libpod-conmon-dd27e5c1981415f46ef26287750b480de0c252f2935f65f168c0b4be2b4e39b2.scope: Deactivated successfully.
Dec 13 03:44:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:32 compute-0 podman[85008]: 2025-12-13 03:44:32.634618368 +0000 UTC m=+0.053918837 container create 163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_boyd, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:32 compute-0 systemd[1]: Started libpod-conmon-163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f.scope.
Dec 13 03:44:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086a9283c6632c87f506ad2f92680cbf847ec6d9610f93c813c8abc3616488a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086a9283c6632c87f506ad2f92680cbf847ec6d9610f93c813c8abc3616488a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086a9283c6632c87f506ad2f92680cbf847ec6d9610f93c813c8abc3616488a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/086a9283c6632c87f506ad2f92680cbf847ec6d9610f93c813c8abc3616488a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:32 compute-0 podman[85008]: 2025-12-13 03:44:32.612529723 +0000 UTC m=+0.031830202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:32 compute-0 podman[85008]: 2025-12-13 03:44:32.714266609 +0000 UTC m=+0.133567078 container init 163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_boyd, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:44:32 compute-0 podman[85008]: 2025-12-13 03:44:32.731602863 +0000 UTC m=+0.150903312 container start 163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:32 compute-0 podman[85008]: 2025-12-13 03:44:32.734346008 +0000 UTC m=+0.153646497 container attach 163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_boyd, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:32 compute-0 laughing_boyd[85025]: {
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:     "0": [
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:         {
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "devices": [
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "/dev/loop3"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             ],
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_name": "ceph_lv0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_size": "21470642176",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "name": "ceph_lv0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "tags": {
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cluster_name": "ceph",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.crush_device_class": "",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.encrypted": "0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.objectstore": "bluestore",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osd_id": "0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.type": "block",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.vdo": "0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.with_tpm": "0"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             },
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "type": "block",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "vg_name": "ceph_vg0"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:         }
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:     ],
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:     "1": [
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:         {
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "devices": [
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "/dev/loop4"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             ],
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_name": "ceph_lv1",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_size": "21470642176",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "name": "ceph_lv1",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "tags": {
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cluster_name": "ceph",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.crush_device_class": "",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.encrypted": "0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.objectstore": "bluestore",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osd_id": "1",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.type": "block",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.vdo": "0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.with_tpm": "0"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             },
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "type": "block",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "vg_name": "ceph_vg1"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:         }
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:     ],
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:     "2": [
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:         {
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "devices": [
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "/dev/loop5"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             ],
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_name": "ceph_lv2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_size": "21470642176",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "name": "ceph_lv2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "tags": {
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.cluster_name": "ceph",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.crush_device_class": "",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.encrypted": "0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.objectstore": "bluestore",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osd_id": "2",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.type": "block",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.vdo": "0",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:                 "ceph.with_tpm": "0"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             },
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "type": "block",
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:             "vg_name": "ceph_vg2"
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:         }
Dec 13 03:44:32 compute-0 laughing_boyd[85025]:     ]
Dec 13 03:44:32 compute-0 laughing_boyd[85025]: }
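The JSON printed by the `laughing_boyd` container above is the output of `cephadm ... ceph-volume -- lvm list --format json` (invoked at 03:44:31 via sudo), interleaved with journald prefixes: a map of OSD id to its backing LVM logical volumes. A minimal sketch of recovering the OSD-to-device mapping from such a payload — the helper name `osd_device_map` is hypothetical, and the sample below is the log's own OSD 0 entry trimmed to the fields used:

```python
import json

# Sample payload reassembled from the journal lines above,
# trimmed to OSD "0" and to the fields this sketch reads.
LVM_LIST_JSON = """
{
    "0": [
        {
            "devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {
                "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
                "ceph.osd_id": "0",
                "ceph.type": "block"
            }
        }
    ]
}
"""

def osd_device_map(payload: str) -> dict:
    """Map each OSD id to its block devices, LV path, and OSD fsid.

    Only 'block' volumes are kept; a BlueStore OSD may also list
    separate 'db' or 'wal' volumes, which this sketch skips.
    """
    data = json.loads(payload)
    return {
        osd_id: {
            "devices": lv["devices"],
            "lv_path": lv["lv_path"],
            "osd_fsid": lv["tags"]["ceph.osd_fsid"],
        }
        for osd_id, lvs in data.items()
        for lv in lvs
        if lv["tags"].get("ceph.type") == "block"
    }

print(osd_device_map(LVM_LIST_JSON))
```

This is roughly what the cephadm mgr module does with the result: it learns that OSD 0 lives on `/dev/ceph_vg0/ceph_lv0` (backed by `/dev/loop3`) before issuing the `auth get` and `_orch deploy` steps that follow in the log.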
Dec 13 03:44:33 compute-0 systemd[1]: libpod-163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f.scope: Deactivated successfully.
Dec 13 03:44:33 compute-0 podman[85008]: 2025-12-13 03:44:33.021774218 +0000 UTC m=+0.441074677 container died 163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_boyd, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-086a9283c6632c87f506ad2f92680cbf847ec6d9610f93c813c8abc3616488a0-merged.mount: Deactivated successfully.
Dec 13 03:44:33 compute-0 podman[85008]: 2025-12-13 03:44:33.062976615 +0000 UTC m=+0.482277104 container remove 163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=laughing_boyd, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:44:33 compute-0 systemd[1]: libpod-conmon-163d07c5453842b11785f2b15aead8afdb35bb31a7b563a0f9d51dab4d310f7f.scope: Deactivated successfully.
Dec 13 03:44:33 compute-0 sudo[84930]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Dec 13 03:44:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 03:44:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:33 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Dec 13 03:44:33 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Dec 13 03:44:33 compute-0 sudo[85046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:33 compute-0 sudo[85046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:33 compute-0 sudo[85046]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:33 compute-0 sudo[85071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:33 compute-0 sudo[85071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:33 compute-0 ceph-mon[75071]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Dec 13 03:44:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:33 compute-0 ceph-mon[75071]: Deploying daemon osd.0 on compute-0
Dec 13 03:44:33 compute-0 podman[85135]: 2025-12-13 03:44:33.714571134 +0000 UTC m=+0.052443067 container create 53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:33 compute-0 systemd[1]: Started libpod-conmon-53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200.scope.
Dec 13 03:44:33 compute-0 podman[85135]: 2025-12-13 03:44:33.686844895 +0000 UTC m=+0.024716898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:33 compute-0 podman[85135]: 2025-12-13 03:44:33.858018871 +0000 UTC m=+0.195890834 container init 53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:44:33 compute-0 podman[85135]: 2025-12-13 03:44:33.864031956 +0000 UTC m=+0.201903889 container start 53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:33 compute-0 condescending_boyd[85151]: 167 167
Dec 13 03:44:33 compute-0 systemd[1]: libpod-53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200.scope: Deactivated successfully.
Dec 13 03:44:33 compute-0 conmon[85151]: conmon 53406a36ea624a3be794 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200.scope/container/memory.events
Dec 13 03:44:33 compute-0 podman[85135]: 2025-12-13 03:44:33.873176986 +0000 UTC m=+0.211048919 container attach 53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Dec 13 03:44:33 compute-0 podman[85135]: 2025-12-13 03:44:33.874131773 +0000 UTC m=+0.212003736 container died 53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:44:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ff20ca545c0ed777b4f2229a96888e6f0f561efc3112909ca9b858ab24cd49d-merged.mount: Deactivated successfully.
Dec 13 03:44:33 compute-0 podman[85135]: 2025-12-13 03:44:33.927123524 +0000 UTC m=+0.264995447 container remove 53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_boyd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:44:33 compute-0 systemd[1]: libpod-conmon-53406a36ea624a3be7948808098026f4eb003e1368068b765d5f6d8065b31200.scope: Deactivated successfully.
Dec 13 03:44:34 compute-0 podman[85181]: 2025-12-13 03:44:34.165440598 +0000 UTC m=+0.021841879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:34 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:34 compute-0 podman[85181]: 2025-12-13 03:44:34.58742865 +0000 UTC m=+0.443829951 container create eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:34 compute-0 systemd[1]: Started libpod-conmon-eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6.scope.
Dec 13 03:44:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/497914630e79e7f889b3f6e7c03f1d87a5f13d6d776f40e27485b3bf5bd3e6b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/497914630e79e7f889b3f6e7c03f1d87a5f13d6d776f40e27485b3bf5bd3e6b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/497914630e79e7f889b3f6e7c03f1d87a5f13d6d776f40e27485b3bf5bd3e6b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/497914630e79e7f889b3f6e7c03f1d87a5f13d6d776f40e27485b3bf5bd3e6b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/497914630e79e7f889b3f6e7c03f1d87a5f13d6d776f40e27485b3bf5bd3e6b5/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:34 compute-0 podman[85181]: 2025-12-13 03:44:34.783962091 +0000 UTC m=+0.640363372 container init eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:44:34 compute-0 podman[85181]: 2025-12-13 03:44:34.792359811 +0000 UTC m=+0.648761072 container start eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:44:34 compute-0 podman[85181]: 2025-12-13 03:44:34.799671471 +0000 UTC m=+0.656072732 container attach eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:44:34 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test[85197]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 03:44:34 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test[85197]:                             [--no-systemd] [--no-tmpfs]
Dec 13 03:44:34 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test[85197]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 03:44:34 compute-0 systemd[1]: libpod-eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6.scope: Deactivated successfully.
Dec 13 03:44:34 compute-0 podman[85181]: 2025-12-13 03:44:34.975130855 +0000 UTC m=+0.831532116 container died eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-497914630e79e7f889b3f6e7c03f1d87a5f13d6d776f40e27485b3bf5bd3e6b5-merged.mount: Deactivated successfully.
Dec 13 03:44:35 compute-0 podman[85181]: 2025-12-13 03:44:35.020785554 +0000 UTC m=+0.877186815 container remove eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:44:35 compute-0 systemd[1]: libpod-conmon-eeb7cbb8e9a0660b56f1b35a13a66734088f320ca3d04dc8edc82d869d88f0c6.scope: Deactivated successfully.
Dec 13 03:44:35 compute-0 systemd[1]: Reloading.
Dec 13 03:44:35 compute-0 systemd-sysv-generator[85263]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:35 compute-0 systemd-rc-local-generator[85259]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:35 compute-0 systemd[1]: Reloading.
Dec 13 03:44:35 compute-0 systemd-rc-local-generator[85303]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:35 compute-0 systemd-sysv-generator[85306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:35 compute-0 ceph-mon[75071]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:35 compute-0 systemd[1]: Starting Ceph osd.0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:44:35 compute-0 podman[85358]: 2025-12-13 03:44:35.96814374 +0000 UTC m=+0.037295981 container create e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99ea169c720717cd32424f36c5be02c7dcc8f7e9d68b622d401c80b0bbed592/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99ea169c720717cd32424f36c5be02c7dcc8f7e9d68b622d401c80b0bbed592/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99ea169c720717cd32424f36c5be02c7dcc8f7e9d68b622d401c80b0bbed592/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99ea169c720717cd32424f36c5be02c7dcc8f7e9d68b622d401c80b0bbed592/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99ea169c720717cd32424f36c5be02c7dcc8f7e9d68b622d401c80b0bbed592/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:36 compute-0 podman[85358]: 2025-12-13 03:44:36.03168489 +0000 UTC m=+0.100837151 container init e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:36 compute-0 podman[85358]: 2025-12-13 03:44:36.039466133 +0000 UTC m=+0.108618384 container start e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:44:36 compute-0 podman[85358]: 2025-12-13 03:44:36.042543868 +0000 UTC m=+0.111696129 container attach e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:36 compute-0 podman[85358]: 2025-12-13 03:44:35.952629576 +0000 UTC m=+0.021781827 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:36 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 bash[85358]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 bash[85358]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:36 compute-0 lvm[85458]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:44:36 compute-0 lvm[85459]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:44:36 compute-0 lvm[85458]: VG ceph_vg0 finished
Dec 13 03:44:36 compute-0 lvm[85459]: VG ceph_vg1 finished
Dec 13 03:44:36 compute-0 lvm[85461]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:36 compute-0 lvm[85461]: VG ceph_vg2 finished
Dec 13 03:44:36 compute-0 lvm[85463]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:36 compute-0 lvm[85463]: VG ceph_vg2 finished
Dec 13 03:44:36 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 03:44:36 compute-0 bash[85358]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 03:44:36 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 bash[85358]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 bash[85358]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:36 compute-0 lvm[85466]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:36 compute-0 lvm[85466]: VG ceph_vg2 finished
Dec 13 03:44:36 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 03:44:36 compute-0 bash[85358]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 03:44:36 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 03:44:36 compute-0 bash[85358]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Dec 13 03:44:37 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 bash[85358]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 bash[85358]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 03:44:37 compute-0 bash[85358]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Dec 13 03:44:37 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 03:44:37 compute-0 bash[85358]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Dec 13 03:44:37 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate[85373]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 03:44:37 compute-0 bash[85358]: --> ceph-volume lvm activate successful for osd ID: 0
Dec 13 03:44:37 compute-0 systemd[1]: libpod-e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709.scope: Deactivated successfully.
Dec 13 03:44:37 compute-0 systemd[1]: libpod-e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709.scope: Consumed 1.603s CPU time.
Dec 13 03:44:37 compute-0 podman[85575]: 2025-12-13 03:44:37.180643235 +0000 UTC m=+0.031184424 container died e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:44:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-c99ea169c720717cd32424f36c5be02c7dcc8f7e9d68b622d401c80b0bbed592-merged.mount: Deactivated successfully.
Dec 13 03:44:37 compute-0 podman[85575]: 2025-12-13 03:44:37.240087803 +0000 UTC m=+0.090629012 container remove e6ea0d5060647b939c8fa45874288f57a1620b84d4799f13505b27abdcf3e709 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:44:37 compute-0 podman[85633]: 2025-12-13 03:44:37.495439393 +0000 UTC m=+0.057597067 container create 9d036f4f40baf06791e0800130a3369e8d5ba41f31fe5ca84638627baf9be843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d7a7de3a29b164e73333527baad0ac854e48dae2291593f44d658ce97a9d74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d7a7de3a29b164e73333527baad0ac854e48dae2291593f44d658ce97a9d74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d7a7de3a29b164e73333527baad0ac854e48dae2291593f44d658ce97a9d74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d7a7de3a29b164e73333527baad0ac854e48dae2291593f44d658ce97a9d74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0d7a7de3a29b164e73333527baad0ac854e48dae2291593f44d658ce97a9d74/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:37 compute-0 podman[85633]: 2025-12-13 03:44:37.559562219 +0000 UTC m=+0.121719923 container init 9d036f4f40baf06791e0800130a3369e8d5ba41f31fe5ca84638627baf9be843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:44:37 compute-0 podman[85633]: 2025-12-13 03:44:37.474680895 +0000 UTC m=+0.036838609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:37 compute-0 podman[85633]: 2025-12-13 03:44:37.569285246 +0000 UTC m=+0.131442940 container start 9d036f4f40baf06791e0800130a3369e8d5ba41f31fe5ca84638627baf9be843 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:44:37 compute-0 bash[85633]: 9d036f4f40baf06791e0800130a3369e8d5ba41f31fe5ca84638627baf9be843
Dec 13 03:44:37 compute-0 systemd[1]: Started Ceph osd.0 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:44:37 compute-0 ceph-osd[85653]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:44:37 compute-0 ceph-osd[85653]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 03:44:37 compute-0 ceph-osd[85653]: pidfile_write: ignore empty --pid-file
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 sudo[85071]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Dec 13 03:44:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 03:44:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:37 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Dec 13 03:44:37 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 sudo[85667]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 sudo[85667]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:37 compute-0 sudo[85667]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:37 compute-0 ceph-mon[75071]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Dec 13 03:44:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24400 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff24000 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 sudo[85698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:37 compute-0 sudo[85698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:37 compute-0 ceph-osd[85653]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Dec 13 03:44:37 compute-0 ceph-osd[85653]: load: jerasure load: lrc 
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-osd[85653]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 03:44:37 compute-0 ceph-osd[85653]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1eff25c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs mount
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluefs mount shared_bdev_used = 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: Git sha 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: DB SUMMARY
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: DB Session ID:  VIZYC9ZAY2N8102NH9GE
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                                     Options.env: 0x55a1efdb5ea0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                                Options.info_log: 0x55a1f0e068a0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                    Options.write_buffer_manager: 0x55a1efe1ab40
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                               Options.row_cache: None
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                              Options.wal_filter: None
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.wal_compression: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.max_background_jobs: 4
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: Compression algorithms supported:
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kZSTD supported: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:44:37 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0b80bca5-8f1f-4571-a61f-2432793b0f25
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597478001026, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597478002855, "job": 1, "event": "recovery_finished"}
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: freelist init
Dec 13 03:44:38 compute-0 ceph-osd[85653]: freelist _read_cfg
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs umount
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) close
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bdev(0x55a1f0bbb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs mount
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluefs mount shared_bdev_used = 27262976
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Git sha 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: DB SUMMARY
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: DB Session ID:  VIZYC9ZAY2N8102NH9GF
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                                     Options.env: 0x55a1f0fd6a80
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                                Options.info_log: 0x55a1f0e06960
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.write_buffer_manager: 0x55a1efe1b900
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.row_cache: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                              Options.wal_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.wal_compression: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.max_background_jobs: 4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Compression algorithms supported:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kZSTD supported: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e06bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb98d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e070c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e070c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a1f0e070c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55a1efdb9a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0b80bca5-8f1f-4571-a61f-2432793b0f25
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597478067986, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597478073503, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597478, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0b80bca5-8f1f-4571-a61f-2432793b0f25", "db_session_id": "VIZYC9ZAY2N8102NH9GF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597478077367, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597478, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0b80bca5-8f1f-4571-a61f-2432793b0f25", "db_session_id": "VIZYC9ZAY2N8102NH9GF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597478080505, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597478, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0b80bca5-8f1f-4571-a61f-2432793b0f25", "db_session_id": "VIZYC9ZAY2N8102NH9GF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597478082338, "job": 1, "event": "recovery_finished"}
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55a1f1020000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: DB pointer 0x55a1f0fc0000
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Dec 13 03:44:38 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:44:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 03:44:38 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 03:44:38 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 03:44:38 compute-0 ceph-osd[85653]: _get_class not permitted to load lua
Dec 13 03:44:38 compute-0 ceph-osd[85653]: _get_class not permitted to load sdk
Dec 13 03:44:38 compute-0 ceph-osd[85653]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 03:44:38 compute-0 ceph-osd[85653]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 03:44:38 compute-0 ceph-osd[85653]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 03:44:38 compute-0 ceph-osd[85653]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 03:44:38 compute-0 ceph-osd[85653]: osd.0 0 load_pgs
Dec 13 03:44:38 compute-0 ceph-osd[85653]: osd.0 0 load_pgs opened 0 pgs
Dec 13 03:44:38 compute-0 ceph-osd[85653]: osd.0 0 log_to_monitors true
Dec 13 03:44:38 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0[85649]: 2025-12-13T03:44:38.108+0000 7f4c2afa08c0 -1 osd.0 0 log_to_monitors true
Dec 13 03:44:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Dec 13 03:44:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 03:44:38 compute-0 podman[86190]: 2025-12-13 03:44:38.229609554 +0000 UTC m=+0.046176166 container create a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_khorana, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:44:38 compute-0 systemd[1]: Started libpod-conmon-a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462.scope.
Dec 13 03:44:38 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:38 compute-0 podman[86190]: 2025-12-13 03:44:38.208164276 +0000 UTC m=+0.024730918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:38 compute-0 podman[86190]: 2025-12-13 03:44:38.314503028 +0000 UTC m=+0.131069650 container init a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:44:38 compute-0 podman[86190]: 2025-12-13 03:44:38.328278145 +0000 UTC m=+0.144844807 container start a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:38 compute-0 podman[86190]: 2025-12-13 03:44:38.33285996 +0000 UTC m=+0.149426602 container attach a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_khorana, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:38 compute-0 affectionate_khorana[86206]: 167 167
Dec 13 03:44:38 compute-0 systemd[1]: libpod-a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462.scope: Deactivated successfully.
Dec 13 03:44:38 compute-0 podman[86190]: 2025-12-13 03:44:38.336424258 +0000 UTC m=+0.152990900 container died a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_khorana, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-07bbd19611d3c314e7b36762260ad13367ca161851e9b5ab48fbb5e4170bc96d-merged.mount: Deactivated successfully.
Dec 13 03:44:38 compute-0 podman[86190]: 2025-12-13 03:44:38.375897709 +0000 UTC m=+0.192464331 container remove a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_khorana, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:38 compute-0 systemd[1]: libpod-conmon-a3987f8cfee7d8f25b5d016198f98c1a74f9a555e06fe13c15cfd3daddaeb462.scope: Deactivated successfully.
Dec 13 03:44:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:38 compute-0 podman[86236]: 2025-12-13 03:44:38.69046095 +0000 UTC m=+0.103428302 container create c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:38 compute-0 podman[86236]: 2025-12-13 03:44:38.613204245 +0000 UTC m=+0.026171617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Dec 13 03:44:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:38 compute-0 systemd[1]: Started libpod-conmon-c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56.scope.
Dec 13 03:44:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5777762eec1fa9c4754424cd670d26e394ae3a7d05ea57d4ceab674e86973827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5777762eec1fa9c4754424cd670d26e394ae3a7d05ea57d4ceab674e86973827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5777762eec1fa9c4754424cd670d26e394ae3a7d05ea57d4ceab674e86973827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5777762eec1fa9c4754424cd670d26e394ae3a7d05ea57d4ceab674e86973827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5777762eec1fa9c4754424cd670d26e394ae3a7d05ea57d4ceab674e86973827/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 03:44:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 03:44:39 compute-0 ceph-mon[75071]: Deploying daemon osd.1 on compute-0
Dec 13 03:44:39 compute-0 ceph-mon[75071]: from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Dec 13 03:44:39 compute-0 podman[86236]: 2025-12-13 03:44:39.397292231 +0000 UTC m=+0.810259603 container init c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 03:44:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Dec 13 03:44:39 compute-0 podman[86236]: 2025-12-13 03:44:39.404057657 +0000 UTC m=+0.817025019 container start c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Dec 13 03:44:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 03:44:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 03:44:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 03:44:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:39 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:39 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:39 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:39 compute-0 podman[86236]: 2025-12-13 03:44:39.409583938 +0000 UTC m=+0.822551310 container attach c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:44:39 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test[86252]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 03:44:39 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test[86252]:                             [--no-systemd] [--no-tmpfs]
Dec 13 03:44:39 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test[86252]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 03:44:39 compute-0 systemd[1]: libpod-c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56.scope: Deactivated successfully.
Dec 13 03:44:39 compute-0 podman[86236]: 2025-12-13 03:44:39.661087134 +0000 UTC m=+1.074054496 container died c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:44:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-5777762eec1fa9c4754424cd670d26e394ae3a7d05ea57d4ceab674e86973827-merged.mount: Deactivated successfully.
Dec 13 03:44:39 compute-0 podman[86236]: 2025-12-13 03:44:39.706685362 +0000 UTC m=+1.119652714 container remove c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:39 compute-0 systemd[1]: libpod-conmon-c7b25d819f081c3aeb44223de2e18185411212293b969a72622e1ef662252c56.scope: Deactivated successfully.
Dec 13 03:44:39 compute-0 systemd[1]: Reloading.
Dec 13 03:44:40 compute-0 systemd-rc-local-generator[86316]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:40 compute-0 systemd-sysv-generator[86320]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:40 compute-0 systemd[1]: Reloading.
Dec 13 03:44:40 compute-0 systemd-rc-local-generator[86358]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:40 compute-0 ceph-mon[75071]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:40 compute-0 ceph-mon[75071]: from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Dec 13 03:44:40 compute-0 ceph-mon[75071]: osdmap e7: 3 total, 0 up, 3 in
Dec 13 03:44:40 compute-0 ceph-mon[75071]: from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 03:44:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:40 compute-0 systemd-sysv-generator[86361]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Dec 13 03:44:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 03:44:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Dec 13 03:44:40 compute-0 ceph-osd[85653]: osd.0 0 done with init, starting boot process
Dec 13 03:44:40 compute-0 ceph-osd[85653]: osd.0 0 start_boot
Dec 13 03:44:40 compute-0 ceph-osd[85653]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 03:44:40 compute-0 ceph-osd[85653]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 03:44:40 compute-0 ceph-osd[85653]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 03:44:40 compute-0 ceph-osd[85653]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 03:44:40 compute-0 ceph-osd[85653]: osd.0 0  bench count 12288000 bsize 4 KiB
Dec 13 03:44:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Dec 13 03:44:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:44:40
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:44:40 compute-0 ceph-mgr[75360]: [balancer INFO root] No pools available
Dec 13 03:44:40 compute-0 systemd[1]: Starting Ceph osd.1 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:44:40 compute-0 podman[86409]: 2025-12-13 03:44:40.831495316 +0000 UTC m=+0.051018737 container create bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:44:40 compute-0 podman[86409]: 2025-12-13 03:44:40.802849622 +0000 UTC m=+0.022373103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6b6b11b959627742177d3c021c59c13ef0449d09944c10e9a1055100bdb799/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6b6b11b959627742177d3c021c59c13ef0449d09944c10e9a1055100bdb799/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6b6b11b959627742177d3c021c59c13ef0449d09944c10e9a1055100bdb799/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6b6b11b959627742177d3c021c59c13ef0449d09944c10e9a1055100bdb799/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e6b6b11b959627742177d3c021c59c13ef0449d09944c10e9a1055100bdb799/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:40 compute-0 podman[86409]: 2025-12-13 03:44:40.97956032 +0000 UTC m=+0.199083841 container init bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:40 compute-0 podman[86409]: 2025-12-13 03:44:40.987063466 +0000 UTC m=+0.206586897 container start bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:44:41 compute-0 podman[86409]: 2025-12-13 03:44:41.007351851 +0000 UTC m=+0.226875372 container attach bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:44:41 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:41 compute-0 bash[86409]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:41 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:41 compute-0 bash[86409]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:41 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:41 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:41 compute-0 ceph-mon[75071]: purged_snaps scrub starts
Dec 13 03:44:41 compute-0 ceph-mon[75071]: purged_snaps scrub ok
Dec 13 03:44:41 compute-0 ceph-mon[75071]: from='osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 03:44:41 compute-0 ceph-mon[75071]: osdmap e8: 3 total, 0 up, 3 in
Dec 13 03:44:41 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:41 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:41 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:41 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:41 compute-0 ceph-mon[75071]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:41 compute-0 lvm[86511]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:44:41 compute-0 lvm[86511]: VG ceph_vg1 finished
Dec 13 03:44:41 compute-0 lvm[86509]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:44:41 compute-0 lvm[86509]: VG ceph_vg0 finished
Dec 13 03:44:41 compute-0 lvm[86514]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:41 compute-0 lvm[86514]: VG ceph_vg2 finished
Dec 13 03:44:41 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 03:44:41 compute-0 bash[86409]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 03:44:41 compute-0 bash[86409]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:41 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:42 compute-0 bash[86409]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 03:44:42 compute-0 bash[86409]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 03:44:42 compute-0 bash[86409]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 bash[86409]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 bash[86409]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 03:44:42 compute-0 bash[86409]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 03:44:42 compute-0 bash[86409]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Dec 13 03:44:42 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate[86425]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 03:44:42 compute-0 bash[86409]: --> ceph-volume lvm activate successful for osd ID: 1
Dec 13 03:44:42 compute-0 systemd[1]: libpod-bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42.scope: Deactivated successfully.
Dec 13 03:44:42 compute-0 systemd[1]: libpod-bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42.scope: Consumed 1.763s CPU time.
Dec 13 03:44:42 compute-0 podman[86409]: 2025-12-13 03:44:42.201427271 +0000 UTC m=+1.420950702 container died bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e6b6b11b959627742177d3c021c59c13ef0449d09944c10e9a1055100bdb799-merged.mount: Deactivated successfully.
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:44:42 compute-0 podman[86409]: 2025-12-13 03:44:42.351359396 +0000 UTC m=+1.570882827 container remove bc4fc4795e4f23ccbcd85af6ca701c9f4776f1ed781867ee8fb79525592a0e42 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1-activate, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:42 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:42 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:42 compute-0 podman[86664]: 2025-12-13 03:44:42.556520243 +0000 UTC m=+0.045412904 container create e1d4fc0ce9900e32fce5d40df186cd929ceacc1545b002716c3a04866d7bd148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe6f3b05c3138b5d2966925a6cdb373abc912152c78b86690984601481835b4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe6f3b05c3138b5d2966925a6cdb373abc912152c78b86690984601481835b4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe6f3b05c3138b5d2966925a6cdb373abc912152c78b86690984601481835b4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe6f3b05c3138b5d2966925a6cdb373abc912152c78b86690984601481835b4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfe6f3b05c3138b5d2966925a6cdb373abc912152c78b86690984601481835b4/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:42 compute-0 podman[86664]: 2025-12-13 03:44:42.534804598 +0000 UTC m=+0.023697279 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:42 compute-0 podman[86664]: 2025-12-13 03:44:42.640298017 +0000 UTC m=+0.129190698 container init e1d4fc0ce9900e32fce5d40df186cd929ceacc1545b002716c3a04866d7bd148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:42 compute-0 podman[86664]: 2025-12-13 03:44:42.648325576 +0000 UTC m=+0.137218237 container start e1d4fc0ce9900e32fce5d40df186cd929ceacc1545b002716c3a04866d7bd148 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:42 compute-0 bash[86664]: e1d4fc0ce9900e32fce5d40df186cd929ceacc1545b002716c3a04866d7bd148
Dec 13 03:44:42 compute-0 systemd[1]: Started Ceph osd.1 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:44:42 compute-0 ceph-osd[86683]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:44:42 compute-0 ceph-osd[86683]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 03:44:42 compute-0 ceph-osd[86683]: pidfile_write: ignore empty --pid-file
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 sudo[85698]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Dec 13 03:44:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 03:44:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Dec 13 03:44:42 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a400 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 sudo[86702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:42 compute-0 sudo[86702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3a000 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 sudo[86702]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:42 compute-0 ceph-osd[86683]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Dec 13 03:44:42 compute-0 ceph-osd[86683]: load: jerasure load: lrc 
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 sudo[86733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:44:42 compute-0 sudo[86733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 ceph-osd[86683]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 03:44:42 compute-0 ceph-osd[86683]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:42 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d3d3bc00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount shared_bdev_used = 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Git sha 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: DB SUMMARY
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: DB Session ID:  O1RFEJZH751A109J470L
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                     Options.env: 0x5637d3bcbea0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                Options.info_log: 0x5637d4c1c8a0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.write_buffer_manager: 0x5637d3c30b40
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.row_cache: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                              Options.wal_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.wal_compression: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_background_jobs: 4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Compression algorithms supported:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kZSTD supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cc80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 65e31db3-e8d2-4bcc-8d2f-31729c21f0bb
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597483065850, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597483068825, "job": 1, "event": "recovery_finished"}
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: freelist init
Dec 13 03:44:43 compute-0 ceph-osd[86683]: freelist _read_cfg
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs umount
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) close
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bdev(0x5637d49d1800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluefs mount shared_bdev_used = 27262976
Dec 13 03:44:43 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Git sha 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: DB SUMMARY
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: DB Session ID:  O1RFEJZH751A109J470K
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                     Options.env: 0x5637d3bcbce0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                Options.info_log: 0x5637d4c69680
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.write_buffer_manager: 0x5637d3c30b40
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.row_cache: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                              Options.wal_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.wal_compression: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_background_jobs: 4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Compression algorithms supported:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kZSTD supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1cbc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcf8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1d0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1d0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5637d4c1d0c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5637d3bcfa30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 65e31db3-e8d2-4bcc-8d2f-31729c21f0bb
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597483121001, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 03:44:43 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 03:44:43 compute-0 podman[87190]: 2025-12-13 03:44:43.285925622 +0000 UTC m=+0.019964738 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:43 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:43 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:44 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597484414238, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597483, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65e31db3-e8d2-4bcc-8d2f-31729c21f0bb", "db_session_id": "O1RFEJZH751A109J470K", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:44 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:44 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:44 compute-0 ceph-mon[75071]: pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Dec 13 03:44:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:44 compute-0 ceph-mon[75071]: Deploying daemon osd.2 on compute-0
Dec 13 03:44:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:44 compute-0 podman[87190]: 2025-12-13 03:44:44.463376477 +0000 UTC m=+1.197415563 container create 39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597484474007, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597484, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65e31db3-e8d2-4bcc-8d2f-31729c21f0bb", "db_session_id": "O1RFEJZH751A109J470K", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597484482397, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597484, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "65e31db3-e8d2-4bcc-8d2f-31729c21f0bb", "db_session_id": "O1RFEJZH751A109J470K", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:44 compute-0 systemd[1]: Started libpod-conmon-39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7.scope.
Dec 13 03:44:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597484538403, "job": 1, "event": "recovery_finished"}
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 03:44:44 compute-0 podman[87190]: 2025-12-13 03:44:44.565633766 +0000 UTC m=+1.299672872 container init 39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:44 compute-0 podman[87190]: 2025-12-13 03:44:44.575541268 +0000 UTC m=+1.309580364 container start 39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:44:44 compute-0 frosty_franklin[87206]: 167 167
Dec 13 03:44:44 compute-0 systemd[1]: libpod-39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7.scope: Deactivated successfully.
Dec 13 03:44:44 compute-0 conmon[87206]: conmon 39875caccbf1a3919841 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7.scope/container/memory.events
Dec 13 03:44:44 compute-0 podman[87190]: 2025-12-13 03:44:44.618814282 +0000 UTC m=+1.352853398 container attach 39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:44 compute-0 podman[87190]: 2025-12-13 03:44:44.620800127 +0000 UTC m=+1.354839213 container died 39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:44:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-97b895c30a6bc5dcfae448cc300bebc1ba36553b36b60920259aac42c5607c8c-merged.mount: Deactivated successfully.
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5637d4e36000
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: DB pointer 0x5637d4dd6000
Dec 13 03:44:44 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 03:44:44 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Dec 13 03:44:44 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:44:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1.6 total, 1.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 8.1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 03:44:44 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 03:44:44 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 03:44:44 compute-0 ceph-osd[86683]: _get_class not permitted to load lua
Dec 13 03:44:44 compute-0 ceph-osd[86683]: _get_class not permitted to load sdk
Dec 13 03:44:44 compute-0 ceph-osd[86683]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 03:44:44 compute-0 ceph-osd[86683]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 03:44:44 compute-0 ceph-osd[86683]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 03:44:44 compute-0 ceph-osd[86683]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 03:44:44 compute-0 ceph-osd[86683]: osd.1 0 load_pgs
Dec 13 03:44:44 compute-0 ceph-osd[86683]: osd.1 0 load_pgs opened 0 pgs
Dec 13 03:44:44 compute-0 ceph-osd[86683]: osd.1 0 log_to_monitors true
Dec 13 03:44:44 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1[86679]: 2025-12-13T03:44:44.699+0000 7f768994c8c0 -1 osd.1 0 log_to_monitors true
Dec 13 03:44:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Dec 13 03:44:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 03:44:44 compute-0 podman[87190]: 2025-12-13 03:44:44.723563901 +0000 UTC m=+1.457602987 container remove 39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:44:44 compute-0 systemd[1]: libpod-conmon-39875caccbf1a391984166f7826267a9df796f8f1bf1fd9f81fdee3ec971bfa7.scope: Deactivated successfully.
Dec 13 03:44:45 compute-0 podman[87269]: 2025-12-13 03:44:45.013141528 +0000 UTC m=+0.061323670 container create 891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:44:45 compute-0 podman[87269]: 2025-12-13 03:44:44.976802673 +0000 UTC m=+0.024984845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:45 compute-0 systemd[1]: Started libpod-conmon-891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd.scope.
Dec 13 03:44:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be12bc04c08a3caf3eccbf6045bdc06970ecc86b38da0c9bb28ac146d48c563c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be12bc04c08a3caf3eccbf6045bdc06970ecc86b38da0c9bb28ac146d48c563c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be12bc04c08a3caf3eccbf6045bdc06970ecc86b38da0c9bb28ac146d48c563c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be12bc04c08a3caf3eccbf6045bdc06970ecc86b38da0c9bb28ac146d48c563c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be12bc04c08a3caf3eccbf6045bdc06970ecc86b38da0c9bb28ac146d48c563c/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:45 compute-0 podman[87269]: 2025-12-13 03:44:45.142416487 +0000 UTC m=+0.190598649 container init 891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:45 compute-0 podman[87269]: 2025-12-13 03:44:45.157355137 +0000 UTC m=+0.205537279 container start 891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:44:45 compute-0 podman[87269]: 2025-12-13 03:44:45.162459776 +0000 UTC m=+0.210641928 container attach 891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:44:45 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test[87285]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Dec 13 03:44:45 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test[87285]:                             [--no-systemd] [--no-tmpfs]
Dec 13 03:44:45 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test[87285]: ceph-volume activate: error: unrecognized arguments: --bad-option
Dec 13 03:44:45 compute-0 systemd[1]: libpod-891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd.scope: Deactivated successfully.
Dec 13 03:44:45 compute-0 podman[87269]: 2025-12-13 03:44:45.373435892 +0000 UTC m=+0.421618034 container died 891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:44:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-be12bc04c08a3caf3eccbf6045bdc06970ecc86b38da0c9bb28ac146d48c563c-merged.mount: Deactivated successfully.
Dec 13 03:44:45 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:45 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:45 compute-0 ceph-mon[75071]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:45 compute-0 ceph-mon[75071]: from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Dec 13 03:44:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:45 compute-0 podman[87269]: 2025-12-13 03:44:45.439631324 +0000 UTC m=+0.487813466 container remove 891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:45 compute-0 systemd[1]: libpod-conmon-891ecd2bb98cfff1261633d8c841e3489e1d0dee0a8bd6ed78b9ab089c909bdd.scope: Deactivated successfully.
Dec 13 03:44:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Dec 13 03:44:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:45 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 03:44:45 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 03:44:46 compute-0 ceph-mgr[75360]: [devicehealth WARNING root] not enough osds to create mgr pool
Dec 13 03:44:46 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:46 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 03:44:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 8.302 iops: 2125.415 elapsed_sec: 1.411
Dec 13 03:44:46 compute-0 ceph-osd[85653]: log_channel(cluster) log [WRN] : OSD bench result of 2125.415086 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 0 waiting for initial osdmap
Dec 13 03:44:46 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0[85649]: 2025-12-13T03:44:46.528+0000 7f4c26f22640 -1 osd.0 0 waiting for initial osdmap
Dec 13 03:44:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Dec 13 03:44:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 03:44:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 03:44:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 03:44:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:46 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:46 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:46 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 9 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 03:44:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:46 compute-0 ceph-mon[75071]: from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Dec 13 03:44:46 compute-0 ceph-mon[75071]: osdmap e9: 3 total, 0 up, 3 in
Dec 13 03:44:46 compute-0 ceph-mon[75071]: from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 03:44:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 9 set_numa_affinity not setting numa affinity
Dec 13 03:44:46 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-0[85649]: 2025-12-13T03:44:46.563+0000 7f4c21d27640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 03:44:46 compute-0 ceph-osd[85653]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Dec 13 03:44:46 compute-0 systemd[1]: Reloading.
Dec 13 03:44:46 compute-0 systemd-rc-local-generator[87349]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:46 compute-0 systemd-sysv-generator[87353]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:46 compute-0 systemd[1]: Reloading.
Dec 13 03:44:46 compute-0 systemd-sysv-generator[87395]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:44:46 compute-0 systemd-rc-local-generator[87391]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:44:47 compute-0 systemd[1]: Starting Ceph osd.2 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:44:47 compute-0 podman[87447]: 2025-12-13 03:44:47.414297345 +0000 UTC m=+0.036178091 container create a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:44:47 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/3733804297; not ready for session (expect reconnect)
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:47 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Dec 13 03:44:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945f275dc60089a03b50b3298c2df82b10837217caa0707a49b7ef00e44d1b56/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945f275dc60089a03b50b3298c2df82b10837217caa0707a49b7ef00e44d1b56/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945f275dc60089a03b50b3298c2df82b10837217caa0707a49b7ef00e44d1b56/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945f275dc60089a03b50b3298c2df82b10837217caa0707a49b7ef00e44d1b56/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/945f275dc60089a03b50b3298c2df82b10837217caa0707a49b7ef00e44d1b56/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:47 compute-0 podman[87447]: 2025-12-13 03:44:47.479655534 +0000 UTC m=+0.101536300 container init a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:47 compute-0 podman[87447]: 2025-12-13 03:44:47.488663372 +0000 UTC m=+0.110544118 container start a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:47 compute-0 podman[87447]: 2025-12-13 03:44:47.491353846 +0000 UTC m=+0.113234612 container attach a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:44:47 compute-0 podman[87447]: 2025-12-13 03:44:47.399165842 +0000 UTC m=+0.021046608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:47 compute-0 ceph-osd[85653]: osd.0 9 tick checking mon for new map
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Dec 13 03:44:47 compute-0 ceph-osd[86683]: osd.1 0 done with init, starting boot process
Dec 13 03:44:47 compute-0 ceph-osd[86683]: osd.1 0 start_boot
Dec 13 03:44:47 compute-0 ceph-osd[86683]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 03:44:47 compute-0 ceph-osd[86683]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 03:44:47 compute-0 ceph-osd[86683]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 03:44:47 compute-0 ceph-osd[86683]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 03:44:47 compute-0 ceph-osd[86683]: osd.1 0  bench count 12288000 bsize 4 KiB
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297] boot
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Dec 13 03:44:47 compute-0 ceph-osd[85653]: osd.0 10 state: booting -> active
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:47 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:47 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:47 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1295959294; not ready for session (expect reconnect)
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:47 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:47 compute-0 ceph-mon[75071]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Dec 13 03:44:47 compute-0 ceph-mon[75071]: OSD bench result of 2125.415086 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 03:44:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:47 compute-0 ceph-mon[75071]: from='osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 03:44:47 compute-0 ceph-mon[75071]: osd.0 [v2:192.168.122.100:6802/3733804297,v1:192.168.122.100:6803/3733804297] boot
Dec 13 03:44:47 compute-0 ceph-mon[75071]: osdmap e10: 3 total, 1 up, 3 in
Dec 13 03:44:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Dec 13 03:44:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:47 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:47 compute-0 bash[87447]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:47 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:47 compute-0 bash[87447]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:48 compute-0 lvm[87547]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:44:48 compute-0 lvm[87548]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:44:48 compute-0 lvm[87547]: VG ceph_vg0 finished
Dec 13 03:44:48 compute-0 lvm[87548]: VG ceph_vg1 finished
Dec 13 03:44:48 compute-0 lvm[87550]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:48 compute-0 lvm[87550]: VG ceph_vg2 finished
Dec 13 03:44:48 compute-0 ceph-mgr[75360]: [devicehealth INFO root] creating mgr pool
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Dec 13 03:44:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:48 compute-0 bash[87447]: --> Failed to activate via raw: did not find any matching OSD to activate
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/ceph-authtool --gen-print-key
Dec 13 03:44:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:48 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1295959294; not ready for session (expect reconnect)
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:48 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 03:44:48 compute-0 bash[87447]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Dec 13 03:44:48 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate[87462]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 03:44:48 compute-0 bash[87447]: --> ceph-volume lvm activate successful for osd ID: 2
Dec 13 03:44:48 compute-0 systemd[1]: libpod-a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89.scope: Deactivated successfully.
Dec 13 03:44:48 compute-0 podman[87447]: 2025-12-13 03:44:48.592674946 +0000 UTC m=+1.214555692 container died a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:48 compute-0 systemd[1]: libpod-a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89.scope: Consumed 1.599s CPU time.
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Dec 13 03:44:48 compute-0 ceph-mon[75071]: purged_snaps scrub starts
Dec 13 03:44:48 compute-0 ceph-mon[75071]: purged_snaps scrub ok
Dec 13 03:44:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Dec 13 03:44:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Dec 13 03:44:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-945f275dc60089a03b50b3298c2df82b10837217caa0707a49b7ef00e44d1b56-merged.mount: Deactivated successfully.
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e11 crush map has features 3314933000852226048, adjusting msgr requires
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e11 crush map has features 288514051259236352, adjusting msgr requires
Dec 13 03:44:48 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:48 compute-0 ceph-osd[85653]: osd.0 11 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 03:44:48 compute-0 ceph-osd[85653]: osd.0 11 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Dec 13 03:44:48 compute-0 ceph-osd[85653]: osd.0 11 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 03:44:48 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:48 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Dec 13 03:44:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 03:44:48 compute-0 podman[87447]: 2025-12-13 03:44:48.721522034 +0000 UTC m=+1.343402810 container remove a910991c985ad8ad6cfe7e03acc0e6751cd43a49b0a8d9cf970c8011816c4c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:44:48 compute-0 podman[87712]: 2025-12-13 03:44:48.983150466 +0000 UTC m=+0.054574324 container create 404dfe1b382dad18b335f93a4abbc773a40d443870c332c04b92fa9be2646dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:44:49 compute-0 podman[87712]: 2025-12-13 03:44:48.952335263 +0000 UTC m=+0.023759151 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef272b34ce29fd2a8aacd1f00df5c2ff2447b17baadea8732207fe873aca431/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef272b34ce29fd2a8aacd1f00df5c2ff2447b17baadea8732207fe873aca431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef272b34ce29fd2a8aacd1f00df5c2ff2447b17baadea8732207fe873aca431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef272b34ce29fd2a8aacd1f00df5c2ff2447b17baadea8732207fe873aca431/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef272b34ce29fd2a8aacd1f00df5c2ff2447b17baadea8732207fe873aca431/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:49 compute-0 podman[87712]: 2025-12-13 03:44:49.105833115 +0000 UTC m=+0.177257023 container init 404dfe1b382dad18b335f93a4abbc773a40d443870c332c04b92fa9be2646dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:44:49 compute-0 podman[87712]: 2025-12-13 03:44:49.114218835 +0000 UTC m=+0.185642713 container start 404dfe1b382dad18b335f93a4abbc773a40d443870c332c04b92fa9be2646dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: pidfile_write: ignore empty --pid-file
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc400 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cc000 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Dec 13 03:44:49 compute-0 ceph-osd[87731]: load: jerasure load: lrc 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Dec 13 03:44:49 compute-0 ceph-osd[87731]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e2641cdc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount shared_bdev_used = 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Git sha 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: DB SUMMARY
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: DB Session ID:  4AWF4CWYMICYZDXJJVO0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                     Options.env: 0x55e26405dea0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                Options.info_log: 0x55e2650b88a0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.write_buffer_manager: 0x55e264f5eb40
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.row_cache: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                              Options.wal_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.wal_compression: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_background_jobs: 4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Compression algorithms supported:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kZSTD supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e264061a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e264061a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1295959294; not ready for session (expect reconnect)
Dec 13 03:44:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:49 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e264061a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 21f1678e-66d0-4d3f-a7e5-6ec78c22e885
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597489547937, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597489550856, "job": 1, "event": "recovery_finished"}
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: freelist init
Dec 13 03:44:49 compute-0 ceph-osd[87731]: freelist _read_cfg
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs umount
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) close
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bdev(0x55e264e6d800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluefs mount shared_bdev_used = 27262976
Dec 13 03:44:49 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: RocksDB version: 7.9.2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Git sha 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Compile date 2025-10-30 15:42:43
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: DB SUMMARY
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: DB Session ID:  4AWF4CWYMICYZDXJJVO1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: CURRENT file:  CURRENT
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: IDENTITY file:  IDENTITY
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.error_if_exists: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.create_if_missing: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.paranoid_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.flush_verify_memtable_count: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                     Options.env: 0x55e26405dce0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                      Options.fs: LegacyFileSystem
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                Options.info_log: 0x55e2650b8960
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_file_opening_threads: 16
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                              Options.statistics: (nil)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.use_fsync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.max_log_file_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.log_file_time_to_roll: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.keep_log_file_num: 1000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.recycle_log_file_num: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.allow_fallocate: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.allow_mmap_reads: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.allow_mmap_writes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.use_direct_reads: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.create_missing_column_families: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                              Options.db_log_dir: 
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                                 Options.wal_dir: db.wal
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.table_cache_numshardbits: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                         Options.WAL_ttl_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.WAL_size_limit_MB: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.manifest_preallocation_size: 4194304
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                     Options.is_fd_close_on_exec: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.advise_random_on_open: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.db_write_buffer_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.write_buffer_manager: 0x55e264f5eb40
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.access_hint_on_compaction_start: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                      Options.use_adaptive_mutex: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                            Options.rate_limiter: (nil)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.wal_recovery_mode: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.enable_thread_tracking: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.enable_pipelined_write: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.unordered_write: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.write_thread_max_yield_usec: 100
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.row_cache: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                              Options.wal_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.avoid_flush_during_recovery: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.allow_ingest_behind: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.two_write_queues: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.manual_wal_flush: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.wal_compression: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.atomic_flush: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.persist_stats_to_disk: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.write_dbid_to_manifest: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.log_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.best_efforts_recovery: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.allow_data_in_errors: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.db_host_id: __hostname__
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.enforce_single_del_contracts: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_background_jobs: 4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_background_compactions: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_subcompactions: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.writable_file_max_buffer_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.delayed_write_rate : 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.max_total_wal_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.stats_dump_period_sec: 600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.stats_persist_period_sec: 600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.max_open_files: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.bytes_per_sync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                      Options.wal_bytes_per_sync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.strict_bytes_per_sync: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.compaction_readahead_size: 2097152
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.max_background_flushes: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Compression algorithms supported:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kZSTD supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kXpressCompression supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kBZip2Compression supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kZSTDNotFinalCompression supported: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kLZ4Compression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kZlibCompression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kLZ4HCCompression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         kSnappyCompression supported: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Fast CRC32 supported: Supported on x86
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: DMutex implementation: pthread_mutex_t
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b8bc0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e2640618d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e264061a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e264061a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:           Options.merge_operator: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.compaction_filter_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.sst_partitioner_factory: None
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.memtable_factory: SkipListFactory
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.table_factory: BlockBasedTable
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55e2650b90c0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55e264061a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.write_buffer_size: 16777216
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.max_write_buffer_number: 64
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.compression: LZ4
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression: Disabled
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.num_levels: 7
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:            Options.compression_opts.window_bits: -14
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.level: 32767
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.compression_opts.strategy: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.parallel_threads: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                  Options.compression_opts.enabled: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:              Options.level0_stop_writes_trigger: 36
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.target_file_size_base: 67108864
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:             Options.target_file_size_multiplier: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.arena_block_size: 1048576
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.disable_auto_compactions: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.inplace_update_support: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                 Options.inplace_update_num_locks: 10000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:               Options.memtable_whole_key_filtering: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:   Options.memtable_huge_page_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.bloom_locality: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                    Options.max_successive_merges: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.optimize_filters_for_hits: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.paranoid_file_checks: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.force_consistency_checks: 1
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.report_bg_io_stats: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                               Options.ttl: 2592000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.periodic_compaction_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:    Options.preserve_internal_time_seconds: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                       Options.enable_blob_files: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                           Options.min_blob_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                          Options.blob_file_size: 268435456
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                   Options.blob_compression_type: NoCompression
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.enable_blob_garbage_collection: false
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:          Options.blob_compaction_readahead_size: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb:                Options.blob_file_starting_level: 0
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 21f1678e-66d0-4d3f-a7e5-6ec78c22e885
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597489594947, "job": 1, "event": "recovery_started", "wal_files": [31]}
Dec 13 03:44:49 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Dec 13 03:44:50 compute-0 bash[87712]: 404dfe1b382dad18b335f93a4abbc773a40d443870c332c04b92fa9be2646dad
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597490025178, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597489, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "21f1678e-66d0-4d3f-a7e5-6ec78c22e885", "db_session_id": "4AWF4CWYMICYZDXJJVO1", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:50 compute-0 systemd[1]: Started Ceph osd.2 for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:44:50 compute-0 ceph-mon[75071]: pgmap v34: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 03:44:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Dec 13 03:44:50 compute-0 ceph-mon[75071]: osdmap e11: 3 total, 1 up, 3 in
Dec 13 03:44:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Dec 13 03:44:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597490054455, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597490, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "21f1678e-66d0-4d3f-a7e5-6ec78c22e885", "db_session_id": "4AWF4CWYMICYZDXJJVO1", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597490061139, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597490, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "21f1678e-66d0-4d3f-a7e5-6ec78c22e885", "db_session_id": "4AWF4CWYMICYZDXJJVO1", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597490092455, "job": 1, "event": "recovery_finished"}
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:50 compute-0 sudo[86733]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:50 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:50 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55e2650ba000
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: DB pointer 0x55e265272000
Dec 13 03:44:50 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Dec 13 03:44:50 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Dec 13 03:44:50 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:44:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.6 total, 0.6 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 03:44:50 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Dec 13 03:44:50 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Dec 13 03:44:50 compute-0 ceph-osd[87731]: _get_class not permitted to load lua
Dec 13 03:44:50 compute-0 ceph-osd[87731]: _get_class not permitted to load sdk
Dec 13 03:44:50 compute-0 ceph-osd[87731]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Dec 13 03:44:50 compute-0 ceph-osd[87731]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Dec 13 03:44:50 compute-0 ceph-osd[87731]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Dec 13 03:44:50 compute-0 ceph-osd[87731]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Dec 13 03:44:50 compute-0 ceph-osd[87731]: osd.2 0 load_pgs
Dec 13 03:44:50 compute-0 ceph-osd[87731]: osd.2 0 load_pgs opened 0 pgs
Dec 13 03:44:50 compute-0 ceph-osd[87731]: osd.2 0 log_to_monitors true
Dec 13 03:44:50 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2[87727]: 2025-12-13T03:44:50.219+0000 7ff8fe0138c0 -1 osd.2 0 log_to_monitors true
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 03:44:50 compute-0 sudo[88150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:50 compute-0 sudo[88150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:50 compute-0 sudo[88150]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:50 compute-0 sudo[88208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:44:50 compute-0 sudo[88208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 03:44:50 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1295959294; not ready for session (expect reconnect)
Dec 13 03:44:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:50 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:50 compute-0 podman[88246]: 2025-12-13 03:44:50.65076197 +0000 UTC m=+0.073130533 container create 1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:44:50 compute-0 podman[88246]: 2025-12-13 03:44:50.60471351 +0000 UTC m=+0.027082083 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:50 compute-0 systemd[1]: Started libpod-conmon-1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829.scope.
Dec 13 03:44:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:50 compute-0 podman[88246]: 2025-12-13 03:44:50.775280469 +0000 UTC m=+0.197649072 container init 1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:50 compute-0 podman[88246]: 2025-12-13 03:44:50.782291601 +0000 UTC m=+0.204660154 container start 1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:50 compute-0 sleepy_torvalds[88262]: 167 167
Dec 13 03:44:50 compute-0 systemd[1]: libpod-1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829.scope: Deactivated successfully.
Dec 13 03:44:50 compute-0 podman[88246]: 2025-12-13 03:44:50.799142312 +0000 UTC m=+0.221510865 container attach 1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_torvalds, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:44:50 compute-0 podman[88246]: 2025-12-13 03:44:50.799578575 +0000 UTC m=+0.221947128 container died 1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_torvalds, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:44:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-4651efc698738f0f540d5ed0054b4a4e869cf40d6103dfde6bb3a788e76f0458-merged.mount: Deactivated successfully.
Dec 13 03:44:50 compute-0 podman[88246]: 2025-12-13 03:44:50.919642121 +0000 UTC m=+0.342010674 container remove 1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_torvalds, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:50 compute-0 systemd[1]: libpod-conmon-1087f4fe315688eacbdf0db92724163188ee19e26f7df8c511a3e71b13078829.scope: Deactivated successfully.
Dec 13 03:44:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Dec 13 03:44:51 compute-0 ceph-mon[75071]: osdmap e12: 3 total, 1 up, 3 in
Dec 13 03:44:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:51 compute-0 ceph-mon[75071]: from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Dec 13 03:44:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:51 compute-0 podman[88288]: 2025-12-13 03:44:51.116709257 +0000 UTC m=+0.061403502 container create 722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:51 compute-0 systemd[1]: Started libpod-conmon-722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066.scope.
Dec 13 03:44:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Dec 13 03:44:51 compute-0 podman[88288]: 2025-12-13 03:44:51.07921581 +0000 UTC m=+0.023910065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 03:44:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Dec 13 03:44:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Dec 13 03:44:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861b5ad00bfc72d47f3cce13c66efedcfdcfad52604060ee55a6471599adf0e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861b5ad00bfc72d47f3cce13c66efedcfdcfad52604060ee55a6471599adf0e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861b5ad00bfc72d47f3cce13c66efedcfdcfad52604060ee55a6471599adf0e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/861b5ad00bfc72d47f3cce13c66efedcfdcfad52604060ee55a6471599adf0e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:51 compute-0 podman[88288]: 2025-12-13 03:44:51.216771147 +0000 UTC m=+0.161465392 container init 722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Dec 13 03:44:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 03:44:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Dec 13 03:44:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:51 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:51 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:51 compute-0 podman[88288]: 2025-12-13 03:44:51.224940119 +0000 UTC m=+0.169634364 container start 722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:51 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Dec 13 03:44:51 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Dec 13 03:44:51 compute-0 podman[88288]: 2025-12-13 03:44:51.248254398 +0000 UTC m=+0.192948663 container attach 722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:44:51 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1295959294; not ready for session (expect reconnect)
Dec 13 03:44:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:51 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:51 compute-0 lvm[88383]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:44:51 compute-0 lvm[88383]: VG ceph_vg1 finished
Dec 13 03:44:51 compute-0 lvm[88381]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:44:51 compute-0 lvm[88381]: VG ceph_vg0 finished
Dec 13 03:44:51 compute-0 lvm[88385]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:44:51 compute-0 lvm[88385]: VG ceph_vg2 finished
Dec 13 03:44:52 compute-0 strange_lamarr[88304]: {}
Dec 13 03:44:52 compute-0 systemd[1]: libpod-722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066.scope: Deactivated successfully.
Dec 13 03:44:52 compute-0 systemd[1]: libpod-722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066.scope: Consumed 1.339s CPU time.
Dec 13 03:44:52 compute-0 podman[88288]: 2025-12-13 03:44:52.045121684 +0000 UTC m=+0.989815919 container died 722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-861b5ad00bfc72d47f3cce13c66efedcfdcfad52604060ee55a6471599adf0e7-merged.mount: Deactivated successfully.
Dec 13 03:44:52 compute-0 podman[88288]: 2025-12-13 03:44:52.136422533 +0000 UTC m=+1.081116768 container remove 722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_lamarr, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:44:52 compute-0 systemd[1]: libpod-conmon-722f0a113495ebc6cfd60b264ed354d5ca557e629f37559179ec107cc652e066.scope: Deactivated successfully.
Dec 13 03:44:52 compute-0 sudo[88208]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e14 e14: 3 total, 1 up, 3 in
Dec 13 03:44:52 compute-0 ceph-osd[87731]: osd.2 0 done with init, starting boot process
Dec 13 03:44:52 compute-0 ceph-osd[87731]: osd.2 0 start_boot
Dec 13 03:44:52 compute-0 ceph-osd[87731]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Dec 13 03:44:52 compute-0 ceph-osd[87731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Dec 13 03:44:52 compute-0 ceph-osd[87731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Dec 13 03:44:52 compute-0 ceph-osd[87731]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Dec 13 03:44:52 compute-0 ceph-osd[87731]: osd.2 0  bench count 12288000 bsize 4 KiB
Dec 13 03:44:52 compute-0 ceph-mon[75071]: pgmap v37: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 03:44:52 compute-0 ceph-mon[75071]: from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Dec 13 03:44:52 compute-0 ceph-mon[75071]: osdmap e13: 3 total, 1 up, 3 in
Dec 13 03:44:52 compute-0 ceph-mon[75071]: from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Dec 13 03:44:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 1 up, 3 in
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:52 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:52 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:52 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/230199501; not ready for session (expect reconnect)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:52 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:52 compute-0 sudo[88401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:44:52 compute-0 sudo[88401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:52 compute-0 sudo[88401]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:52 compute-0 sudo[88426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:52 compute-0 sudo[88426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:52 compute-0 sudo[88426]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:52 compute-0 sudo[88451]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:44:52 compute-0 sudo[88451]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 22.741 iops: 5821.665 elapsed_sec: 0.515
Dec 13 03:44:52 compute-0 ceph-osd[86683]: log_channel(cluster) log [WRN] : OSD bench result of 5821.665412 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 0 waiting for initial osdmap
Dec 13 03:44:52 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1[86679]: 2025-12-13T03:44:52.431+0000 7f76858ce640 -1 osd.1 0 waiting for initial osdmap
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 14 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 14 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 14 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 14 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 14 set_numa_affinity not setting numa affinity
Dec 13 03:44:52 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-1[86679]: 2025-12-13T03:44:52.476+0000 7f76806d3640 -1 osd.1 14 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 03:44:52 compute-0 ceph-osd[86683]: osd.1 14 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Dec 13 03:44:52 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/1295959294; not ready for session (expect reconnect)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:52 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Dec 13 03:44:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:52 compute-0 podman[88519]: 2025-12-13 03:44:52.821724805 +0000 UTC m=+0.076034743 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:52 compute-0 podman[88519]: 2025-12-13 03:44:52.91652785 +0000 UTC m=+0.170837788 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:44:53 compute-0 sudo[88605]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyirnkchpgzwnfwoqgipjajpmzyurztr ; /usr/bin/python3'
Dec 13 03:44:53 compute-0 sudo[88605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:53 compute-0 python3[88616]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Dec 13 03:44:53 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/230199501; not ready for session (expect reconnect)
Dec 13 03:44:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:53 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Dec 13 03:44:53 compute-0 ceph-mon[75071]: purged_snaps scrub starts
Dec 13 03:44:53 compute-0 ceph-mon[75071]: purged_snaps scrub ok
Dec 13 03:44:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:53 compute-0 ceph-mon[75071]: from='osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Dec 13 03:44:53 compute-0 ceph-mon[75071]: osdmap e14: 3 total, 1 up, 3 in
Dec 13 03:44:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:53 compute-0 ceph-mon[75071]: pgmap v40: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Dec 13 03:44:53 compute-0 ceph-mon[75071]: OSD bench result of 5821.665412 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 03:44:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:53 compute-0 ceph-osd[86683]: osd.1 15 state: booting -> active
Dec 13 03:44:53 compute-0 podman[88654]: 2025-12-13 03:44:53.338093082 +0000 UTC m=+0.074064979 container create 1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033 (image=quay.io/ceph/ceph:v20, name=vibrant_wu, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294] boot
Dec 13 03:44:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Dec 13 03:44:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Dec 13 03:44:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:53 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[11,15)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:44:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:53 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:53 compute-0 podman[88654]: 2025-12-13 03:44:53.290481389 +0000 UTC m=+0.026453316 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:53 compute-0 systemd[1]: Started libpod-conmon-1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033.scope.
Dec 13 03:44:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350e74c6942fac49b499a50e6cdc06028fc537fb633a791c9906d1aec25557fa/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350e74c6942fac49b499a50e6cdc06028fc537fb633a791c9906d1aec25557fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/350e74c6942fac49b499a50e6cdc06028fc537fb633a791c9906d1aec25557fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:53 compute-0 podman[88654]: 2025-12-13 03:44:53.767313963 +0000 UTC m=+0.503285890 container init 1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033 (image=quay.io/ceph/ceph:v20, name=vibrant_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:53 compute-0 podman[88654]: 2025-12-13 03:44:53.774535271 +0000 UTC m=+0.510507168 container start 1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033 (image=quay.io/ceph/ceph:v20, name=vibrant_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:53 compute-0 podman[88654]: 2025-12-13 03:44:53.799804643 +0000 UTC m=+0.535776540 container attach 1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033 (image=quay.io/ceph/ceph:v20, name=vibrant_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:44:53 compute-0 sudo[88451]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:54 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:54 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:54 compute-0 sudo[88729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:54 compute-0 sudo[88729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:54 compute-0 sudo[88729]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:54 compute-0 sudo[88754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- inventory --format=json-pretty --filter-for-batch
Dec 13 03:44:54 compute-0 sudo[88754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:54 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/230199501; not ready for session (expect reconnect)
Dec 13 03:44:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:54 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Dec 13 03:44:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 03:44:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/788463479' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:44:54 compute-0 vibrant_wu[88688]: 
Dec 13 03:44:54 compute-0 vibrant_wu[88688]: {"fsid":"437a9f04-06b7-56e3-8a4b-f52a1199dd32","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":99,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":15,"num_osds":3,"num_up_osds":2,"osd_up_since":1765597493,"num_in_osds":3,"osd_in_since":1765597469,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447008768,"bytes_avail":21023633408,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2025-12-13T03:43:11:652968+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T03:44:42.424010+0000","services":{}},"progress_events":{}}
Dec 13 03:44:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Dec 13 03:44:54 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Dec 13 03:44:54 compute-0 systemd[1]: libpod-1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033.scope: Deactivated successfully.
Dec 13 03:44:54 compute-0 podman[88654]: 2025-12-13 03:44:54.333976986 +0000 UTC m=+1.069948883 container died 1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033 (image=quay.io/ceph/ceph:v20, name=vibrant_wu, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:44:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:54 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:54 compute-0 ceph-mon[75071]: osd.1 [v2:192.168.122.100:6806/1295959294,v1:192.168.122.100:6807/1295959294] boot
Dec 13 03:44:54 compute-0 ceph-mon[75071]: osdmap e15: 3 total, 2 up, 3 in
Dec 13 03:44:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Dec 13 03:44:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 16 pg[1.0( empty local-lis/les=15/16 n=0 ec=11/11 lis/c=0/0 les/c/f=0/0/0 sis=15) [1] r=0 lpr=15 pi=[11,15)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-350e74c6942fac49b499a50e6cdc06028fc537fb633a791c9906d1aec25557fa-merged.mount: Deactivated successfully.
Dec 13 03:44:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 13 03:44:54 compute-0 podman[88654]: 2025-12-13 03:44:54.47543846 +0000 UTC m=+1.211410357 container remove 1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033 (image=quay.io/ceph/ceph:v20, name=vibrant_wu, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:44:54 compute-0 sudo[88605]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:54 compute-0 systemd[1]: libpod-conmon-1d514047a3a9941985ddc985577138eb7f78a30e756f91b10b02a3112db11033.scope: Deactivated successfully.
Dec 13 03:44:54 compute-0 podman[88804]: 2025-12-13 03:44:54.571189081 +0000 UTC m=+0.058988286 container create e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pare, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:44:54 compute-0 systemd[1]: Started libpod-conmon-e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051.scope.
Dec 13 03:44:54 compute-0 podman[88804]: 2025-12-13 03:44:54.541668702 +0000 UTC m=+0.029467937 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:54 compute-0 podman[88804]: 2025-12-13 03:44:54.676687269 +0000 UTC m=+0.164486504 container init e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pare, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:54 compute-0 podman[88804]: 2025-12-13 03:44:54.684184264 +0000 UTC m=+0.171983469 container start e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:44:54 compute-0 vigorous_pare[88821]: 167 167
Dec 13 03:44:54 compute-0 systemd[1]: libpod-e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051.scope: Deactivated successfully.
Dec 13 03:44:54 compute-0 podman[88804]: 2025-12-13 03:44:54.705175949 +0000 UTC m=+0.192975174 container attach e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:54 compute-0 podman[88804]: 2025-12-13 03:44:54.705607751 +0000 UTC m=+0.193406966 container died e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pare, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-009e26d1155e96f6802cbadbceb04e148a6760c8d365c69802cbaa2b893c25dc-merged.mount: Deactivated successfully.
Dec 13 03:44:54 compute-0 sudo[88861]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqwzexznnxvrqrhoozmlonjnxlxbzriy ; /usr/bin/python3'
Dec 13 03:44:54 compute-0 sudo[88861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:54 compute-0 podman[88804]: 2025-12-13 03:44:54.81990242 +0000 UTC m=+0.307701615 container remove e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_pare, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:54 compute-0 systemd[1]: libpod-conmon-e3602c6da536bd7feb33c5886fe5d88e241f0d97a6d5860d8fca030cbdc0d051.scope: Deactivated successfully.
Dec 13 03:44:54 compute-0 python3[88863]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:55 compute-0 podman[88871]: 2025-12-13 03:44:55.040997103 +0000 UTC m=+0.108473341 container create df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:55 compute-0 podman[88871]: 2025-12-13 03:44:54.954487095 +0000 UTC m=+0.021963363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:55 compute-0 systemd[1]: Started libpod-conmon-df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa.scope.
Dec 13 03:44:55 compute-0 podman[88883]: 2025-12-13 03:44:55.141668619 +0000 UTC m=+0.155118438 container create 6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843 (image=quay.io/ceph/ceph:v20, name=intelligent_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41632e241cdc07067245b467f7a2db96026987b6cc115b99eb757de2be8cc95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41632e241cdc07067245b467f7a2db96026987b6cc115b99eb757de2be8cc95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41632e241cdc07067245b467f7a2db96026987b6cc115b99eb757de2be8cc95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41632e241cdc07067245b467f7a2db96026987b6cc115b99eb757de2be8cc95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:55 compute-0 systemd[1]: Started libpod-conmon-6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843.scope.
Dec 13 03:44:55 compute-0 podman[88871]: 2025-12-13 03:44:55.195975206 +0000 UTC m=+0.263451474 container init df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_sinoussi, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:55 compute-0 podman[88871]: 2025-12-13 03:44:55.202879884 +0000 UTC m=+0.270356122 container start df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:44:55 compute-0 podman[88883]: 2025-12-13 03:44:55.112369967 +0000 UTC m=+0.125819786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60ed88dbd432d9d7c15422848be402ecc2c6d4ea21f8676515864027923f115/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c60ed88dbd432d9d7c15422848be402ecc2c6d4ea21f8676515864027923f115/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:55 compute-0 podman[88871]: 2025-12-13 03:44:55.22938277 +0000 UTC m=+0.296859038 container attach df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_sinoussi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:44:55 compute-0 podman[88883]: 2025-12-13 03:44:55.269526669 +0000 UTC m=+0.282976478 container init 6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843 (image=quay.io/ceph/ceph:v20, name=intelligent_tharp, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:44:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:55 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/230199501; not ready for session (expect reconnect)
Dec 13 03:44:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:55 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:55 compute-0 podman[88883]: 2025-12-13 03:44:55.277277212 +0000 UTC m=+0.290727061 container start 6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843 (image=quay.io/ceph/ceph:v20, name=intelligent_tharp, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:44:55 compute-0 podman[88883]: 2025-12-13 03:44:55.293347292 +0000 UTC m=+0.306797131 container attach 6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843 (image=quay.io/ceph/ceph:v20, name=intelligent_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:44:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Dec 13 03:44:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Dec 13 03:44:55 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Dec 13 03:44:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:55 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/788463479' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:44:55 compute-0 ceph-mon[75071]: osdmap e16: 3 total, 2 up, 3 in
Dec 13 03:44:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:55 compute-0 ceph-mon[75071]: pgmap v43: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 13 03:44:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:55 compute-0 ceph-mon[75071]: osdmap e17: 3 total, 2 up, 3 in
Dec 13 03:44:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]: [
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:     {
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "available": false,
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "being_replaced": false,
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "ceph_device_lvm": false,
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "lsm_data": {},
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "lvs": [],
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "path": "/dev/sr0",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "rejected_reasons": [
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "Insufficient space (<5GB)",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "Has a FileSystem"
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         ],
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         "sys_api": {
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "actuators": null,
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "device_nodes": [
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:                 "sr0"
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             ],
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "devname": "sr0",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "human_readable_size": "482.00 KB",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "id_bus": "ata",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "model": "QEMU DVD-ROM",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "nr_requests": "2",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "parent": "/dev/sr0",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "partitions": {},
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "path": "/dev/sr0",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "removable": "1",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "rev": "2.5+",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "ro": "0",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "rotational": "1",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "sas_address": "",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "sas_device_handle": "",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "scheduler_mode": "mq-deadline",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "sectors": 0,
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "sectorsize": "2048",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "size": 493568.0,
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "support_discard": "2048",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "type": "disk",
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:             "vendor": "QEMU"
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:         }
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]:     }
Dec 13 03:44:55 compute-0 hopeful_sinoussi[88900]: ]
Dec 13 03:44:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 03:44:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2260105027' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:44:55 compute-0 systemd[1]: libpod-df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa.scope: Deactivated successfully.
Dec 13 03:44:55 compute-0 podman[88871]: 2025-12-13 03:44:55.719055407 +0000 UTC m=+0.786531645 container died df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-41632e241cdc07067245b467f7a2db96026987b6cc115b99eb757de2be8cc95b-merged.mount: Deactivated successfully.
Dec 13 03:44:55 compute-0 ceph-mgr[75360]: [devicehealth INFO root] creating main.db for devicehealth
Dec 13 03:44:56 compute-0 podman[88871]: 2025-12-13 03:44:56.019404039 +0000 UTC m=+1.086880277 container remove df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:44:56 compute-0 systemd[1]: libpod-conmon-df0a751bd7cb8b66455ca7bbce965fde72e7cc46f40235cc8acee235dcbc1dfa.scope: Deactivated successfully.
Dec 13 03:44:56 compute-0 sudo[88754]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Check health
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43686k
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43686k
Dec 13 03:44:56 compute-0 sudo[89606]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Dec 13 03:44:56 compute-0 sudo[89606]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 03:44:56 compute-0 sudo[89606]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Dec 13 03:44:56 compute-0 sudo[89606]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/230199501; not ready for session (expect reconnect)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:56 compute-0 sudo[89609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:56 compute-0 sudo[89609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:56 compute-0 sudo[89609]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:56 compute-0 sudo[89634]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:44:56 compute-0 sudo[89634]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 13 03:44:56 compute-0 podman[89672]: 2025-12-13 03:44:56.620778453 +0000 UTC m=+0.023800273 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2260105027' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: Adjusting osd_memory_target on compute-0 to 43686k
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Dec 13 03:44:57 compute-0 ceph-mon[75071]: Unable to set osd_memory_target on compute-0 to 44734464: error parsing value: Value '44734464' is below minimum 939524096
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:44:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:57 compute-0 podman[89672]: 2025-12-13 03:44:57.115819205 +0000 UTC m=+0.518840975 container create 81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_napier, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:44:57 compute-0 systemd[1]: Started libpod-conmon-81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f.scope.
Dec 13 03:44:57 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2260105027' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:44:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Dec 13 03:44:57 compute-0 intelligent_tharp[88906]: pool 'vms' created
Dec 13 03:44:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Dec 13 03:44:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:57 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:57 compute-0 podman[88883]: 2025-12-13 03:44:57.198197611 +0000 UTC m=+2.211647440 container died 6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843 (image=quay.io/ceph/ceph:v20, name=intelligent_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:44:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:57 compute-0 systemd[1]: libpod-6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843.scope: Deactivated successfully.
Dec 13 03:44:57 compute-0 podman[89672]: 2025-12-13 03:44:57.240309494 +0000 UTC m=+0.643331284 container init 81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_napier, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:44:57 compute-0 podman[89672]: 2025-12-13 03:44:57.249124285 +0000 UTC m=+0.652146055 container start 81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_napier, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:57 compute-0 vibrant_napier[89689]: 167 167
Dec 13 03:44:57 compute-0 systemd[1]: libpod-81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f.scope: Deactivated successfully.
Dec 13 03:44:57 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/230199501; not ready for session (expect reconnect)
Dec 13 03:44:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:57 compute-0 podman[89672]: 2025-12-13 03:44:57.271721354 +0000 UTC m=+0.674743124 container attach 81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_napier, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:44:57 compute-0 podman[89672]: 2025-12-13 03:44:57.272079363 +0000 UTC m=+0.675101133 container died 81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:57 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-fddd5fc5ecd76cdc566b65fecd2a7bff20fee3904a6dcf7787f3406ff3a8e17d-merged.mount: Deactivated successfully.
Dec 13 03:44:57 compute-0 podman[89672]: 2025-12-13 03:44:57.382102445 +0000 UTC m=+0.785124205 container remove 81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_napier, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:44:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-c60ed88dbd432d9d7c15422848be402ecc2c6d4ea21f8676515864027923f115-merged.mount: Deactivated successfully.
Dec 13 03:44:57 compute-0 podman[88883]: 2025-12-13 03:44:57.482687189 +0000 UTC m=+2.496137018 container remove 6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843 (image=quay.io/ceph/ceph:v20, name=intelligent_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:44:57 compute-0 systemd[1]: libpod-conmon-6cc1f9cc74c18bcf0c4f778bbf437ce798c154c1a9c2a06bcb8c20b0de60e843.scope: Deactivated successfully.
Dec 13 03:44:57 compute-0 systemd[1]: libpod-conmon-81d7a916d35eb64f345b386514e6aaae94d07837db17a50137b294f5e3f6003f.scope: Deactivated successfully.
Dec 13 03:44:57 compute-0 sudo[88861]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:57 compute-0 podman[89725]: 2025-12-13 03:44:57.554374852 +0000 UTC m=+0.061892945 container create 4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_khorana, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:44:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:44:57 compute-0 systemd[1]: Started libpod-conmon-4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0.scope.
Dec 13 03:44:57 compute-0 podman[89725]: 2025-12-13 03:44:57.521745468 +0000 UTC m=+0.029263571 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aea3c6f67e0b0e11f1a22c1d403d7f1e981d2243ae093b3830a64393c1db3cf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aea3c6f67e0b0e11f1a22c1d403d7f1e981d2243ae093b3830a64393c1db3cf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aea3c6f67e0b0e11f1a22c1d403d7f1e981d2243ae093b3830a64393c1db3cf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aea3c6f67e0b0e11f1a22c1d403d7f1e981d2243ae093b3830a64393c1db3cf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4aea3c6f67e0b0e11f1a22c1d403d7f1e981d2243ae093b3830a64393c1db3cf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:57 compute-0 sudo[89768]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugqonvqehymyeuedgfjampnogwmjtrbq ; /usr/bin/python3'
Dec 13 03:44:57 compute-0 podman[89725]: 2025-12-13 03:44:57.669695869 +0000 UTC m=+0.177213972 container init 4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_khorana, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:57 compute-0 sudo[89768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:44:57 compute-0 podman[89725]: 2025-12-13 03:44:57.679633212 +0000 UTC m=+0.187151295 container start 4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:44:57 compute-0 podman[89725]: 2025-12-13 03:44:57.685741409 +0000 UTC m=+0.193259492 container attach 4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_khorana, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:44:57 compute-0 python3[89771]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:44:57 compute-0 podman[89773]: 2025-12-13 03:44:57.921056501 +0000 UTC m=+0.067436107 container create f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340 (image=quay.io/ceph/ceph:v20, name=great_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 26.259 iops: 6722.337 elapsed_sec: 0.446
Dec 13 03:44:57 compute-0 ceph-osd[87731]: log_channel(cluster) log [WRN] : OSD bench result of 6722.337495 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 0 waiting for initial osdmap
Dec 13 03:44:57 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2[87727]: 2025-12-13T03:44:57.943+0000 7ff8f9f95640 -1 osd.2 0 waiting for initial osdmap
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 18 check_osdmap_features require_osd_release unknown -> tentacle
Dec 13 03:44:57 compute-0 systemd[1]: Started libpod-conmon-f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340.scope.
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 03:44:57 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-osd-2[87727]: 2025-12-13T03:44:57.971+0000 7ff8f4d9a640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 18 set_numa_affinity not setting numa affinity
Dec 13 03:44:57 compute-0 ceph-osd[87731]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Dec 13 03:44:57 compute-0 podman[89773]: 2025-12-13 03:44:57.885723893 +0000 UTC m=+0.032103519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:44:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcb8d396df3ca3bf3b812630047bd5f09decaaf5f17ccfe5fc6a77e77276230/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfcb8d396df3ca3bf3b812630047bd5f09decaaf5f17ccfe5fc6a77e77276230/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:58 compute-0 podman[89773]: 2025-12-13 03:44:58.02077929 +0000 UTC m=+0.167158916 container init f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340 (image=quay.io/ceph/ceph:v20, name=great_kepler, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:58 compute-0 podman[89773]: 2025-12-13 03:44:58.028277796 +0000 UTC m=+0.174657402 container start f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340 (image=quay.io/ceph/ceph:v20, name=great_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:44:58 compute-0 podman[89773]: 2025-12-13 03:44:58.031265548 +0000 UTC m=+0.177645154 container attach f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340 (image=quay.io/ceph/ceph:v20, name=great_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:44:58 compute-0 ceph-mon[75071]: pgmap v45: 1 pgs: 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 13 03:44:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2260105027' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:44:58 compute-0 ceph-mon[75071]: osdmap e18: 3 total, 2 up, 3 in
Dec 13 03:44:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:58 compute-0 nervous_khorana[89745]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:44:58 compute-0 nervous_khorana[89745]: --> All data devices are unavailable
Dec 13 03:44:58 compute-0 systemd[1]: libpod-4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0.scope: Deactivated successfully.
Dec 13 03:44:58 compute-0 podman[89725]: 2025-12-13 03:44:58.225224198 +0000 UTC m=+0.732742281 container died 4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_khorana, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:44:58 compute-0 ceph-mgr[75360]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/230199501; not ready for session (expect reconnect)
Dec 13 03:44:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v47: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 13 03:44:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Dec 13 03:44:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 03:44:58 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1767187537' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:44:58 compute-0 ceph-mgr[75360]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Dec 13 03:44:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.gsxkyu(active, since 81s)
Dec 13 03:44:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Dec 13 03:44:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-4aea3c6f67e0b0e11f1a22c1d403d7f1e981d2243ae093b3830a64393c1db3cf-merged.mount: Deactivated successfully.
Dec 13 03:44:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501] boot
Dec 13 03:44:58 compute-0 ceph-osd[87731]: osd.2 19 state: booting -> active
Dec 13 03:44:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Dec 13 03:44:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Dec 13 03:44:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:58 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:44:58 compute-0 podman[89725]: 2025-12-13 03:44:58.632994722 +0000 UTC m=+1.140512805 container remove 4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_khorana, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:44:58 compute-0 sudo[89634]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:58 compute-0 systemd[1]: libpod-conmon-4599b66708ab38124da524a4e3fd9e6da1144cc4dabb0f1325b6ff3180da47d0.scope: Deactivated successfully.
Dec 13 03:44:58 compute-0 sudo[89845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:44:58 compute-0 sudo[89845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:58 compute-0 sudo[89845]: pam_unix(sudo:session): session closed for user root
Dec 13 03:44:58 compute-0 sudo[89870]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:44:58 compute-0 sudo[89870]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:44:59 compute-0 podman[89906]: 2025-12-13 03:44:59.173799538 +0000 UTC m=+0.084744631 container create d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_allen, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:44:59 compute-0 ceph-mon[75071]: OSD bench result of 6722.337495 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Dec 13 03:44:59 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1767187537' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:44:59 compute-0 ceph-mon[75071]: mgrmap e10: compute-0.gsxkyu(active, since 81s)
Dec 13 03:44:59 compute-0 ceph-mon[75071]: osd.2 [v2:192.168.122.100:6810/230199501,v1:192.168.122.100:6811/230199501] boot
Dec 13 03:44:59 compute-0 ceph-mon[75071]: osdmap e19: 3 total, 3 up, 3 in
Dec 13 03:44:59 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Dec 13 03:44:59 compute-0 systemd[1]: Started libpod-conmon-d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d.scope.
Dec 13 03:44:59 compute-0 podman[89906]: 2025-12-13 03:44:59.117210738 +0000 UTC m=+0.028155851 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:59 compute-0 podman[89906]: 2025-12-13 03:44:59.25097391 +0000 UTC m=+0.161919013 container init d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_allen, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:44:59 compute-0 podman[89906]: 2025-12-13 03:44:59.259551206 +0000 UTC m=+0.170496299 container start d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:44:59 compute-0 podman[89906]: 2025-12-13 03:44:59.263333209 +0000 UTC m=+0.174278332 container attach d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:44:59 compute-0 youthful_allen[89922]: 167 167
Dec 13 03:44:59 compute-0 podman[89906]: 2025-12-13 03:44:59.265625112 +0000 UTC m=+0.176570215 container died d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:44:59 compute-0 systemd[1]: libpod-d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d.scope: Deactivated successfully.
Dec 13 03:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-a69d95e57afc1510bd66023ccc10b57d5986c1fceb99f46f76b1ca31de1738fd-merged.mount: Deactivated successfully.
Dec 13 03:44:59 compute-0 podman[89906]: 2025-12-13 03:44:59.307096456 +0000 UTC m=+0.218041549 container remove d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:44:59 compute-0 systemd[1]: libpod-conmon-d22f6408261939a946f3ca426a3ad65556865d591f1b8acef2b4be500a12444d.scope: Deactivated successfully.
Dec 13 03:44:59 compute-0 podman[89946]: 2025-12-13 03:44:59.470629744 +0000 UTC m=+0.031344539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:44:59 compute-0 podman[89946]: 2025-12-13 03:44:59.592660434 +0000 UTC m=+0.153375209 container create 2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:44:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Dec 13 03:44:59 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1767187537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:44:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Dec 13 03:44:59 compute-0 great_kepler[89794]: pool 'volumes' created
Dec 13 03:44:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Dec 13 03:44:59 compute-0 systemd[1]: libpod-f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340.scope: Deactivated successfully.
Dec 13 03:44:59 compute-0 podman[89773]: 2025-12-13 03:44:59.728824402 +0000 UTC m=+1.875204028 container died f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340 (image=quay.io/ceph/ceph:v20, name=great_kepler, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:44:59 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[18,19)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:44:59 compute-0 systemd[1]: Started libpod-conmon-2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd.scope.
Dec 13 03:44:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d3d651023f29800d97af0540f72297fdc1840d86c5ae3ad8d98fb10fe0f537/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d3d651023f29800d97af0540f72297fdc1840d86c5ae3ad8d98fb10fe0f537/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d3d651023f29800d97af0540f72297fdc1840d86c5ae3ad8d98fb10fe0f537/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5d3d651023f29800d97af0540f72297fdc1840d86c5ae3ad8d98fb10fe0f537/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:44:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfcb8d396df3ca3bf3b812630047bd5f09decaaf5f17ccfe5fc6a77e77276230-merged.mount: Deactivated successfully.
Dec 13 03:44:59 compute-0 podman[89946]: 2025-12-13 03:44:59.802399516 +0000 UTC m=+0.363114311 container init 2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shamir, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:44:59 compute-0 podman[89946]: 2025-12-13 03:44:59.814462067 +0000 UTC m=+0.375176842 container start 2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:44:59 compute-0 podman[89946]: 2025-12-13 03:44:59.821088158 +0000 UTC m=+0.381803013 container attach 2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:44:59 compute-0 podman[89773]: 2025-12-13 03:44:59.826536438 +0000 UTC m=+1.972916044 container remove f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340 (image=quay.io/ceph/ceph:v20, name=great_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:44:59 compute-0 systemd[1]: libpod-conmon-f056f90cb5c683f1b48f259e009107af23937f064c30bca231bf808d657e1340.scope: Deactivated successfully.
Dec 13 03:44:59 compute-0 sudo[89768]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:00 compute-0 sudo[90007]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztepbqsxypwtkdyxfilkfoujpvnurtie ; /usr/bin/python3'
Dec 13 03:45:00 compute-0 sudo[90007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]: {
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:     "0": [
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:         {
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "devices": [
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "/dev/loop3"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             ],
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_name": "ceph_lv0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_size": "21470642176",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "name": "ceph_lv0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "tags": {
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.crush_device_class": "",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.encrypted": "0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osd_id": "0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.type": "block",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.vdo": "0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.with_tpm": "0"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             },
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "type": "block",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "vg_name": "ceph_vg0"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:         }
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:     ],
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:     "1": [
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:         {
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "devices": [
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "/dev/loop4"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             ],
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_name": "ceph_lv1",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_size": "21470642176",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "name": "ceph_lv1",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "tags": {
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.crush_device_class": "",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.encrypted": "0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osd_id": "1",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.type": "block",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.vdo": "0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.with_tpm": "0"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             },
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "type": "block",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "vg_name": "ceph_vg1"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:         }
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:     ],
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:     "2": [
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:         {
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "devices": [
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "/dev/loop5"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             ],
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_name": "ceph_lv2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_size": "21470642176",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "name": "ceph_lv2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "tags": {
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.crush_device_class": "",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.encrypted": "0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osd_id": "2",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.type": "block",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.vdo": "0",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:                 "ceph.with_tpm": "0"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             },
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "type": "block",
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:             "vg_name": "ceph_vg2"
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:         }
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]:     ]
Dec 13 03:45:00 compute-0 wonderful_shamir[89971]: }
Dec 13 03:45:00 compute-0 systemd[1]: libpod-2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd.scope: Deactivated successfully.
Dec 13 03:45:00 compute-0 conmon[89971]: conmon 2045207572ccb29fddb1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd.scope/container/memory.events
Dec 13 03:45:00 compute-0 podman[89946]: 2025-12-13 03:45:00.130017806 +0000 UTC m=+0.690732581 container died 2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shamir, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:00 compute-0 python3[90009]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:00 compute-0 ceph-mon[75071]: pgmap v47: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Dec 13 03:45:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1767187537' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:00 compute-0 ceph-mon[75071]: osdmap e20: 3 total, 3 up, 3 in
Dec 13 03:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5d3d651023f29800d97af0540f72297fdc1840d86c5ae3ad8d98fb10fe0f537-merged.mount: Deactivated successfully.
Dec 13 03:45:00 compute-0 podman[89946]: 2025-12-13 03:45:00.33179215 +0000 UTC m=+0.892506925 container remove 2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_shamir, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:00 compute-0 systemd[1]: libpod-conmon-2045207572ccb29fddb16f0aea3056ebc631a735196fb68f1893ea5cc1ba60fd.scope: Deactivated successfully.
Dec 13 03:45:00 compute-0 podman[90024]: 2025-12-13 03:45:00.379319071 +0000 UTC m=+0.139722766 container create 51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031 (image=quay.io/ceph/ceph:v20, name=sad_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:45:00 compute-0 sudo[89870]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:00 compute-0 systemd[1]: Started libpod-conmon-51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031.scope.
Dec 13 03:45:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v50: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:00 compute-0 podman[90024]: 2025-12-13 03:45:00.347331286 +0000 UTC m=+0.107734981 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee9054c4837b5b6c5f2a7c4b0f9a9c05a36949d2b0c4095ecb29a191af6946d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ee9054c4837b5b6c5f2a7c4b0f9a9c05a36949d2b0c4095ecb29a191af6946d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:00 compute-0 sudo[90041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:00 compute-0 podman[90024]: 2025-12-13 03:45:00.463985599 +0000 UTC m=+0.224389324 container init 51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031 (image=quay.io/ceph/ceph:v20, name=sad_leavitt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:00 compute-0 sudo[90041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:00 compute-0 sudo[90041]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:00 compute-0 podman[90024]: 2025-12-13 03:45:00.473017177 +0000 UTC m=+0.233420872 container start 51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031 (image=quay.io/ceph/ceph:v20, name=sad_leavitt, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:00 compute-0 podman[90024]: 2025-12-13 03:45:00.477230061 +0000 UTC m=+0.237633756 container attach 51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031 (image=quay.io/ceph/ceph:v20, name=sad_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:45:00 compute-0 sudo[90070]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:45:00 compute-0 sudo[90070]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:00 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 20 pg[3.0( empty local-lis/les=0/0 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Dec 13 03:45:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Dec 13 03:45:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Dec 13 03:45:00 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 21 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=0/0 les/c/f=0/0/0 sis=20) [1] r=0 lpr=20 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:00 compute-0 podman[90125]: 2025-12-13 03:45:00.82821508 +0000 UTC m=+0.050551485 container create f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:00 compute-0 systemd[1]: Started libpod-conmon-f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3.scope.
Dec 13 03:45:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 03:45:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4290283271' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:00 compute-0 podman[90125]: 2025-12-13 03:45:00.806503397 +0000 UTC m=+0.028839862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:00 compute-0 podman[90125]: 2025-12-13 03:45:00.905914078 +0000 UTC m=+0.128250483 container init f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tesla, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:45:00 compute-0 podman[90125]: 2025-12-13 03:45:00.910912065 +0000 UTC m=+0.133248470 container start f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:00 compute-0 podman[90125]: 2025-12-13 03:45:00.913983989 +0000 UTC m=+0.136320414 container attach f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 03:45:00 compute-0 confident_tesla[90142]: 167 167
Dec 13 03:45:00 compute-0 systemd[1]: libpod-f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3.scope: Deactivated successfully.
Dec 13 03:45:00 compute-0 podman[90125]: 2025-12-13 03:45:00.915107969 +0000 UTC m=+0.137444364 container died f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:45:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-e23eaac839d612e112bdd12ebed41fd73af57c824cb72e694533c3ff3c4df150-merged.mount: Deactivated successfully.
Dec 13 03:45:00 compute-0 podman[90125]: 2025-12-13 03:45:00.951342171 +0000 UTC m=+0.173678576 container remove f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_tesla, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:45:00 compute-0 systemd[1]: libpod-conmon-f9185fc6a832a45c746635411fd186f6b1e55444c033957ad87bb92d8f4939b3.scope: Deactivated successfully.
Dec 13 03:45:01 compute-0 podman[90169]: 2025-12-13 03:45:01.094371257 +0000 UTC m=+0.042810933 container create 404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:01 compute-0 systemd[1]: Started libpod-conmon-404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55.scope.
Dec 13 03:45:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed49e96762730802e4ef6a573b91c6801e040c61716ea94ca63cdfaa9a851d9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed49e96762730802e4ef6a573b91c6801e040c61716ea94ca63cdfaa9a851d9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed49e96762730802e4ef6a573b91c6801e040c61716ea94ca63cdfaa9a851d9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed49e96762730802e4ef6a573b91c6801e040c61716ea94ca63cdfaa9a851d9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:01 compute-0 podman[90169]: 2025-12-13 03:45:01.168758904 +0000 UTC m=+0.117198590 container init 404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:01 compute-0 podman[90169]: 2025-12-13 03:45:01.074525964 +0000 UTC m=+0.022965660 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:01 compute-0 podman[90169]: 2025-12-13 03:45:01.177225686 +0000 UTC m=+0.125665362 container start 404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:01 compute-0 podman[90169]: 2025-12-13 03:45:01.181008019 +0000 UTC m=+0.129447715 container attach 404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Dec 13 03:45:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4290283271' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Dec 13 03:45:01 compute-0 sad_leavitt[90045]: pool 'backups' created
Dec 13 03:45:01 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Dec 13 03:45:01 compute-0 ceph-mon[75071]: pgmap v50: 3 pgs: 1 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:01 compute-0 ceph-mon[75071]: osdmap e21: 3 total, 3 up, 3 in
Dec 13 03:45:01 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4290283271' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:01 compute-0 systemd[1]: libpod-51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031.scope: Deactivated successfully.
Dec 13 03:45:01 compute-0 podman[90024]: 2025-12-13 03:45:01.726679728 +0000 UTC m=+1.487083423 container died 51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031 (image=quay.io/ceph/ceph:v20, name=sad_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ee9054c4837b5b6c5f2a7c4b0f9a9c05a36949d2b0c4095ecb29a191af6946d-merged.mount: Deactivated successfully.
Dec 13 03:45:01 compute-0 podman[90024]: 2025-12-13 03:45:01.771852895 +0000 UTC m=+1.532256600 container remove 51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031 (image=quay.io/ceph/ceph:v20, name=sad_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:45:01 compute-0 systemd[1]: libpod-conmon-51e23247bb5c71f02c49721c7e18d2000bb4a3749877eb26e59152c69af88031.scope: Deactivated successfully.
Dec 13 03:45:01 compute-0 sudo[90007]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:01 compute-0 lvm[90278]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:01 compute-0 lvm[90277]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:01 compute-0 lvm[90278]: VG ceph_vg1 finished
Dec 13 03:45:01 compute-0 lvm[90277]: VG ceph_vg0 finished
Dec 13 03:45:01 compute-0 lvm[90286]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:45:01 compute-0 lvm[90286]: VG ceph_vg2 finished
Dec 13 03:45:01 compute-0 sudo[90304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hisctamplixqvsixtiewtjzhoqnabvmt ; /usr/bin/python3'
Dec 13 03:45:01 compute-0 sudo[90304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:02 compute-0 pensive_pike[90186]: {}
Dec 13 03:45:02 compute-0 podman[90169]: 2025-12-13 03:45:02.064305682 +0000 UTC m=+1.012745358 container died 404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:02 compute-0 systemd[1]: libpod-404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55.scope: Deactivated successfully.
Dec 13 03:45:02 compute-0 systemd[1]: libpod-404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55.scope: Consumed 1.402s CPU time.
Dec 13 03:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ed49e96762730802e4ef6a573b91c6801e040c61716ea94ca63cdfaa9a851d9b-merged.mount: Deactivated successfully.
Dec 13 03:45:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 22 pg[4.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:02 compute-0 python3[90307]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:02 compute-0 podman[90169]: 2025-12-13 03:45:02.144910078 +0000 UTC m=+1.093349774 container remove 404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:45:02 compute-0 systemd[1]: libpod-conmon-404940cc4f326a345d03780b228f8bc82dfa7fa08216ef708a4a22eefe30fe55.scope: Deactivated successfully.
Dec 13 03:45:02 compute-0 podman[90321]: 2025-12-13 03:45:02.180971875 +0000 UTC m=+0.047068099 container create d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005 (image=quay.io/ceph/ceph:v20, name=laughing_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:45:02 compute-0 sudo[90070]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:02 compute-0 systemd[1]: Started libpod-conmon-d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005.scope.
Dec 13 03:45:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a133435ac2952c755bcca05b7a0f8dc744d64272e16d88ead3108fd5d8df818/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a133435ac2952c755bcca05b7a0f8dc744d64272e16d88ead3108fd5d8df818/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:02 compute-0 sudo[90338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:45:02 compute-0 sudo[90338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:02 compute-0 podman[90321]: 2025-12-13 03:45:02.161563224 +0000 UTC m=+0.027659468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:02 compute-0 sudo[90338]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:02 compute-0 podman[90321]: 2025-12-13 03:45:02.266513888 +0000 UTC m=+0.132610132 container init d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005 (image=quay.io/ceph/ceph:v20, name=laughing_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:02 compute-0 podman[90321]: 2025-12-13 03:45:02.274971219 +0000 UTC m=+0.141067453 container start d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005 (image=quay.io/ceph/ceph:v20, name=laughing_hellman, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:02 compute-0 podman[90321]: 2025-12-13 03:45:02.279158633 +0000 UTC m=+0.145254887 container attach d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005 (image=quay.io/ceph/ceph:v20, name=laughing_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:45:02 compute-0 sudo[90367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:02 compute-0 sudo[90367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:02 compute-0 sudo[90367]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:02 compute-0 sudo[90392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:45:02 compute-0 sudo[90392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v53: 4 pgs: 2 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e22 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 03:45:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1336128692' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Dec 13 03:45:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4290283271' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:02 compute-0 ceph-mon[75071]: osdmap e22: 3 total, 3 up, 3 in
Dec 13 03:45:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1336128692' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1336128692' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Dec 13 03:45:02 compute-0 laughing_hellman[90343]: pool 'images' created
Dec 13 03:45:02 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Dec 13 03:45:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 23 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:02 compute-0 podman[90321]: 2025-12-13 03:45:02.747774463 +0000 UTC m=+0.613870697 container died d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005 (image=quay.io/ceph/ceph:v20, name=laughing_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:02 compute-0 systemd[1]: libpod-d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005.scope: Deactivated successfully.
Dec 13 03:45:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a133435ac2952c755bcca05b7a0f8dc744d64272e16d88ead3108fd5d8df818-merged.mount: Deactivated successfully.
Dec 13 03:45:02 compute-0 podman[90481]: 2025-12-13 03:45:02.790929284 +0000 UTC m=+0.064346872 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:45:02 compute-0 podman[90321]: 2025-12-13 03:45:02.792999291 +0000 UTC m=+0.659095525 container remove d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005 (image=quay.io/ceph/ceph:v20, name=laughing_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:02 compute-0 systemd[1]: libpod-conmon-d561b99171511eb6ae84025dd74c5efe3d879b2e532adb7e79b34c1bfabb0005.scope: Deactivated successfully.
Dec 13 03:45:02 compute-0 sudo[90304]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:02 compute-0 podman[90481]: 2025-12-13 03:45:02.891468147 +0000 UTC m=+0.164885705 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:45:02 compute-0 sudo[90559]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zismdbeaarosdktslkagebtlybwuzwfg ; /usr/bin/python3'
Dec 13 03:45:02 compute-0 sudo[90559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:03 compute-0 python3[90566]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:03 compute-0 podman[90604]: 2025-12-13 03:45:03.171466153 +0000 UTC m=+0.041529128 container create a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187 (image=quay.io/ceph/ceph:v20, name=festive_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:03 compute-0 systemd[1]: Started libpod-conmon-a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187.scope.
Dec 13 03:45:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8cca10947bef6990c63825b8ec97dbe9b5ea54c285241d02f521a815ec6537/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8cca10947bef6990c63825b8ec97dbe9b5ea54c285241d02f521a815ec6537/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:03 compute-0 podman[90604]: 2025-12-13 03:45:03.24732379 +0000 UTC m=+0.117386775 container init a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187 (image=quay.io/ceph/ceph:v20, name=festive_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:03 compute-0 podman[90604]: 2025-12-13 03:45:03.15202161 +0000 UTC m=+0.022084605 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:03 compute-0 podman[90604]: 2025-12-13 03:45:03.254760273 +0000 UTC m=+0.124823248 container start a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187 (image=quay.io/ceph/ceph:v20, name=festive_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:45:03 compute-0 podman[90604]: 2025-12-13 03:45:03.258541747 +0000 UTC m=+0.128604732 container attach a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187 (image=quay.io/ceph/ceph:v20, name=festive_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 23 pg[5.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:03 compute-0 sudo[90392]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:03 compute-0 sudo[90712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:03 compute-0 sudo[90712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:03 compute-0 sudo[90712]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:03 compute-0 sudo[90737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:45:03 compute-0 sudo[90737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 03:45:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/973832875' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Dec 13 03:45:03 compute-0 ceph-mon[75071]: pgmap v53: 4 pgs: 2 unknown, 1 creating+peering, 1 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1336128692' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:03 compute-0 ceph-mon[75071]: osdmap e23: 3 total, 3 up, 3 in
Dec 13 03:45:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/973832875' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/973832875' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Dec 13 03:45:03 compute-0 festive_borg[90636]: pool 'cephfs.cephfs.meta' created
Dec 13 03:45:03 compute-0 systemd[1]: libpod-a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187.scope: Deactivated successfully.
Dec 13 03:45:03 compute-0 podman[90604]: 2025-12-13 03:45:03.927723567 +0000 UTC m=+0.797786562 container died a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187 (image=quay.io/ceph/ceph:v20, name=festive_borg, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Dec 13 03:45:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 24 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [2] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a8cca10947bef6990c63825b8ec97dbe9b5ea54c285241d02f521a815ec6537-merged.mount: Deactivated successfully.
Dec 13 03:45:04 compute-0 podman[90604]: 2025-12-13 03:45:04.011640985 +0000 UTC m=+0.881703960 container remove a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187 (image=quay.io/ceph/ceph:v20, name=festive_borg, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:04 compute-0 systemd[1]: libpod-conmon-a1f4cfb38aa9660201f261a5425e82b58c01c2942edbc83e164c0e36bd985187.scope: Deactivated successfully.
Dec 13 03:45:04 compute-0 sudo[90559]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:04 compute-0 sudo[90737]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:45:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:45:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:45:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:45:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:04 compute-0 sudo[90837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqumxncacqiryboirrwzfwlnzglsixt ; /usr/bin/python3'
Dec 13 03:45:04 compute-0 sudo[90837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:04 compute-0 sudo[90830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:04 compute-0 sudo[90830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:04 compute-0 sudo[90830]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:04 compute-0 sudo[90860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:45:04 compute-0 sudo[90860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:04 compute-0 python3[90857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:04 compute-0 podman[90885]: 2025-12-13 03:45:04.352018483 +0000 UTC m=+0.044344884 container create 752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd (image=quay.io/ceph/ceph:v20, name=awesome_curran, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:45:04 compute-0 systemd[1]: Started libpod-conmon-752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd.scope.
Dec 13 03:45:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d73aaad5383125169c254c322fc05cec71a7bd9150940b68241a334171c37d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67d73aaad5383125169c254c322fc05cec71a7bd9150940b68241a334171c37d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:04 compute-0 podman[90885]: 2025-12-13 03:45:04.333335442 +0000 UTC m=+0.025661873 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v56: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:04 compute-0 podman[90885]: 2025-12-13 03:45:04.435649242 +0000 UTC m=+0.127975663 container init 752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:45:04 compute-0 podman[90885]: 2025-12-13 03:45:04.44320689 +0000 UTC m=+0.135533291 container start 752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:04 compute-0 podman[90885]: 2025-12-13 03:45:04.446912071 +0000 UTC m=+0.139238482 container attach 752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:04 compute-0 podman[90917]: 2025-12-13 03:45:04.512591649 +0000 UTC m=+0.040276894 container create daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:04 compute-0 systemd[1]: Started libpod-conmon-daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21.scope.
Dec 13 03:45:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:04 compute-0 podman[90917]: 2025-12-13 03:45:04.494698189 +0000 UTC m=+0.022383464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:04 compute-0 podman[90917]: 2025-12-13 03:45:04.591490439 +0000 UTC m=+0.119175754 container init daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:45:04 compute-0 podman[90917]: 2025-12-13 03:45:04.599681124 +0000 UTC m=+0.127366379 container start daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:04 compute-0 admiring_panini[90936]: 167 167
Dec 13 03:45:04 compute-0 systemd[1]: libpod-daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21.scope: Deactivated successfully.
Dec 13 03:45:04 compute-0 podman[90917]: 2025-12-13 03:45:04.60756615 +0000 UTC m=+0.135251425 container attach daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:45:04 compute-0 podman[90917]: 2025-12-13 03:45:04.609013909 +0000 UTC m=+0.136699154 container died daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:45:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-61b7cd71a04d297f7bac0f3e8d0602d0e5aa7087f8814664794e57ad09e9508a-merged.mount: Deactivated successfully.
Dec 13 03:45:04 compute-0 podman[90917]: 2025-12-13 03:45:04.649521798 +0000 UTC m=+0.177207043 container remove daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:04 compute-0 systemd[1]: libpod-conmon-daeb6d7cff598272e08a7f7f826e5371e164b2db2cdb9aa8d5be8a59d7106d21.scope: Deactivated successfully.
Dec 13 03:45:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Dec 13 03:45:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2501406169' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:04 compute-0 podman[90977]: 2025-12-13 03:45:04.79935092 +0000 UTC m=+0.025097588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Dec 13 03:45:05 compute-0 podman[90977]: 2025-12-13 03:45:05.020765881 +0000 UTC m=+0.246512539 container create a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_almeida, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/973832875' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:05 compute-0 ceph-mon[75071]: osdmap e24: 3 total, 3 up, 3 in
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2501406169' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Dec 13 03:45:05 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2501406169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Dec 13 03:45:05 compute-0 awesome_curran[90900]: pool 'cephfs.cephfs.data' created
Dec 13 03:45:05 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Dec 13 03:45:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=0/0 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 25 pg[6.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [0] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:05 compute-0 podman[90885]: 2025-12-13 03:45:05.07332173 +0000 UTC m=+0.765648151 container died 752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd (image=quay.io/ceph/ceph:v20, name=awesome_curran, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:45:05 compute-0 systemd[1]: Started libpod-conmon-a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8.scope.
Dec 13 03:45:05 compute-0 systemd[1]: libpod-752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd.scope: Deactivated successfully.
Dec 13 03:45:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-67d73aaad5383125169c254c322fc05cec71a7bd9150940b68241a334171c37d-merged.mount: Deactivated successfully.
Dec 13 03:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac42fb2d02dc6728cd015fba72e09673eaf99902dccad75790dea70d6b43f31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac42fb2d02dc6728cd015fba72e09673eaf99902dccad75790dea70d6b43f31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac42fb2d02dc6728cd015fba72e09673eaf99902dccad75790dea70d6b43f31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac42fb2d02dc6728cd015fba72e09673eaf99902dccad75790dea70d6b43f31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fac42fb2d02dc6728cd015fba72e09673eaf99902dccad75790dea70d6b43f31/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:05 compute-0 podman[90885]: 2025-12-13 03:45:05.12845344 +0000 UTC m=+0.820779841 container remove 752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd (image=quay.io/ceph/ceph:v20, name=awesome_curran, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:45:05 compute-0 podman[90977]: 2025-12-13 03:45:05.139895393 +0000 UTC m=+0.365642021 container init a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_almeida, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:45:05 compute-0 podman[90977]: 2025-12-13 03:45:05.153162256 +0000 UTC m=+0.378908884 container start a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:45:05 compute-0 podman[90977]: 2025-12-13 03:45:05.158131212 +0000 UTC m=+0.383877850 container attach a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_almeida, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:05 compute-0 sudo[90837]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:05 compute-0 systemd[1]: libpod-conmon-752ee085dc56f9db519f543513044cabc34a6dddd555cd6791901381feaa25bd.scope: Deactivated successfully.
Dec 13 03:45:05 compute-0 sudo[91039]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aycinrjcousckohahgddvwjvgkvbywks ; /usr/bin/python3'
Dec 13 03:45:05 compute-0 sudo[91039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:05 compute-0 python3[91042]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:05 compute-0 podman[91050]: 2025-12-13 03:45:05.582338036 +0000 UTC m=+0.050488583 container create 22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88 (image=quay.io/ceph/ceph:v20, name=flamboyant_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:45:05 compute-0 systemd[1]: Started libpod-conmon-22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88.scope.
Dec 13 03:45:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a560b7824d7721e3a3b7084ea5decb63d5954c644665a0efa57c19d5c227dfc0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a560b7824d7721e3a3b7084ea5decb63d5954c644665a0efa57c19d5c227dfc0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:05 compute-0 podman[91050]: 2025-12-13 03:45:05.558412951 +0000 UTC m=+0.026563558 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:05 compute-0 podman[91050]: 2025-12-13 03:45:05.661051211 +0000 UTC m=+0.129201748 container init 22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88 (image=quay.io/ceph/ceph:v20, name=flamboyant_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:45:05 compute-0 podman[91050]: 2025-12-13 03:45:05.669746079 +0000 UTC m=+0.137896626 container start 22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88 (image=quay.io/ceph/ceph:v20, name=flamboyant_booth, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:05 compute-0 podman[91050]: 2025-12-13 03:45:05.6745654 +0000 UTC m=+0.142715957 container attach 22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88 (image=quay.io/ceph/ceph:v20, name=flamboyant_booth, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:45:05 compute-0 intelligent_almeida[91003]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:45:05 compute-0 intelligent_almeida[91003]: --> All data devices are unavailable
Dec 13 03:45:05 compute-0 systemd[1]: libpod-a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8.scope: Deactivated successfully.
Dec 13 03:45:05 compute-0 podman[91074]: 2025-12-13 03:45:05.764162323 +0000 UTC m=+0.032831789 container died a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_almeida, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fac42fb2d02dc6728cd015fba72e09673eaf99902dccad75790dea70d6b43f31-merged.mount: Deactivated successfully.
Dec 13 03:45:05 compute-0 podman[91074]: 2025-12-13 03:45:05.813284059 +0000 UTC m=+0.081953505 container remove a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_almeida, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:45:05 compute-0 systemd[1]: libpod-conmon-a4aea1bb523c032f3e7a26c86224b2b0c9650e0f5d2a22c3bb2b1895473cd7a8.scope: Deactivated successfully.
Dec 13 03:45:05 compute-0 sudo[90860]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:05 compute-0 sudo[91108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:05 compute-0 sudo[91108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:05 compute-0 sudo[91108]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:05 compute-0 sudo[91133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:45:05 compute-0 sudo[91133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Dec 13 03:45:06 compute-0 podman[91170]: 2025-12-13 03:45:06.267572516 +0000 UTC m=+0.019459104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 5 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Dec 13 03:45:06 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133078744' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 03:45:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Dec 13 03:45:06 compute-0 podman[91170]: 2025-12-13 03:45:06.487569889 +0000 UTC m=+0.239456457 container create a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:45:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Dec 13 03:45:06 compute-0 ceph-mon[75071]: pgmap v56: 6 pgs: 5 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2501406169' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Dec 13 03:45:06 compute-0 ceph-mon[75071]: osdmap e25: 3 total, 3 up, 3 in
Dec 13 03:45:06 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 26 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=0/0 les/c/f=0/0/0 sis=25) [1] r=0 lpr=25 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:06 compute-0 systemd[1]: Started libpod-conmon-a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b.scope.
Dec 13 03:45:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:06 compute-0 podman[91170]: 2025-12-13 03:45:06.657600243 +0000 UTC m=+0.409486811 container init a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:45:06 compute-0 podman[91170]: 2025-12-13 03:45:06.666150237 +0000 UTC m=+0.418036805 container start a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:06 compute-0 podman[91170]: 2025-12-13 03:45:06.669798228 +0000 UTC m=+0.421684816 container attach a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:45:06 compute-0 friendly_vaughan[91185]: 167 167
Dec 13 03:45:06 compute-0 systemd[1]: libpod-a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b.scope: Deactivated successfully.
Dec 13 03:45:06 compute-0 podman[91170]: 2025-12-13 03:45:06.672616365 +0000 UTC m=+0.424503003 container died a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-9809976f5bd0a67aab638154566f33cb0da1807968d1a3c625a21763fb114171-merged.mount: Deactivated successfully.
Dec 13 03:45:06 compute-0 podman[91170]: 2025-12-13 03:45:06.713063032 +0000 UTC m=+0.464949600 container remove a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_vaughan, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:06 compute-0 systemd[1]: libpod-conmon-a41b97300efec9b8e7cf91d09e54b21c1c91bffe955eff7055a1e2a1dd10012b.scope: Deactivated successfully.
Dec 13 03:45:06 compute-0 podman[91208]: 2025-12-13 03:45:06.897592624 +0000 UTC m=+0.046480304 container create ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 03:45:06 compute-0 systemd[1]: Started libpod-conmon-ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a.scope.
Dec 13 03:45:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:06 compute-0 podman[91208]: 2025-12-13 03:45:06.879853269 +0000 UTC m=+0.028740969 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7d8af0998cd17a36d223750cbee4cdd4613f3b1492464d5b2c697ea094e61d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7d8af0998cd17a36d223750cbee4cdd4613f3b1492464d5b2c697ea094e61d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7d8af0998cd17a36d223750cbee4cdd4613f3b1492464d5b2c697ea094e61d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb7d8af0998cd17a36d223750cbee4cdd4613f3b1492464d5b2c697ea094e61d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:07 compute-0 podman[91208]: 2025-12-13 03:45:07.105939097 +0000 UTC m=+0.254826827 container init ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hypatia, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:07 compute-0 podman[91208]: 2025-12-13 03:45:07.114118481 +0000 UTC m=+0.263006201 container start ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:45:07 compute-0 podman[91208]: 2025-12-13 03:45:07.118555713 +0000 UTC m=+0.267443473 container attach ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]: {
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:     "0": [
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:         {
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "devices": [
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "/dev/loop3"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             ],
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_name": "ceph_lv0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_size": "21470642176",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "name": "ceph_lv0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "tags": {
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.crush_device_class": "",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.encrypted": "0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osd_id": "0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.type": "block",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.vdo": "0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.with_tpm": "0"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             },
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "type": "block",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "vg_name": "ceph_vg0"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:         }
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:     ],
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:     "1": [
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:         {
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "devices": [
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "/dev/loop4"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             ],
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_name": "ceph_lv1",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_size": "21470642176",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "name": "ceph_lv1",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "tags": {
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.crush_device_class": "",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.encrypted": "0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osd_id": "1",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.type": "block",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.vdo": "0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.with_tpm": "0"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             },
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "type": "block",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "vg_name": "ceph_vg1"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:         }
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:     ],
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:     "2": [
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:         {
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "devices": [
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "/dev/loop5"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             ],
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_name": "ceph_lv2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_size": "21470642176",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "name": "ceph_lv2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "tags": {
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.crush_device_class": "",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.encrypted": "0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osd_id": "2",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.type": "block",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.vdo": "0",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:                 "ceph.with_tpm": "0"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             },
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "type": "block",
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:             "vg_name": "ceph_vg2"
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:         }
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]:     ]
Dec 13 03:45:07 compute-0 blissful_hypatia[91225]: }
Dec 13 03:45:07 compute-0 systemd[1]: libpod-ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a.scope: Deactivated successfully.
Dec 13 03:45:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Dec 13 03:45:07 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2133078744' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 03:45:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Dec 13 03:45:07 compute-0 flamboyant_booth[91067]: enabled application 'rbd' on pool 'vms'
Dec 13 03:45:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Dec 13 03:45:07 compute-0 podman[91234]: 2025-12-13 03:45:07.501551249 +0000 UTC m=+0.031069602 container died ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 03:45:07 compute-0 ceph-mon[75071]: pgmap v58: 7 pgs: 5 active+clean, 2 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2133078744' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Dec 13 03:45:07 compute-0 ceph-mon[75071]: osdmap e26: 3 total, 3 up, 3 in
Dec 13 03:45:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2133078744' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Dec 13 03:45:07 compute-0 ceph-mon[75071]: osdmap e27: 3 total, 3 up, 3 in
Dec 13 03:45:07 compute-0 systemd[1]: libpod-22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88.scope: Deactivated successfully.
Dec 13 03:45:07 compute-0 podman[91050]: 2025-12-13 03:45:07.515707326 +0000 UTC m=+1.983857873 container died 22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88 (image=quay.io/ceph/ceph:v20, name=flamboyant_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb7d8af0998cd17a36d223750cbee4cdd4613f3b1492464d5b2c697ea094e61d-merged.mount: Deactivated successfully.
Dec 13 03:45:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a560b7824d7721e3a3b7084ea5decb63d5954c644665a0efa57c19d5c227dfc0-merged.mount: Deactivated successfully.
Dec 13 03:45:07 compute-0 podman[91234]: 2025-12-13 03:45:07.550210101 +0000 UTC m=+0.079728434 container remove ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_hypatia, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:45:07 compute-0 systemd[1]: libpod-conmon-ac4e75c4a92c5c5d3b0bde3fc05e4324c37c1803bb8475eae46b40ce35fff55a.scope: Deactivated successfully.
Dec 13 03:45:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:07 compute-0 podman[91050]: 2025-12-13 03:45:07.565074927 +0000 UTC m=+2.033225474 container remove 22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88 (image=quay.io/ceph/ceph:v20, name=flamboyant_booth, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:45:07 compute-0 systemd[1]: libpod-conmon-22b1369a57828d8d58961b5f9b22ae3a76e431bfa47a058213cd8a014f2a4f88.scope: Deactivated successfully.
Dec 13 03:45:07 compute-0 sudo[91039]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:07 compute-0 sudo[91133]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:07 compute-0 sudo[91264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:07 compute-0 sudo[91264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:07 compute-0 sudo[91264]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:07 compute-0 sudo[91318]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slbbckpobgjuhazszivjfwmmiatbzrys ; /usr/bin/python3'
Dec 13 03:45:07 compute-0 sudo[91318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:07 compute-0 sudo[91307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:45:07 compute-0 sudo[91307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:07 compute-0 python3[91337]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:08 compute-0 podman[91340]: 2025-12-13 03:45:07.905775785 +0000 UTC m=+0.030214018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:08 compute-0 podman[91340]: 2025-12-13 03:45:08.276592867 +0000 UTC m=+0.401031100 container create 098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22 (image=quay.io/ceph/ceph:v20, name=eloquent_wilbur, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:45:08 compute-0 systemd[1]: Started libpod-conmon-098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22.scope.
Dec 13 03:45:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f970024ecbfb4c9147dc0a31aa726f0237457b45119b566ba05668766c7214d8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f970024ecbfb4c9147dc0a31aa726f0237457b45119b566ba05668766c7214d8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:08 compute-0 podman[91340]: 2025-12-13 03:45:08.467202535 +0000 UTC m=+0.591640788 container init 098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22 (image=quay.io/ceph/ceph:v20, name=eloquent_wilbur, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:08 compute-0 podman[91340]: 2025-12-13 03:45:08.475631146 +0000 UTC m=+0.600069379 container start 098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22 (image=quay.io/ceph/ceph:v20, name=eloquent_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:45:08 compute-0 podman[91340]: 2025-12-13 03:45:08.479211865 +0000 UTC m=+0.603650118 container attach 098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22 (image=quay.io/ceph/ceph:v20, name=eloquent_wilbur, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:45:08 compute-0 podman[91372]: 2025-12-13 03:45:08.596762653 +0000 UTC m=+0.066987065 container create b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:45:08 compute-0 systemd[1]: Started libpod-conmon-b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414.scope.
Dec 13 03:45:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:08 compute-0 podman[91372]: 2025-12-13 03:45:08.57876687 +0000 UTC m=+0.048991282 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:08 compute-0 podman[91372]: 2025-12-13 03:45:08.674000427 +0000 UTC m=+0.144224869 container init b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:08 compute-0 podman[91372]: 2025-12-13 03:45:08.679403215 +0000 UTC m=+0.149627627 container start b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:45:08 compute-0 podman[91372]: 2025-12-13 03:45:08.68289674 +0000 UTC m=+0.153121172 container attach b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:45:08 compute-0 inspiring_wozniak[91407]: 167 167
Dec 13 03:45:08 compute-0 systemd[1]: libpod-b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414.scope: Deactivated successfully.
Dec 13 03:45:08 compute-0 podman[91372]: 2025-12-13 03:45:08.685281686 +0000 UTC m=+0.155506098 container died b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a518c895c76969c8dcd4bd34be7a21336e9a70bd4df4ae903f0c34f91e1c6ba-merged.mount: Deactivated successfully.
Dec 13 03:45:08 compute-0 podman[91372]: 2025-12-13 03:45:08.720261734 +0000 UTC m=+0.190486146 container remove b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:08 compute-0 systemd[1]: libpod-conmon-b26f4ed2351ac16aa107bd2b05c5f3e5d4572ac279b307ba3cdf4cac0516f414.scope: Deactivated successfully.
Dec 13 03:45:08 compute-0 podman[91429]: 2025-12-13 03:45:08.877564131 +0000 UTC m=+0.054698089 container create 5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_meitner, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:45:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Dec 13 03:45:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3754975717' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 03:45:08 compute-0 systemd[1]: Started libpod-conmon-5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953.scope.
Dec 13 03:45:08 compute-0 podman[91429]: 2025-12-13 03:45:08.852005001 +0000 UTC m=+0.029139069 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f8f5d3dc5735c3f1399f29568995895552094652704e3da83384701d465aa1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f8f5d3dc5735c3f1399f29568995895552094652704e3da83384701d465aa1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f8f5d3dc5735c3f1399f29568995895552094652704e3da83384701d465aa1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f8f5d3dc5735c3f1399f29568995895552094652704e3da83384701d465aa1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:08 compute-0 podman[91429]: 2025-12-13 03:45:08.975869832 +0000 UTC m=+0.153003830 container init 5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:45:08 compute-0 podman[91429]: 2025-12-13 03:45:08.982326448 +0000 UTC m=+0.159460416 container start 5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_meitner, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:45:08 compute-0 podman[91429]: 2025-12-13 03:45:08.986055721 +0000 UTC m=+0.163189659 container attach 5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_meitner, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:45:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Dec 13 03:45:09 compute-0 ceph-mon[75071]: pgmap v61: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3754975717' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Dec 13 03:45:09 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3754975717' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 03:45:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Dec 13 03:45:09 compute-0 eloquent_wilbur[91368]: enabled application 'rbd' on pool 'volumes'
Dec 13 03:45:09 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Dec 13 03:45:09 compute-0 systemd[1]: libpod-098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22.scope: Deactivated successfully.
Dec 13 03:45:09 compute-0 podman[91340]: 2025-12-13 03:45:09.561772952 +0000 UTC m=+1.686211185 container died 098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22 (image=quay.io/ceph/ceph:v20, name=eloquent_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-f970024ecbfb4c9147dc0a31aa726f0237457b45119b566ba05668766c7214d8-merged.mount: Deactivated successfully.
Dec 13 03:45:09 compute-0 podman[91340]: 2025-12-13 03:45:09.604622835 +0000 UTC m=+1.729061068 container remove 098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22 (image=quay.io/ceph/ceph:v20, name=eloquent_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:09 compute-0 systemd[1]: libpod-conmon-098ad9f7c29a0387dddee1b51ab4af235de5a81f7cd30802f20b1a41eefadf22.scope: Deactivated successfully.
Dec 13 03:45:09 compute-0 sudo[91318]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:09 compute-0 lvm[91545]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:09 compute-0 lvm[91547]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:09 compute-0 lvm[91545]: VG ceph_vg1 finished
Dec 13 03:45:09 compute-0 lvm[91547]: VG ceph_vg0 finished
Dec 13 03:45:09 compute-0 sudo[91563]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oatxuustfrgehtnxfsifafifdyiisfgc ; /usr/bin/python3'
Dec 13 03:45:09 compute-0 sudo[91563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:09 compute-0 lvm[91565]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:45:09 compute-0 lvm[91565]: VG ceph_vg2 finished
Dec 13 03:45:09 compute-0 vibrant_meitner[91448]: {}
Dec 13 03:45:09 compute-0 podman[91429]: 2025-12-13 03:45:09.909659566 +0000 UTC m=+1.086793524 container died 5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_meitner, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:09 compute-0 systemd[1]: libpod-5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953.scope: Deactivated successfully.
Dec 13 03:45:09 compute-0 systemd[1]: libpod-5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953.scope: Consumed 1.461s CPU time.
Dec 13 03:45:09 compute-0 python3[91567]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f8f5d3dc5735c3f1399f29568995895552094652704e3da83384701d465aa1-merged.mount: Deactivated successfully.
Dec 13 03:45:09 compute-0 podman[91429]: 2025-12-13 03:45:09.966712339 +0000 UTC m=+1.143846287 container remove 5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:09 compute-0 systemd[1]: libpod-conmon-5c473df9b23b97e8c0a6c89332ef491bc25a005ef0d75bb6a29b61148dad7953.scope: Deactivated successfully.
Dec 13 03:45:09 compute-0 podman[91577]: 2025-12-13 03:45:09.997717987 +0000 UTC m=+0.050505583 container create 5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3 (image=quay.io/ceph/ceph:v20, name=charming_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:45:10 compute-0 sudo[91307]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:10 compute-0 systemd[1]: Started libpod-conmon-5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3.scope.
Dec 13 03:45:10 compute-0 podman[91577]: 2025-12-13 03:45:09.974186563 +0000 UTC m=+0.026974179 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba1c68b4d1cf6c481c367407a621620aedba6e4a2987250f78149ebe69e157e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ba1c68b4d1cf6c481c367407a621620aedba6e4a2987250f78149ebe69e157e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:10 compute-0 sudo[91598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:45:10 compute-0 sudo[91598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:10 compute-0 sudo[91598]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:10 compute-0 podman[91577]: 2025-12-13 03:45:10.113067465 +0000 UTC m=+0.165855081 container init 5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3 (image=quay.io/ceph/ceph:v20, name=charming_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:45:10 compute-0 podman[91577]: 2025-12-13 03:45:10.122237646 +0000 UTC m=+0.175025242 container start 5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3 (image=quay.io/ceph/ceph:v20, name=charming_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:45:10 compute-0 podman[91577]: 2025-12-13 03:45:10.130113262 +0000 UTC m=+0.182900858 container attach 5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3 (image=quay.io/ceph/ceph:v20, name=charming_bartik, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:45:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Dec 13 03:45:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/225292939' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 03:45:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3754975717' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Dec 13 03:45:11 compute-0 ceph-mon[75071]: osdmap e28: 3 total, 3 up, 3 in
Dec 13 03:45:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Dec 13 03:45:11 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/225292939' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 03:45:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Dec 13 03:45:11 compute-0 charming_bartik[91599]: enabled application 'rbd' on pool 'backups'
Dec 13 03:45:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Dec 13 03:45:11 compute-0 systemd[1]: libpod-5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3.scope: Deactivated successfully.
Dec 13 03:45:11 compute-0 podman[91577]: 2025-12-13 03:45:11.128833804 +0000 UTC m=+1.181621390 container died 5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3 (image=quay.io/ceph/ceph:v20, name=charming_bartik, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:45:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ba1c68b4d1cf6c481c367407a621620aedba6e4a2987250f78149ebe69e157e-merged.mount: Deactivated successfully.
Dec 13 03:45:11 compute-0 podman[91577]: 2025-12-13 03:45:11.203470737 +0000 UTC m=+1.256258323 container remove 5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3 (image=quay.io/ceph/ceph:v20, name=charming_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:11 compute-0 systemd[1]: libpod-conmon-5889f5995b5b95498fc25142361d20178e310e5dca0b1d1f0d7f16df5fe7cdb3.scope: Deactivated successfully.
Dec 13 03:45:11 compute-0 sudo[91563]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:11 compute-0 sudo[91685]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdyatayisjosazgwzcjjzlxbcmbsybyc ; /usr/bin/python3'
Dec 13 03:45:11 compute-0 sudo[91685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:11 compute-0 python3[91687]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:11 compute-0 podman[91688]: 2025-12-13 03:45:11.612272019 +0000 UTC m=+0.032850450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:11 compute-0 podman[91688]: 2025-12-13 03:45:11.930300207 +0000 UTC m=+0.350878648 container create d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134 (image=quay.io/ceph/ceph:v20, name=adoring_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:45:12 compute-0 systemd[1]: Started libpod-conmon-d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134.scope.
Dec 13 03:45:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aae96958f46e65004865dd90ba80485446e4c01bc7a5d78a7d6d75235925c6e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7aae96958f46e65004865dd90ba80485446e4c01bc7a5d78a7d6d75235925c6e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:12 compute-0 ceph-mon[75071]: pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/225292939' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Dec 13 03:45:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/225292939' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Dec 13 03:45:12 compute-0 ceph-mon[75071]: osdmap e29: 3 total, 3 up, 3 in
Dec 13 03:45:12 compute-0 podman[91688]: 2025-12-13 03:45:12.130399354 +0000 UTC m=+0.550977845 container init d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134 (image=quay.io/ceph/ceph:v20, name=adoring_easley, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:12 compute-0 podman[91688]: 2025-12-13 03:45:12.138079515 +0000 UTC m=+0.558657936 container start d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134 (image=quay.io/ceph/ceph:v20, name=adoring_easley, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:45:12 compute-0 podman[91688]: 2025-12-13 03:45:12.14156684 +0000 UTC m=+0.562145251 container attach d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134 (image=quay.io/ceph/ceph:v20, name=adoring_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:45:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Dec 13 03:45:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4035667787' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 03:45:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Dec 13 03:45:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4035667787' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Dec 13 03:45:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4035667787' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 03:45:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Dec 13 03:45:13 compute-0 adoring_easley[91702]: enabled application 'rbd' on pool 'images'
Dec 13 03:45:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Dec 13 03:45:13 compute-0 systemd[1]: libpod-d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134.scope: Deactivated successfully.
Dec 13 03:45:13 compute-0 podman[91688]: 2025-12-13 03:45:13.516453541 +0000 UTC m=+1.937031972 container died d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134 (image=quay.io/ceph/ceph:v20, name=adoring_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:45:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-7aae96958f46e65004865dd90ba80485446e4c01bc7a5d78a7d6d75235925c6e-merged.mount: Deactivated successfully.
Dec 13 03:45:13 compute-0 podman[91688]: 2025-12-13 03:45:13.861062235 +0000 UTC m=+2.281640646 container remove d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134 (image=quay.io/ceph/ceph:v20, name=adoring_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:13 compute-0 sudo[91685]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:13 compute-0 systemd[1]: libpod-conmon-d72ec075498fb051ea72afa0816e006bf38212daa1f27ec42a6c7992654c6134.scope: Deactivated successfully.
Dec 13 03:45:13 compute-0 sudo[91764]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-futqhkfnkkgoxwfllcedghhyqgsybgec ; /usr/bin/python3'
Dec 13 03:45:14 compute-0 sudo[91764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:14 compute-0 python3[91766]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:14 compute-0 podman[91767]: 2025-12-13 03:45:14.194480043 +0000 UTC m=+0.042569066 container create d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f (image=quay.io/ceph/ceph:v20, name=suspicious_benz, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:14 compute-0 systemd[1]: Started libpod-conmon-d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f.scope.
Dec 13 03:45:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ac63ac5ed393cf13c4594c282e1d513c6f52a327072e19e12387b2b303ad2f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0ac63ac5ed393cf13c4594c282e1d513c6f52a327072e19e12387b2b303ad2f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:14 compute-0 podman[91767]: 2025-12-13 03:45:14.268023697 +0000 UTC m=+0.116112740 container init d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f (image=quay.io/ceph/ceph:v20, name=suspicious_benz, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:45:14 compute-0 podman[91767]: 2025-12-13 03:45:14.174586689 +0000 UTC m=+0.022675742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:14 compute-0 podman[91767]: 2025-12-13 03:45:14.27764524 +0000 UTC m=+0.125734263 container start d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f (image=quay.io/ceph/ceph:v20, name=suspicious_benz, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:14 compute-0 podman[91767]: 2025-12-13 03:45:14.280874499 +0000 UTC m=+0.128963582 container attach d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f (image=quay.io/ceph/ceph:v20, name=suspicious_benz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:45:14 compute-0 ceph-mon[75071]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4035667787' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Dec 13 03:45:14 compute-0 ceph-mon[75071]: osdmap e30: 3 total, 3 up, 3 in
Dec 13 03:45:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Dec 13 03:45:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3942606667' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 03:45:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Dec 13 03:45:15 compute-0 ceph-mon[75071]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3942606667' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Dec 13 03:45:15 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3942606667' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 03:45:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Dec 13 03:45:15 compute-0 suspicious_benz[91782]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Dec 13 03:45:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Dec 13 03:45:15 compute-0 systemd[1]: libpod-d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f.scope: Deactivated successfully.
Dec 13 03:45:15 compute-0 podman[91767]: 2025-12-13 03:45:15.698699315 +0000 UTC m=+1.546788338 container died d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f (image=quay.io/ceph/ceph:v20, name=suspicious_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0ac63ac5ed393cf13c4594c282e1d513c6f52a327072e19e12387b2b303ad2f-merged.mount: Deactivated successfully.
Dec 13 03:45:16 compute-0 podman[91767]: 2025-12-13 03:45:16.107515107 +0000 UTC m=+1.955604130 container remove d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f (image=quay.io/ceph/ceph:v20, name=suspicious_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:16 compute-0 sudo[91764]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:16 compute-0 systemd[1]: libpod-conmon-d4b2bd0ceb0108c2ad2b3562929404d889b5242d030090d7be91ea70ad81b88f.scope: Deactivated successfully.
Dec 13 03:45:16 compute-0 sudo[91842]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgfofcvepadjbhiwjsclbnxwlfbngmvj ; /usr/bin/python3'
Dec 13 03:45:16 compute-0 sudo[91842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:16 compute-0 python3[91844]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:16 compute-0 podman[91845]: 2025-12-13 03:45:16.471984595 +0000 UTC m=+0.051532092 container create d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9 (image=quay.io/ceph/ceph:v20, name=festive_lewin, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:16 compute-0 systemd[1]: Started libpod-conmon-d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9.scope.
Dec 13 03:45:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97039cad95fe32325435aba97f47fbf64a3353ec65bd86ad6683044ec114305/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a97039cad95fe32325435aba97f47fbf64a3353ec65bd86ad6683044ec114305/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:16 compute-0 podman[91845]: 2025-12-13 03:45:16.443572728 +0000 UTC m=+0.023120255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:16 compute-0 podman[91845]: 2025-12-13 03:45:16.546938518 +0000 UTC m=+0.126486015 container init d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9 (image=quay.io/ceph/ceph:v20, name=festive_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:16 compute-0 podman[91845]: 2025-12-13 03:45:16.552743686 +0000 UTC m=+0.132291183 container start d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9 (image=quay.io/ceph/ceph:v20, name=festive_lewin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:45:16 compute-0 podman[91845]: 2025-12-13 03:45:16.556123249 +0000 UTC m=+0.135670746 container attach d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9 (image=quay.io/ceph/ceph:v20, name=festive_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:45:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3942606667' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Dec 13 03:45:16 compute-0 ceph-mon[75071]: osdmap e31: 3 total, 3 up, 3 in
Dec 13 03:45:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Dec 13 03:45:16 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3867067651' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 03:45:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Dec 13 03:45:17 compute-0 ceph-mon[75071]: pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:17 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3867067651' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Dec 13 03:45:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3867067651' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 03:45:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Dec 13 03:45:17 compute-0 festive_lewin[91861]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Dec 13 03:45:17 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Dec 13 03:45:17 compute-0 systemd[1]: libpod-d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9.scope: Deactivated successfully.
Dec 13 03:45:17 compute-0 podman[91845]: 2025-12-13 03:45:17.727536719 +0000 UTC m=+1.307084236 container died d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9 (image=quay.io/ceph/ceph:v20, name=festive_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-a97039cad95fe32325435aba97f47fbf64a3353ec65bd86ad6683044ec114305-merged.mount: Deactivated successfully.
Dec 13 03:45:17 compute-0 podman[91845]: 2025-12-13 03:45:17.768985904 +0000 UTC m=+1.348533401 container remove d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9 (image=quay.io/ceph/ceph:v20, name=festive_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:17 compute-0 systemd[1]: libpod-conmon-d91a86914884138d758c37f69e9381834af46e0fec7b65c9c717a09f531739b9.scope: Deactivated successfully.
Dec 13 03:45:17 compute-0 sudo[91842]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3867067651' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Dec 13 03:45:18 compute-0 ceph-mon[75071]: osdmap e32: 3 total, 3 up, 3 in
Dec 13 03:45:18 compute-0 python3[91973]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:45:19 compute-0 python3[92044]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597518.5023336-36483-183612106219980/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:45:19 compute-0 sudo[92144]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aavoenrgqmcyxkiwququlgobzawmdtvy ; /usr/bin/python3'
Dec 13 03:45:19 compute-0 sudo[92144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:19 compute-0 ceph-mon[75071]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:19 compute-0 python3[92146]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:45:19 compute-0 sudo[92144]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:19 compute-0 sudo[92219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqqgcbnfbivxexoyfwejfehkuloepxow ; /usr/bin/python3'
Dec 13 03:45:19 compute-0 sudo[92219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:20 compute-0 python3[92221]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597519.454104-36497-3809380325417/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=5032aff8fae01c0a6a150fbfbd2f116bc50c61b7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:45:20 compute-0 sudo[92219]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:20 compute-0 sudo[92269]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osaygsexggkoykujfycftosrwnvydiwd ; /usr/bin/python3'
Dec 13 03:45:20 compute-0 sudo[92269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:20 compute-0 python3[92271]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:20 compute-0 podman[92272]: 2025-12-13 03:45:20.591621989 +0000 UTC m=+0.098886038 container create f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb (image=quay.io/ceph/ceph:v20, name=nice_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:20 compute-0 podman[92272]: 2025-12-13 03:45:20.512285157 +0000 UTC m=+0.019549226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:20 compute-0 systemd[1]: Started libpod-conmon-f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb.scope.
Dec 13 03:45:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e4f4b2e556015c198cc9b11e49be8ea1744a583b4a963c9b5e981cd2fb5b4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e4f4b2e556015c198cc9b11e49be8ea1744a583b4a963c9b5e981cd2fb5b4f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4e4f4b2e556015c198cc9b11e49be8ea1744a583b4a963c9b5e981cd2fb5b4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:20 compute-0 podman[92272]: 2025-12-13 03:45:20.674277042 +0000 UTC m=+0.181541111 container init f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb (image=quay.io/ceph/ceph:v20, name=nice_buck, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:45:20 compute-0 podman[92272]: 2025-12-13 03:45:20.680986826 +0000 UTC m=+0.188250875 container start f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb (image=quay.io/ceph/ceph:v20, name=nice_buck, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:45:20 compute-0 podman[92272]: 2025-12-13 03:45:20.684261255 +0000 UTC m=+0.191525324 container attach f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb (image=quay.io/ceph/ceph:v20, name=nice_buck, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:45:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Dec 13 03:45:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3915310915' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 03:45:21 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3915310915' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 03:45:21 compute-0 nice_buck[92287]: 
Dec 13 03:45:21 compute-0 nice_buck[92287]: [global]
Dec 13 03:45:21 compute-0 nice_buck[92287]:         fsid = 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:45:21 compute-0 nice_buck[92287]:         mon_host = 192.168.122.100
Dec 13 03:45:21 compute-0 nice_buck[92287]:         rgw_keystone_api_version = 3
Dec 13 03:45:21 compute-0 systemd[1]: libpod-f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb.scope: Deactivated successfully.
Dec 13 03:45:21 compute-0 podman[92272]: 2025-12-13 03:45:21.099914645 +0000 UTC m=+0.607178694 container died f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb (image=quay.io/ceph/ceph:v20, name=nice_buck, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4e4f4b2e556015c198cc9b11e49be8ea1744a583b4a963c9b5e981cd2fb5b4f-merged.mount: Deactivated successfully.
Dec 13 03:45:21 compute-0 sudo[92312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:21 compute-0 sudo[92312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:21 compute-0 sudo[92312]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:21 compute-0 podman[92272]: 2025-12-13 03:45:21.15417091 +0000 UTC m=+0.661434959 container remove f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb (image=quay.io/ceph/ceph:v20, name=nice_buck, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:45:21 compute-0 systemd[1]: libpod-conmon-f74bf4f07c31c90cc189ae6e5961ad12e27b18f1a1ee57e8d91916059fa430eb.scope: Deactivated successfully.
Dec 13 03:45:21 compute-0 sudo[92269]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:21 compute-0 sudo[92349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:45:21 compute-0 sudo[92349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:21 compute-0 sudo[92397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyqmuumatymvkynljtcqdbhskcwjrbzm ; /usr/bin/python3'
Dec 13 03:45:21 compute-0 sudo[92397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:21 compute-0 python3[92399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:21 compute-0 podman[92426]: 2025-12-13 03:45:21.509273502 +0000 UTC m=+0.040841549 container create bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc (image=quay.io/ceph/ceph:v20, name=affectionate_banzai, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:21 compute-0 systemd[1]: Started libpod-conmon-bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc.scope.
Dec 13 03:45:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cdf6a97ec4df1cccd14dec44a595f3a13f2846aa8e8633bff97f235ead9d7b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cdf6a97ec4df1cccd14dec44a595f3a13f2846aa8e8633bff97f235ead9d7b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66cdf6a97ec4df1cccd14dec44a595f3a13f2846aa8e8633bff97f235ead9d7b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:21 compute-0 podman[92426]: 2025-12-13 03:45:21.578154028 +0000 UTC m=+0.109722095 container init bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc (image=quay.io/ceph/ceph:v20, name=affectionate_banzai, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:45:21 compute-0 podman[92426]: 2025-12-13 03:45:21.490183049 +0000 UTC m=+0.021751116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:21 compute-0 podman[92426]: 2025-12-13 03:45:21.587075363 +0000 UTC m=+0.118643410 container start bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc (image=quay.io/ceph/ceph:v20, name=affectionate_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:21 compute-0 podman[92426]: 2025-12-13 03:45:21.598864765 +0000 UTC m=+0.130432812 container attach bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc (image=quay.io/ceph/ceph:v20, name=affectionate_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:45:21 compute-0 podman[92458]: 2025-12-13 03:45:21.615278195 +0000 UTC m=+0.067798777 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:45:21 compute-0 podman[92458]: 2025-12-13 03:45:21.708008333 +0000 UTC m=+0.160528885 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:45:21 compute-0 ceph-mon[75071]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3915310915' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Dec 13 03:45:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3915310915' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4185003505' entity='client.admin' 
Dec 13 03:45:22 compute-0 affectionate_banzai[92460]: set ssl_option
Dec 13 03:45:22 compute-0 systemd[1]: libpod-bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc.scope: Deactivated successfully.
Dec 13 03:45:22 compute-0 podman[92426]: 2025-12-13 03:45:22.132993328 +0000 UTC m=+0.664561365 container died bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc (image=quay.io/ceph/ceph:v20, name=affectionate_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-66cdf6a97ec4df1cccd14dec44a595f3a13f2846aa8e8633bff97f235ead9d7b-merged.mount: Deactivated successfully.
Dec 13 03:45:22 compute-0 podman[92426]: 2025-12-13 03:45:22.1692524 +0000 UTC m=+0.700820447 container remove bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc (image=quay.io/ceph/ceph:v20, name=affectionate_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:45:22 compute-0 systemd[1]: libpod-conmon-bfe2c27ccee84ff141f76d78b69b5b5bcb9031a030c65bd60510558e6a2d93dc.scope: Deactivated successfully.
Dec 13 03:45:22 compute-0 sudo[92397]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:22 compute-0 sudo[92349]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:22 compute-0 sudo[92644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:22 compute-0 sudo[92644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:22 compute-0 sudo[92644]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:22 compute-0 sudo[92691]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hittqhunuojciyjlfnsznxzyjutqylki ; /usr/bin/python3'
Dec 13 03:45:22 compute-0 sudo[92691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:22 compute-0 sudo[92694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:45:22 compute-0 sudo[92694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:22 compute-0 python3[92695]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:22 compute-0 podman[92720]: 2025-12-13 03:45:22.509487085 +0000 UTC m=+0.038007692 container create 19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87 (image=quay.io/ceph/ceph:v20, name=jolly_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:45:22 compute-0 systemd[1]: Started libpod-conmon-19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87.scope.
Dec 13 03:45:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d897c0c0ce9c1cc401964917513d691d6cd540ed91af3acf3ad2482725c5bb5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d897c0c0ce9c1cc401964917513d691d6cd540ed91af3acf3ad2482725c5bb5/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d897c0c0ce9c1cc401964917513d691d6cd540ed91af3acf3ad2482725c5bb5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 podman[92720]: 2025-12-13 03:45:22.493359824 +0000 UTC m=+0.021880451 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:22 compute-0 podman[92720]: 2025-12-13 03:45:22.590992367 +0000 UTC m=+0.119512994 container init 19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87 (image=quay.io/ceph/ceph:v20, name=jolly_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:22 compute-0 podman[92720]: 2025-12-13 03:45:22.596680702 +0000 UTC m=+0.125201309 container start 19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87 (image=quay.io/ceph/ceph:v20, name=jolly_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:22 compute-0 podman[92720]: 2025-12-13 03:45:22.599851959 +0000 UTC m=+0.128372596 container attach 19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87 (image=quay.io/ceph/ceph:v20, name=jolly_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:45:22 compute-0 podman[92751]: 2025-12-13 03:45:22.626783617 +0000 UTC m=+0.036629494 container create a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:45:22 compute-0 systemd[1]: Started libpod-conmon-a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4.scope.
Dec 13 03:45:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:22 compute-0 podman[92751]: 2025-12-13 03:45:22.689706639 +0000 UTC m=+0.099552516 container init a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:45:22 compute-0 podman[92751]: 2025-12-13 03:45:22.694822719 +0000 UTC m=+0.104668606 container start a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:45:22 compute-0 zealous_varahamihira[92769]: 167 167
Dec 13 03:45:22 compute-0 systemd[1]: libpod-a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4.scope: Deactivated successfully.
Dec 13 03:45:22 compute-0 podman[92751]: 2025-12-13 03:45:22.698677864 +0000 UTC m=+0.108523801 container attach a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_varahamihira, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:22 compute-0 podman[92751]: 2025-12-13 03:45:22.699003284 +0000 UTC m=+0.108849161 container died a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_varahamihira, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:22 compute-0 podman[92751]: 2025-12-13 03:45:22.611863638 +0000 UTC m=+0.021709515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf95b2e60de66587051c50bbf4e10a1c6670dd6d886a9f9a112bbe78d1fcc20a-merged.mount: Deactivated successfully.
Dec 13 03:45:22 compute-0 podman[92751]: 2025-12-13 03:45:22.728974004 +0000 UTC m=+0.138819881 container remove a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:22 compute-0 systemd[1]: libpod-conmon-a76683f599f794748319268d24c0293cf05bb208633f32f37f6976972deb62a4.scope: Deactivated successfully.
Dec 13 03:45:22 compute-0 podman[92810]: 2025-12-13 03:45:22.870989952 +0000 UTC m=+0.041933499 container create 966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_agnesi, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:22 compute-0 systemd[1]: Started libpod-conmon-966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe.scope.
Dec 13 03:45:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:22 compute-0 podman[92810]: 2025-12-13 03:45:22.855100157 +0000 UTC m=+0.026043724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c050b5fa5c9636eecd2ab42329834996e0629c9da71d800b9d7dd85de01795/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c050b5fa5c9636eecd2ab42329834996e0629c9da71d800b9d7dd85de01795/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c050b5fa5c9636eecd2ab42329834996e0629c9da71d800b9d7dd85de01795/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c050b5fa5c9636eecd2ab42329834996e0629c9da71d800b9d7dd85de01795/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0c050b5fa5c9636eecd2ab42329834996e0629c9da71d800b9d7dd85de01795/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:22 compute-0 podman[92810]: 2025-12-13 03:45:22.973126428 +0000 UTC m=+0.144069975 container init 966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:45:22 compute-0 podman[92810]: 2025-12-13 03:45:22.98341525 +0000 UTC m=+0.154358797 container start 966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_agnesi, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:45:22 compute-0 podman[92810]: 2025-12-13 03:45:22.986008571 +0000 UTC m=+0.156952118 container attach 966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_agnesi, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:23 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:45:23 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Dec 13 03:45:23 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 03:45:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 03:45:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:23 compute-0 jolly_darwin[92738]: Scheduled rgw.rgw update...
Dec 13 03:45:23 compute-0 systemd[1]: libpod-19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87.scope: Deactivated successfully.
Dec 13 03:45:23 compute-0 podman[92720]: 2025-12-13 03:45:23.034261362 +0000 UTC m=+0.562782049 container died 19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87 (image=quay.io/ceph/ceph:v20, name=jolly_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:45:23 compute-0 podman[92720]: 2025-12-13 03:45:23.073762293 +0000 UTC m=+0.602282920 container remove 19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87 (image=quay.io/ceph/ceph:v20, name=jolly_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:23 compute-0 systemd[1]: libpod-conmon-19d49186e43aa031b1736111cc2d60ecb08cc13ef4b2fde8bb535753c6aa9f87.scope: Deactivated successfully.
Dec 13 03:45:23 compute-0 sudo[92691]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4185003505' entity='client.admin' 
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d897c0c0ce9c1cc401964917513d691d6cd540ed91af3acf3ad2482725c5bb5-merged.mount: Deactivated successfully.
Dec 13 03:45:23 compute-0 loving_agnesi[92827]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:45:23 compute-0 loving_agnesi[92827]: --> All data devices are unavailable
Dec 13 03:45:23 compute-0 systemd[1]: libpod-966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe.scope: Deactivated successfully.
Dec 13 03:45:23 compute-0 podman[92810]: 2025-12-13 03:45:23.423858798 +0000 UTC m=+0.594802355 container died 966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_agnesi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0c050b5fa5c9636eecd2ab42329834996e0629c9da71d800b9d7dd85de01795-merged.mount: Deactivated successfully.
Dec 13 03:45:23 compute-0 podman[92810]: 2025-12-13 03:45:23.462830175 +0000 UTC m=+0.633773742 container remove 966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_agnesi, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:45:23 compute-0 systemd[1]: libpod-conmon-966525e62af09a0e108d6514e66151eea3e732fe06a63eda9d210124c6719bbe.scope: Deactivated successfully.
Dec 13 03:45:23 compute-0 sudo[92694]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:23 compute-0 sudo[92875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:23 compute-0 sudo[92875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:23 compute-0 sudo[92875]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:23 compute-0 sudo[92900]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:45:23 compute-0 sudo[92900]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:23 compute-0 podman[92937]: 2025-12-13 03:45:23.880052648 +0000 UTC m=+0.036197392 container create cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cerf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:45:23 compute-0 systemd[1]: Started libpod-conmon-cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4.scope.
Dec 13 03:45:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:23 compute-0 podman[92937]: 2025-12-13 03:45:23.950033424 +0000 UTC m=+0.106178178 container init cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:23 compute-0 podman[92937]: 2025-12-13 03:45:23.956023657 +0000 UTC m=+0.112168391 container start cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:45:23 compute-0 vibrant_cerf[92976]: 167 167
Dec 13 03:45:23 compute-0 podman[92937]: 2025-12-13 03:45:23.959727719 +0000 UTC m=+0.115872473 container attach cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:23 compute-0 systemd[1]: libpod-cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4.scope: Deactivated successfully.
Dec 13 03:45:23 compute-0 conmon[92976]: conmon cc0575016890fbbc5e9d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4.scope/container/memory.events
Dec 13 03:45:23 compute-0 podman[92937]: 2025-12-13 03:45:23.960834289 +0000 UTC m=+0.116979033 container died cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cerf, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:45:23 compute-0 podman[92937]: 2025-12-13 03:45:23.864929994 +0000 UTC m=+0.021074758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-b61dd449f725c1c5e5d53cd0ed8d02b9ac7d3cfd347a644683c71648ebee8c54-merged.mount: Deactivated successfully.
Dec 13 03:45:23 compute-0 podman[92937]: 2025-12-13 03:45:23.99153322 +0000 UTC m=+0.147677964 container remove cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cerf, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:45:23 compute-0 systemd[1]: libpod-conmon-cc0575016890fbbc5e9dcade497ea22abb1c7cffb50a7d6c0a4ee035528dc1e4.scope: Deactivated successfully.
Dec 13 03:45:24 compute-0 ceph-mon[75071]: pgmap v73: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:24 compute-0 ceph-mon[75071]: from='client.14234 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:45:24 compute-0 ceph-mon[75071]: Saving service rgw.rgw spec with placement compute-0
Dec 13 03:45:24 compute-0 podman[93051]: 2025-12-13 03:45:24.13033057 +0000 UTC m=+0.043311127 container create d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_johnson, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:45:24 compute-0 python3[93045]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:45:24 compute-0 systemd[1]: Started libpod-conmon-d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b.scope.
Dec 13 03:45:24 compute-0 podman[93051]: 2025-12-13 03:45:24.109066368 +0000 UTC m=+0.022046945 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68990f59762334f57ae3caf66661155fbb1a02f0089b42c9b82f0b381487c8d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68990f59762334f57ae3caf66661155fbb1a02f0089b42c9b82f0b381487c8d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68990f59762334f57ae3caf66661155fbb1a02f0089b42c9b82f0b381487c8d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68990f59762334f57ae3caf66661155fbb1a02f0089b42c9b82f0b381487c8d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:24 compute-0 podman[93051]: 2025-12-13 03:45:24.220285043 +0000 UTC m=+0.133265600 container init d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:24 compute-0 podman[93051]: 2025-12-13 03:45:24.228221939 +0000 UTC m=+0.141202516 container start d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:45:24 compute-0 podman[93051]: 2025-12-13 03:45:24.232367183 +0000 UTC m=+0.145347760 container attach d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:24 compute-0 python3[93143]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597523.899919-36538-180435071044363/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:45:24 compute-0 angry_johnson[93069]: {
Dec 13 03:45:24 compute-0 angry_johnson[93069]:     "0": [
Dec 13 03:45:24 compute-0 angry_johnson[93069]:         {
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "devices": [
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "/dev/loop3"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             ],
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_name": "ceph_lv0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_size": "21470642176",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "name": "ceph_lv0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "tags": {
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.crush_device_class": "",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.encrypted": "0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osd_id": "0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.type": "block",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.vdo": "0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.with_tpm": "0"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             },
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "type": "block",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "vg_name": "ceph_vg0"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:         }
Dec 13 03:45:24 compute-0 angry_johnson[93069]:     ],
Dec 13 03:45:24 compute-0 angry_johnson[93069]:     "1": [
Dec 13 03:45:24 compute-0 angry_johnson[93069]:         {
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "devices": [
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "/dev/loop4"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             ],
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_name": "ceph_lv1",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_size": "21470642176",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "name": "ceph_lv1",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "tags": {
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.crush_device_class": "",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.encrypted": "0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osd_id": "1",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.type": "block",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.vdo": "0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.with_tpm": "0"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             },
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "type": "block",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "vg_name": "ceph_vg1"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:         }
Dec 13 03:45:24 compute-0 angry_johnson[93069]:     ],
Dec 13 03:45:24 compute-0 angry_johnson[93069]:     "2": [
Dec 13 03:45:24 compute-0 angry_johnson[93069]:         {
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "devices": [
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "/dev/loop5"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             ],
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_name": "ceph_lv2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_size": "21470642176",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "name": "ceph_lv2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "tags": {
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.crush_device_class": "",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.encrypted": "0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osd_id": "2",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.type": "block",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.vdo": "0",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:                 "ceph.with_tpm": "0"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             },
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "type": "block",
Dec 13 03:45:24 compute-0 angry_johnson[93069]:             "vg_name": "ceph_vg2"
Dec 13 03:45:24 compute-0 angry_johnson[93069]:         }
Dec 13 03:45:24 compute-0 angry_johnson[93069]:     ]
Dec 13 03:45:24 compute-0 angry_johnson[93069]: }
Dec 13 03:45:24 compute-0 systemd[1]: libpod-d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b.scope: Deactivated successfully.
Dec 13 03:45:24 compute-0 podman[93051]: 2025-12-13 03:45:24.535775199 +0000 UTC m=+0.448755756 container died d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_johnson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-68990f59762334f57ae3caf66661155fbb1a02f0089b42c9b82f0b381487c8d4-merged.mount: Deactivated successfully.
Dec 13 03:45:24 compute-0 podman[93051]: 2025-12-13 03:45:24.57816783 +0000 UTC m=+0.491148397 container remove d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:45:24 compute-0 systemd[1]: libpod-conmon-d043ce1bd4347f79ec4bca59edfe85aa49319e86f780bc9168a5ee8be7ba256b.scope: Deactivated successfully.
Dec 13 03:45:24 compute-0 sudo[92900]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:24 compute-0 sudo[93182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:24 compute-0 sudo[93182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:24 compute-0 sudo[93182]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:24 compute-0 sudo[93207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:45:24 compute-0 sudo[93207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:24 compute-0 sudo[93255]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouccmudakovpxcplvixkjsxlhleuruuu ; /usr/bin/python3'
Dec 13 03:45:24 compute-0 sudo[93255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:24 compute-0 python3[93257]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:25 compute-0 podman[93269]: 2025-12-13 03:45:25.014390913 +0000 UTC m=+0.039374009 container create a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3 (image=quay.io/ceph/ceph:v20, name=admiring_wiles, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:25 compute-0 podman[93270]: 2025-12-13 03:45:25.03147318 +0000 UTC m=+0.056980400 container create 00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:45:25 compute-0 systemd[1]: Started libpod-conmon-a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3.scope.
Dec 13 03:45:25 compute-0 systemd[1]: Started libpod-conmon-00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2.scope.
Dec 13 03:45:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66df07c1c39a7c85aad645809a76ad6cc639633450188aeb9af860d2d4f77293/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66df07c1c39a7c85aad645809a76ad6cc639633450188aeb9af860d2d4f77293/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66df07c1c39a7c85aad645809a76ad6cc639633450188aeb9af860d2d4f77293/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:25 compute-0 podman[93270]: 2025-12-13 03:45:25.084165203 +0000 UTC m=+0.109672453 container init 00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:25 compute-0 podman[93269]: 2025-12-13 03:45:25.088056269 +0000 UTC m=+0.113039385 container init a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3 (image=quay.io/ceph/ceph:v20, name=admiring_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:45:25 compute-0 podman[93270]: 2025-12-13 03:45:25.090871436 +0000 UTC m=+0.116378656 container start 00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:25 compute-0 podman[93269]: 2025-12-13 03:45:25.092878111 +0000 UTC m=+0.117861207 container start a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3 (image=quay.io/ceph/ceph:v20, name=admiring_wiles, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:45:25 compute-0 frosty_carver[93302]: 167 167
Dec 13 03:45:25 compute-0 systemd[1]: libpod-00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2.scope: Deactivated successfully.
Dec 13 03:45:25 compute-0 podman[93270]: 2025-12-13 03:45:25.095159604 +0000 UTC m=+0.120666844 container attach 00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:25 compute-0 podman[93269]: 2025-12-13 03:45:24.9982404 +0000 UTC m=+0.023223516 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:25 compute-0 podman[93270]: 2025-12-13 03:45:25.09649423 +0000 UTC m=+0.122001450 container died 00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:45:25 compute-0 podman[93269]: 2025-12-13 03:45:25.100230123 +0000 UTC m=+0.125213239 container attach a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3 (image=quay.io/ceph/ceph:v20, name=admiring_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:25 compute-0 podman[93270]: 2025-12-13 03:45:25.009619462 +0000 UTC m=+0.035126712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-1cff554267cdde13b1c122d2df2e2a160d6cc1f73fc18d33de429b0d4432f3b7-merged.mount: Deactivated successfully.
Dec 13 03:45:25 compute-0 podman[93270]: 2025-12-13 03:45:25.148383921 +0000 UTC m=+0.173891141 container remove 00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_carver, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:45:25 compute-0 systemd[1]: libpod-conmon-00359461bbd7daf67993c923eb5b3e7fd4fd4b88dbb7855b47ad6e58cc0e33f2.scope: Deactivated successfully.
Dec 13 03:45:25 compute-0 podman[93346]: 2025-12-13 03:45:25.310116929 +0000 UTC m=+0.057130485 container create 745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:45:25 compute-0 systemd[1]: Started libpod-conmon-745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b.scope.
Dec 13 03:45:25 compute-0 podman[93346]: 2025-12-13 03:45:25.274854054 +0000 UTC m=+0.021867630 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3c79ce785276e588760136d0323830296864cb1a6c64edc61989d11fb994a6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3c79ce785276e588760136d0323830296864cb1a6c64edc61989d11fb994a6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3c79ce785276e588760136d0323830296864cb1a6c64edc61989d11fb994a6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc3c79ce785276e588760136d0323830296864cb1a6c64edc61989d11fb994a6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:25 compute-0 podman[93346]: 2025-12-13 03:45:25.418042304 +0000 UTC m=+0.165055880 container init 745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tesla, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:45:25 compute-0 podman[93346]: 2025-12-13 03:45:25.423835442 +0000 UTC m=+0.170848998 container start 745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tesla, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:25 compute-0 podman[93346]: 2025-12-13 03:45:25.42774598 +0000 UTC m=+0.174759556 container attach 745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:25 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:45:25 compute-0 ceph-mgr[75360]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 03:45:25 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0[75067]: 2025-12-13T03:45:25.574+0000 7f5779a87640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e2 new map
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2025-12-13T03:45:25.573679+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T03:45:25.573242+0000
                                           modified        2025-12-13T03:45:25.573242+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Dec 13 03:45:25 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 03:45:25 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 03:45:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 03:45:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:25 compute-0 ceph-mgr[75360]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Dec 13 03:45:25 compute-0 systemd[1]: libpod-a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3.scope: Deactivated successfully.
Dec 13 03:45:25 compute-0 podman[93269]: 2025-12-13 03:45:25.624733232 +0000 UTC m=+0.649716328 container died a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3 (image=quay.io/ceph/ceph:v20, name=admiring_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:45:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-66df07c1c39a7c85aad645809a76ad6cc639633450188aeb9af860d2d4f77293-merged.mount: Deactivated successfully.
Dec 13 03:45:25 compute-0 podman[93269]: 2025-12-13 03:45:25.854290687 +0000 UTC m=+0.879273783 container remove a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3 (image=quay.io/ceph/ceph:v20, name=admiring_wiles, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:25 compute-0 sudo[93255]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:25 compute-0 systemd[1]: libpod-conmon-a0483b2ff642c9a877f241b99e168b0be5f7e0f36a1eca9b00994f96b1a8ebc3.scope: Deactivated successfully.
Dec 13 03:45:26 compute-0 sudo[93470]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-matmgpqdexhcikhdeihjvukzsmmwffgd ; /usr/bin/python3'
Dec 13 03:45:26 compute-0 sudo[93470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:26 compute-0 lvm[93478]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:26 compute-0 lvm[93478]: VG ceph_vg0 finished
Dec 13 03:45:26 compute-0 lvm[93481]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:26 compute-0 lvm[93481]: VG ceph_vg1 finished
Dec 13 03:45:26 compute-0 lvm[93483]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:45:26 compute-0 lvm[93483]: VG ceph_vg2 finished
Dec 13 03:45:26 compute-0 lvm[93484]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:26 compute-0 lvm[93484]: VG ceph_vg1 finished
Dec 13 03:45:26 compute-0 ceph-mon[75071]: pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Dec 13 03:45:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Dec 13 03:45:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Dec 13 03:45:26 compute-0 ceph-mon[75071]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Dec 13 03:45:26 compute-0 ceph-mon[75071]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Dec 13 03:45:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Dec 13 03:45:26 compute-0 ceph-mon[75071]: osdmap e33: 3 total, 3 up, 3 in
Dec 13 03:45:26 compute-0 ceph-mon[75071]: fsmap cephfs:0
Dec 13 03:45:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:26 compute-0 python3[93475]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:26 compute-0 infallible_tesla[93363]: {}
Dec 13 03:45:26 compute-0 systemd[1]: libpod-745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b.scope: Deactivated successfully.
Dec 13 03:45:26 compute-0 systemd[1]: libpod-745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b.scope: Consumed 1.237s CPU time.
Dec 13 03:45:26 compute-0 podman[93346]: 2025-12-13 03:45:26.212316889 +0000 UTC m=+0.959330455 container died 745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc3c79ce785276e588760136d0323830296864cb1a6c64edc61989d11fb994a6-merged.mount: Deactivated successfully.
Dec 13 03:45:26 compute-0 podman[93346]: 2025-12-13 03:45:26.308845611 +0000 UTC m=+1.055859167 container remove 745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True)
Dec 13 03:45:26 compute-0 systemd[1]: libpod-conmon-745345e4cd6b11d436375ff45ad7711ad0d3fd41a524c27429b6531abc8f3c6b.scope: Deactivated successfully.
Dec 13 03:45:26 compute-0 podman[93487]: 2025-12-13 03:45:26.347010936 +0000 UTC m=+0.144180138 container create 8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b (image=quay.io/ceph/ceph:v20, name=nervous_darwin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:45:26 compute-0 sudo[93207]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:26 compute-0 systemd[1]: Started libpod-conmon-8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b.scope.
Dec 13 03:45:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:26 compute-0 sudo[93515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27e613010ad46796698caf81740710396d89ebeb459876680a63a23c65966d/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27e613010ad46796698caf81740710396d89ebeb459876680a63a23c65966d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc27e613010ad46796698caf81740710396d89ebeb459876680a63a23c65966d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:26 compute-0 sudo[93515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:26 compute-0 podman[93487]: 2025-12-13 03:45:26.33071885 +0000 UTC m=+0.127888082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:26 compute-0 sudo[93515]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:26 compute-0 podman[93487]: 2025-12-13 03:45:26.433615948 +0000 UTC m=+0.230785180 container init 8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b (image=quay.io/ceph/ceph:v20, name=nervous_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:45:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:26 compute-0 podman[93487]: 2025-12-13 03:45:26.439867739 +0000 UTC m=+0.237036941 container start 8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b (image=quay.io/ceph/ceph:v20, name=nervous_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:45:26 compute-0 podman[93487]: 2025-12-13 03:45:26.44319961 +0000 UTC m=+0.240368842 container attach 8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b (image=quay.io/ceph/ceph:v20, name=nervous_darwin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:45:26 compute-0 sudo[93544]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:26 compute-0 sudo[93544]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:26 compute-0 sudo[93544]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:26 compute-0 sudo[93569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:45:26 compute-0 sudo[93569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:26 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:45:26 compute-0 ceph-mgr[75360]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Dec 13 03:45:26 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Dec 13 03:45:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 03:45:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:26 compute-0 nervous_darwin[93523]: Scheduled mds.cephfs update...
Dec 13 03:45:26 compute-0 systemd[1]: libpod-8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b.scope: Deactivated successfully.
Dec 13 03:45:26 compute-0 podman[93487]: 2025-12-13 03:45:26.891688277 +0000 UTC m=+0.688857479 container died 8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b (image=quay.io/ceph/ceph:v20, name=nervous_darwin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:45:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fc27e613010ad46796698caf81740710396d89ebeb459876680a63a23c65966d-merged.mount: Deactivated successfully.
Dec 13 03:45:26 compute-0 podman[93487]: 2025-12-13 03:45:26.923789035 +0000 UTC m=+0.720958237 container remove 8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b (image=quay.io/ceph/ceph:v20, name=nervous_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:26 compute-0 systemd[1]: libpod-conmon-8c0d19133015bd37b411dfeabbd93a845eadb0f76cbca4e33d55c31815e2611b.scope: Deactivated successfully.
Dec 13 03:45:26 compute-0 sudo[93470]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:26 compute-0 podman[93659]: 2025-12-13 03:45:26.960468216 +0000 UTC m=+0.053561630 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Dec 13 03:45:27 compute-0 podman[93659]: 2025-12-13 03:45:27.04949711 +0000 UTC m=+0.142590524 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:45:27 compute-0 ceph-mon[75071]: Saving service mds.cephfs spec with placement compute-0
Dec 13 03:45:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:27 compute-0 sudo[93886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tczbooehndprhxlbrhszzpwuigdbzzfw ; /usr/bin/python3'
Dec 13 03:45:27 compute-0 sudo[93886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:27 compute-0 sudo[93569]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:27 compute-0 sudo[93896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:27 compute-0 sudo[93896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:27 compute-0 sudo[93896]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:27 compute-0 python3[93895]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec 13 03:45:27 compute-0 sudo[93886]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:27 compute-0 sudo[93921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:45:27 compute-0 sudo[93921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:27 compute-0 sudo[94016]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scovrnvdgckwcnzfujgttkmyyvolrfyw ; /usr/bin/python3'
Dec 13 03:45:27 compute-0 sudo[94016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:27 compute-0 podman[94031]: 2025-12-13 03:45:27.92642521 +0000 UTC m=+0.043972153 container create 7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_brown, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:45:27 compute-0 systemd[1]: Started libpod-conmon-7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2.scope.
Dec 13 03:45:27 compute-0 python3[94018]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765597527.3431337-36568-142051898755293/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=8e64fb0469c4b53ef15183c0deae983e54273e57 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:45:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:27 compute-0 sudo[94016]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:27 compute-0 podman[94031]: 2025-12-13 03:45:27.998714679 +0000 UTC m=+0.116261622 container init 7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_brown, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:28 compute-0 podman[94031]: 2025-12-13 03:45:27.905228366 +0000 UTC m=+0.022775329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:28 compute-0 podman[94031]: 2025-12-13 03:45:28.005413234 +0000 UTC m=+0.122960177 container start 7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_brown, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:45:28 compute-0 podman[94031]: 2025-12-13 03:45:28.008374529 +0000 UTC m=+0.125921502 container attach 7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:45:28 compute-0 beautiful_brown[94047]: 167 167
Dec 13 03:45:28 compute-0 systemd[1]: libpod-7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2.scope: Deactivated successfully.
Dec 13 03:45:28 compute-0 podman[94031]: 2025-12-13 03:45:28.011252022 +0000 UTC m=+0.128798975 container died 7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_brown, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:45:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6c7c676abc6427241cfbeca5b6237e05c408235301f7f2ffc0f440abaab6e45-merged.mount: Deactivated successfully.
Dec 13 03:45:28 compute-0 sudo[94109]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlllbbczodeghayvwlphirgcqymyzqkf ; /usr/bin/python3'
Dec 13 03:45:28 compute-0 sudo[94109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:28 compute-0 ceph-mon[75071]: pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 03:45:28 compute-0 ceph-mon[75071]: Saving service mds.cephfs spec with placement compute-0
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:28 compute-0 podman[94031]: 2025-12-13 03:45:28.379551512 +0000 UTC m=+0.497098455 container remove 7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:45:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:28 compute-0 python3[94111]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:28 compute-0 systemd[1]: libpod-conmon-7bbcbe80349b2d15d6f5939a357b7e86d34753a9fcd127135f7dedd948d984e2.scope: Deactivated successfully.
Dec 13 03:45:28 compute-0 podman[94114]: 2025-12-13 03:45:28.480475851 +0000 UTC m=+0.021840892 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:28 compute-0 podman[94114]: 2025-12-13 03:45:28.775709159 +0000 UTC m=+0.317074170 container create 07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c (image=quay.io/ceph/ceph:v20, name=condescending_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:45:28 compute-0 podman[94132]: 2025-12-13 03:45:28.806487309 +0000 UTC m=+0.293473038 container create dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:45:28 compute-0 systemd[1]: Started libpod-conmon-07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c.scope.
Dec 13 03:45:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794e1ba1e4621bb57cdc42ad50e2a670595e2b226f201744a36680578cee42d9/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/794e1ba1e4621bb57cdc42ad50e2a670595e2b226f201744a36680578cee42d9/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:28 compute-0 systemd[1]: Started libpod-conmon-dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1.scope.
Dec 13 03:45:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca78a203d93ab899d8ea39f8eed13b387a46df635d5b30026a5f1a9644a04a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca78a203d93ab899d8ea39f8eed13b387a46df635d5b30026a5f1a9644a04a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca78a203d93ab899d8ea39f8eed13b387a46df635d5b30026a5f1a9644a04a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca78a203d93ab899d8ea39f8eed13b387a46df635d5b30026a5f1a9644a04a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca78a203d93ab899d8ea39f8eed13b387a46df635d5b30026a5f1a9644a04a4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:28 compute-0 podman[94132]: 2025-12-13 03:45:28.786883512 +0000 UTC m=+0.273869271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:28 compute-0 podman[94114]: 2025-12-13 03:45:28.881467218 +0000 UTC m=+0.422832249 container init 07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c (image=quay.io/ceph/ceph:v20, name=condescending_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:28 compute-0 podman[94114]: 2025-12-13 03:45:28.887356348 +0000 UTC m=+0.428721359 container start 07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c (image=quay.io/ceph/ceph:v20, name=condescending_keldysh, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:29 compute-0 podman[94132]: 2025-12-13 03:45:29.257214073 +0000 UTC m=+0.744199832 container init dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:29 compute-0 podman[94132]: 2025-12-13 03:45:29.264433461 +0000 UTC m=+0.751419190 container start dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:45:29 compute-0 podman[94132]: 2025-12-13 03:45:29.327567058 +0000 UTC m=+0.814552807 container attach dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:29 compute-0 ceph-mon[75071]: pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Dec 13 03:45:29 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3209898196' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 03:45:29 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3209898196' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 03:45:29 compute-0 podman[94114]: 2025-12-13 03:45:29.420931457 +0000 UTC m=+0.962296468 container attach 07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c (image=quay.io/ceph/ceph:v20, name=condescending_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:45:29 compute-0 systemd[1]: libpod-07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c.scope: Deactivated successfully.
Dec 13 03:45:29 compute-0 podman[94179]: 2025-12-13 03:45:29.470651175 +0000 UTC m=+0.030007459 container died 07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c (image=quay.io/ceph/ceph:v20, name=condescending_keldysh, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:45:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-794e1ba1e4621bb57cdc42ad50e2a670595e2b226f201744a36680578cee42d9-merged.mount: Deactivated successfully.
Dec 13 03:45:29 compute-0 podman[94179]: 2025-12-13 03:45:29.512093134 +0000 UTC m=+0.071449408 container remove 07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c (image=quay.io/ceph/ceph:v20, name=condescending_keldysh, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:29 compute-0 systemd[1]: libpod-conmon-07a130fc05e6f9719873328cc04dd1d64b858e2aaa31668bb9d1487e5ad38e4c.scope: Deactivated successfully.
Dec 13 03:45:29 compute-0 sudo[94109]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:29 compute-0 pensive_villani[94152]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:45:29 compute-0 pensive_villani[94152]: --> All data devices are unavailable
Dec 13 03:45:29 compute-0 systemd[1]: libpod-dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1.scope: Deactivated successfully.
Dec 13 03:45:29 compute-0 podman[94132]: 2025-12-13 03:45:29.73744037 +0000 UTC m=+1.224426109 container died dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ca78a203d93ab899d8ea39f8eed13b387a46df635d5b30026a5f1a9644a04a4-merged.mount: Deactivated successfully.
Dec 13 03:45:30 compute-0 sudo[94245]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abahxcpdanhykoxgjrjamimvbnzbtwom ; /usr/bin/python3'
Dec 13 03:45:30 compute-0 sudo[94245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:30 compute-0 podman[94132]: 2025-12-13 03:45:30.149880447 +0000 UTC m=+1.636866176 container remove dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_villani, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:45:30 compute-0 sudo[93921]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:30 compute-0 python3[94247]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:30 compute-0 systemd[1]: libpod-conmon-dc8a3c7d2b587a825a88f9ebfa0d5a2a96c2bf2af5bf510352b3ee77566c91c1.scope: Deactivated successfully.
Dec 13 03:45:30 compute-0 sudo[94248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:30 compute-0 sudo[94248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:30 compute-0 sudo[94248]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:30 compute-0 sudo[94285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:45:30 compute-0 sudo[94285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:30 compute-0 podman[94272]: 2025-12-13 03:45:30.256109819 +0000 UTC m=+0.023011197 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:30 compute-0 podman[94272]: 2025-12-13 03:45:30.351817506 +0000 UTC m=+0.118718854 container create 9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f (image=quay.io/ceph/ceph:v20, name=hopeful_noyce, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:45:30 compute-0 systemd[1]: Started libpod-conmon-9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f.scope.
Dec 13 03:45:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3209898196' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Dec 13 03:45:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3209898196' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Dec 13 03:45:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b6bd6b7f875b841b31821fdfe9ca8b0ef753c1f65b451ca0e19883204f9b70/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6b6bd6b7f875b841b31821fdfe9ca8b0ef753c1f65b451ca0e19883204f9b70/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:30 compute-0 podman[94272]: 2025-12-13 03:45:30.451199411 +0000 UTC m=+0.218100789 container init 9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f (image=quay.io/ceph/ceph:v20, name=hopeful_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:30 compute-0 podman[94272]: 2025-12-13 03:45:30.476167293 +0000 UTC m=+0.243068641 container start 9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f (image=quay.io/ceph/ceph:v20, name=hopeful_noyce, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:30 compute-0 podman[94272]: 2025-12-13 03:45:30.66868749 +0000 UTC m=+0.435588938 container attach 9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f (image=quay.io/ceph/ceph:v20, name=hopeful_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:30 compute-0 podman[94351]: 2025-12-13 03:45:30.78935298 +0000 UTC m=+0.079323526 container create c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_yonath, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:30 compute-0 systemd[1]: Started libpod-conmon-c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d.scope.
Dec 13 03:45:30 compute-0 podman[94351]: 2025-12-13 03:45:30.73438966 +0000 UTC m=+0.024360226 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:30 compute-0 podman[94351]: 2025-12-13 03:45:30.885424518 +0000 UTC m=+0.175395084 container init c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:45:30 compute-0 podman[94351]: 2025-12-13 03:45:30.89034214 +0000 UTC m=+0.180312686 container start c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:45:30 compute-0 flamboyant_yonath[94368]: 167 167
Dec 13 03:45:30 compute-0 systemd[1]: libpod-c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d.scope: Deactivated successfully.
Dec 13 03:45:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 03:45:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3108139773' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:45:30 compute-0 hopeful_noyce[94315]: 
Dec 13 03:45:30 compute-0 hopeful_noyce[94315]: {"fsid":"437a9f04-06b7-56e3-8a4b-f52a1199dd32","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":135,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":33,"num_osds":3,"num_up_osds":3,"osd_up_since":1765597498,"num_in_osds":3,"osd_in_since":1765597469,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83894272,"bytes_avail":64328032256,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2025-12-13T03:45:25:573679+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T03:44:42.424010+0000","services":{}},"progress_events":{}}
Dec 13 03:45:30 compute-0 systemd[1]: libpod-9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f.scope: Deactivated successfully.
Dec 13 03:45:30 compute-0 podman[94351]: 2025-12-13 03:45:30.994518572 +0000 UTC m=+0.284489138 container attach c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:30 compute-0 podman[94351]: 2025-12-13 03:45:30.994838902 +0000 UTC m=+0.284809468 container died c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:31 compute-0 podman[94272]: 2025-12-13 03:45:31.005016816 +0000 UTC m=+0.771918184 container died 9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f (image=quay.io/ceph/ceph:v20, name=hopeful_noyce, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e7bd36e787c87659013fc2d245926160620c4591e52c0edefa7356741ee7c98-merged.mount: Deactivated successfully.
Dec 13 03:45:31 compute-0 podman[94351]: 2025-12-13 03:45:31.090859848 +0000 UTC m=+0.380830394 container remove c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_yonath, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a6b6bd6b7f875b841b31821fdfe9ca8b0ef753c1f65b451ca0e19883204f9b70-merged.mount: Deactivated successfully.
Dec 13 03:45:31 compute-0 podman[94385]: 2025-12-13 03:45:31.135446428 +0000 UTC m=+0.149841894 container remove 9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f (image=quay.io/ceph/ceph:v20, name=hopeful_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:31 compute-0 systemd[1]: libpod-conmon-9ac173e4dc4307de2e73deef60a26d8383c6a62689dcaa0dc37b510a945df35f.scope: Deactivated successfully.
Dec 13 03:45:31 compute-0 sudo[94245]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:31 compute-0 systemd[1]: libpod-conmon-c9733b11169ff3ad14c32faa38d797f6ce54453ce368c63b613407c4ac39be2d.scope: Deactivated successfully.
Dec 13 03:45:31 compute-0 podman[94408]: 2025-12-13 03:45:31.24516183 +0000 UTC m=+0.042888621 container create b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:31 compute-0 systemd[1]: Started libpod-conmon-b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb.scope.
Dec 13 03:45:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d39c496ccd052b7ff99c10b25cf04fa69d5d60e1080e91c06840cdb5f2e16db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d39c496ccd052b7ff99c10b25cf04fa69d5d60e1080e91c06840cdb5f2e16db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d39c496ccd052b7ff99c10b25cf04fa69d5d60e1080e91c06840cdb5f2e16db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d39c496ccd052b7ff99c10b25cf04fa69d5d60e1080e91c06840cdb5f2e16db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 compute-0 podman[94408]: 2025-12-13 03:45:31.224939786 +0000 UTC m=+0.022666587 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:31 compute-0 sudo[94451]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elwezlpcbyldsnnxcrydxlyizygpezsw ; /usr/bin/python3'
Dec 13 03:45:31 compute-0 sudo[94451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:31 compute-0 podman[94408]: 2025-12-13 03:45:31.436630658 +0000 UTC m=+0.234357479 container init b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:45:31 compute-0 ceph-mon[75071]: pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3108139773' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:45:31 compute-0 podman[94408]: 2025-12-13 03:45:31.446368989 +0000 UTC m=+0.244095790 container start b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:45:31 compute-0 python3[94453]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:31 compute-0 podman[94408]: 2025-12-13 03:45:31.485180541 +0000 UTC m=+0.282907362 container attach b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:45:31 compute-0 podman[94456]: 2025-12-13 03:45:31.545710822 +0000 UTC m=+0.055141066 container create 490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63 (image=quay.io/ceph/ceph:v20, name=gallant_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:45:31 compute-0 systemd[1]: Started libpod-conmon-490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63.scope.
Dec 13 03:45:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:31 compute-0 podman[94456]: 2025-12-13 03:45:31.514445347 +0000 UTC m=+0.023875601 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ce43a2c106d3c1cab990817e8d411c0e6103492d1f65c3352e58a8f751a05e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33ce43a2c106d3c1cab990817e8d411c0e6103492d1f65c3352e58a8f751a05e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]: {
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:     "0": [
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:         {
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "devices": [
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "/dev/loop3"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             ],
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_name": "ceph_lv0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_size": "21470642176",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "name": "ceph_lv0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "tags": {
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.crush_device_class": "",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.encrypted": "0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osd_id": "0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.type": "block",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.vdo": "0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.with_tpm": "0"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             },
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "type": "block",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "vg_name": "ceph_vg0"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:         }
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:     ],
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:     "1": [
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:         {
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "devices": [
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "/dev/loop4"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             ],
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_name": "ceph_lv1",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_size": "21470642176",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "name": "ceph_lv1",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "tags": {
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.crush_device_class": "",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.encrypted": "0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osd_id": "1",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.type": "block",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.vdo": "0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.with_tpm": "0"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             },
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "type": "block",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "vg_name": "ceph_vg1"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:         }
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:     ],
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:     "2": [
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:         {
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "devices": [
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "/dev/loop5"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             ],
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_name": "ceph_lv2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_size": "21470642176",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "name": "ceph_lv2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "tags": {
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.crush_device_class": "",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.encrypted": "0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osd_id": "2",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.type": "block",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.vdo": "0",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:                 "ceph.with_tpm": "0"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             },
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "type": "block",
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:             "vg_name": "ceph_vg2"
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:         }
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]:     ]
Dec 13 03:45:31 compute-0 agitated_khayyam[94429]: }
Dec 13 03:45:31 compute-0 systemd[1]: libpod-b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb.scope: Deactivated successfully.
Dec 13 03:45:31 compute-0 podman[94456]: 2025-12-13 03:45:31.825493092 +0000 UTC m=+0.334923326 container init 490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63 (image=quay.io/ceph/ceph:v20, name=gallant_dewdney, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:31 compute-0 podman[94408]: 2025-12-13 03:45:31.826054889 +0000 UTC m=+0.623781710 container died b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:45:31 compute-0 podman[94456]: 2025-12-13 03:45:31.832034862 +0000 UTC m=+0.341465106 container start 490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63 (image=quay.io/ceph/ceph:v20, name=gallant_dewdney, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:31 compute-0 podman[94456]: 2025-12-13 03:45:31.845485311 +0000 UTC m=+0.354915545 container attach 490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63 (image=quay.io/ceph/ceph:v20, name=gallant_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-6d39c496ccd052b7ff99c10b25cf04fa69d5d60e1080e91c06840cdb5f2e16db-merged.mount: Deactivated successfully.
Dec 13 03:45:31 compute-0 podman[94408]: 2025-12-13 03:45:31.878944629 +0000 UTC m=+0.676671440 container remove b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_khayyam, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:31 compute-0 systemd[1]: libpod-conmon-b98f1bfea65c63a149f9febe4b8156abaf9bef764da1d9d0655c57842669f1bb.scope: Deactivated successfully.
Dec 13 03:45:31 compute-0 sudo[94285]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:31 compute-0 sudo[94494]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:31 compute-0 sudo[94494]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:31 compute-0 sudo[94494]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:32 compute-0 sudo[94537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:45:32 compute-0 sudo[94537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 03:45:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455201656' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:45:32 compute-0 gallant_dewdney[94471]: 
Dec 13 03:45:32 compute-0 gallant_dewdney[94471]: {"epoch":1,"fsid":"437a9f04-06b7-56e3-8a4b-f52a1199dd32","modified":"2025-12-13T03:43:08.228709Z","created":"2025-12-13T03:43:08.228709Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Dec 13 03:45:32 compute-0 gallant_dewdney[94471]: dumped monmap epoch 1
Dec 13 03:45:32 compute-0 podman[94575]: 2025-12-13 03:45:32.28431285 +0000 UTC m=+0.020565745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:32 compute-0 systemd[1]: libpod-490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63.scope: Deactivated successfully.
Dec 13 03:45:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:32 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3455201656' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 03:45:32 compute-0 podman[94575]: 2025-12-13 03:45:32.550419586 +0000 UTC m=+0.286672461 container create 79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:45:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:32 compute-0 systemd[1]: Started libpod-conmon-79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501.scope.
Dec 13 03:45:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:32 compute-0 podman[94575]: 2025-12-13 03:45:32.703332668 +0000 UTC m=+0.439585553 container init 79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:32 compute-0 podman[94575]: 2025-12-13 03:45:32.709476446 +0000 UTC m=+0.445729321 container start 79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:32 compute-0 optimistic_pasteur[94606]: 167 167
Dec 13 03:45:32 compute-0 podman[94575]: 2025-12-13 03:45:32.712684218 +0000 UTC m=+0.448937113 container attach 79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:45:32 compute-0 systemd[1]: libpod-79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501.scope: Deactivated successfully.
Dec 13 03:45:32 compute-0 conmon[94606]: conmon 79cde46cbc6077319ca9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501.scope/container/memory.events
Dec 13 03:45:32 compute-0 podman[94575]: 2025-12-13 03:45:32.714064699 +0000 UTC m=+0.450317574 container died 79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:45:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-371a35bb9833a5ae88172f5b217252753eab4b06dc91e4d5976637402299c13f-merged.mount: Deactivated successfully.
Dec 13 03:45:32 compute-0 podman[94575]: 2025-12-13 03:45:32.938831708 +0000 UTC m=+0.675084583 container remove 79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_pasteur, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:45:32 compute-0 systemd[1]: libpod-conmon-79cde46cbc6077319ca96c02ac3b73667b68e911d7fc67e7a894a0534e4f3501.scope: Deactivated successfully.
Dec 13 03:45:33 compute-0 podman[94456]: 2025-12-13 03:45:33.01533064 +0000 UTC m=+1.524760874 container died 490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63 (image=quay.io/ceph/ceph:v20, name=gallant_dewdney, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 03:45:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-33ce43a2c106d3c1cab990817e8d411c0e6103492d1f65c3352e58a8f751a05e-merged.mount: Deactivated successfully.
Dec 13 03:45:33 compute-0 podman[94589]: 2025-12-13 03:45:33.310129865 +0000 UTC m=+0.912237091 container remove 490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63 (image=quay.io/ceph/ceph:v20, name=gallant_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:33 compute-0 systemd[1]: libpod-conmon-490627dbe3d86b76ece22b0f29b7412581a896e2cdc6226080b220f97fcaee63.scope: Deactivated successfully.
Dec 13 03:45:33 compute-0 podman[94630]: 2025-12-13 03:45:33.328089695 +0000 UTC m=+0.287369201 container create 7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:45:33 compute-0 sudo[94451]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:33 compute-0 systemd[1]: Started libpod-conmon-7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81.scope.
Dec 13 03:45:33 compute-0 podman[94630]: 2025-12-13 03:45:33.296309095 +0000 UTC m=+0.255588621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17be41c9acee8350452b8ee656a0e147812dc1a06b0f716369b67ad54f3bb9c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17be41c9acee8350452b8ee656a0e147812dc1a06b0f716369b67ad54f3bb9c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17be41c9acee8350452b8ee656a0e147812dc1a06b0f716369b67ad54f3bb9c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17be41c9acee8350452b8ee656a0e147812dc1a06b0f716369b67ad54f3bb9c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:33 compute-0 podman[94630]: 2025-12-13 03:45:33.450502015 +0000 UTC m=+0.409781541 container init 7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 03:45:33 compute-0 podman[94630]: 2025-12-13 03:45:33.456384035 +0000 UTC m=+0.415663541 container start 7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_goldwasser, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:33 compute-0 podman[94630]: 2025-12-13 03:45:33.466766075 +0000 UTC m=+0.426045601 container attach 7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_goldwasser, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:45:33 compute-0 ceph-mon[75071]: pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:33 compute-0 sudo[94687]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlvtrbupxqmhuirnqlpaihomdlfmgcsg ; /usr/bin/python3'
Dec 13 03:45:33 compute-0 sudo[94687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:33 compute-0 python3[94689]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:33 compute-0 podman[94730]: 2025-12-13 03:45:33.981222842 +0000 UTC m=+0.052785147 container create 84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9 (image=quay.io/ceph/ceph:v20, name=magical_mcclintock, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Dec 13 03:45:34 compute-0 systemd[1]: Started libpod-conmon-84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9.scope.
Dec 13 03:45:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a89fbb672c961db74c3b02b49916888c56406cb890f2a13685148ba3bfe3ce7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a89fbb672c961db74c3b02b49916888c56406cb890f2a13685148ba3bfe3ce7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:34 compute-0 podman[94730]: 2025-12-13 03:45:33.961411949 +0000 UTC m=+0.032974284 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:34 compute-0 podman[94730]: 2025-12-13 03:45:34.06069751 +0000 UTC m=+0.132259835 container init 84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9 (image=quay.io/ceph/ceph:v20, name=magical_mcclintock, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:34 compute-0 podman[94730]: 2025-12-13 03:45:34.068926418 +0000 UTC m=+0.140488713 container start 84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9 (image=quay.io/ceph/ceph:v20, name=magical_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:45:34 compute-0 podman[94730]: 2025-12-13 03:45:34.072466001 +0000 UTC m=+0.144028306 container attach 84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9 (image=quay.io/ceph/ceph:v20, name=magical_mcclintock, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:34 compute-0 lvm[94769]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:34 compute-0 lvm[94771]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:34 compute-0 lvm[94769]: VG ceph_vg0 finished
Dec 13 03:45:34 compute-0 lvm[94771]: VG ceph_vg1 finished
Dec 13 03:45:34 compute-0 lvm[94773]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:45:34 compute-0 lvm[94773]: VG ceph_vg2 finished
Dec 13 03:45:34 compute-0 trusting_goldwasser[94649]: {}
Dec 13 03:45:34 compute-0 systemd[1]: libpod-7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81.scope: Deactivated successfully.
Dec 13 03:45:34 compute-0 systemd[1]: libpod-7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81.scope: Consumed 1.319s CPU time.
Dec 13 03:45:34 compute-0 podman[94630]: 2025-12-13 03:45:34.269115918 +0000 UTC m=+1.228395424 container died 7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-17be41c9acee8350452b8ee656a0e147812dc1a06b0f716369b67ad54f3bb9c7-merged.mount: Deactivated successfully.
Dec 13 03:45:34 compute-0 podman[94630]: 2025-12-13 03:45:34.320430801 +0000 UTC m=+1.279710307 container remove 7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:34 compute-0 systemd[1]: libpod-conmon-7469b042b9e237a3f6ec36be1069da91fcb1ea84ca303a57b60b53d61553ee81.scope: Deactivated successfully.
Dec 13 03:45:34 compute-0 sudo[94537]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:34 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:34 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:34 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 250f6c44-5741-4cee-8690-03540eb0aa7e (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 03:45:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gnpexe", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Dec 13 03:45:34 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gnpexe", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 03:45:34 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gnpexe", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 03:45:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Dec 13 03:45:34 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:34 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.gnpexe on compute-0
Dec 13 03:45:34 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.gnpexe on compute-0
Dec 13 03:45:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:34 compute-0 sudo[94805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:34 compute-0 sudo[94805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:34 compute-0 sudo[94805]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:34 compute-0 sudo[94830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:45:34 compute-0 sudo[94830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Dec 13 03:45:34 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3795628739' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 03:45:34 compute-0 magical_mcclintock[94759]: [client.openstack]
Dec 13 03:45:34 compute-0 magical_mcclintock[94759]:         key = AQCT4DxpAAAAABAAxBrRSbggkwGSJCw4erm++Q==
Dec 13 03:45:34 compute-0 magical_mcclintock[94759]:         caps mgr = "allow *"
Dec 13 03:45:34 compute-0 magical_mcclintock[94759]:         caps mon = "profile rbd"
Dec 13 03:45:34 compute-0 magical_mcclintock[94759]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Dec 13 03:45:34 compute-0 systemd[1]: libpod-84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9.scope: Deactivated successfully.
Dec 13 03:45:34 compute-0 podman[94730]: 2025-12-13 03:45:34.60431804 +0000 UTC m=+0.675880355 container died 84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9 (image=quay.io/ceph/ceph:v20, name=magical_mcclintock, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a89fbb672c961db74c3b02b49916888c56406cb890f2a13685148ba3bfe3ce7-merged.mount: Deactivated successfully.
Dec 13 03:45:34 compute-0 podman[94730]: 2025-12-13 03:45:34.64474762 +0000 UTC m=+0.716309935 container remove 84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9 (image=quay.io/ceph/ceph:v20, name=magical_mcclintock, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:34 compute-0 systemd[1]: libpod-conmon-84d936cf1b0012784941d466352aad64138ddabb223d5d2dfc6438c2a07894c9.scope: Deactivated successfully.
Dec 13 03:45:34 compute-0 sudo[94687]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:34 compute-0 podman[94909]: 2025-12-13 03:45:34.859793689 +0000 UTC m=+0.036720954 container create 88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_payne, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:45:34 compute-0 systemd[1]: Started libpod-conmon-88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd.scope.
Dec 13 03:45:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:34 compute-0 podman[94909]: 2025-12-13 03:45:34.928277959 +0000 UTC m=+0.105205244 container init 88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:45:34 compute-0 podman[94909]: 2025-12-13 03:45:34.934769797 +0000 UTC m=+0.111697062 container start 88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_payne, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:34 compute-0 gifted_payne[94925]: 167 167
Dec 13 03:45:34 compute-0 systemd[1]: libpod-88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd.scope: Deactivated successfully.
Dec 13 03:45:34 compute-0 podman[94909]: 2025-12-13 03:45:34.939909465 +0000 UTC m=+0.116836750 container attach 88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_payne, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:34 compute-0 podman[94909]: 2025-12-13 03:45:34.844648951 +0000 UTC m=+0.021576246 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:34 compute-0 podman[94909]: 2025-12-13 03:45:34.940458561 +0000 UTC m=+0.117385846 container died 88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_payne, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-5011280587d4ea65a0aa9a0e1dbd6488a578a1bea94dc2394d6ec385e24acf80-merged.mount: Deactivated successfully.
Dec 13 03:45:34 compute-0 podman[94909]: 2025-12-13 03:45:34.973883038 +0000 UTC m=+0.150810303 container remove 88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_payne, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:45:34 compute-0 systemd[1]: libpod-conmon-88ab64f85237100a35c322e65c1952429e21442a3f329174f7f78be698c7d0cd.scope: Deactivated successfully.
Dec 13 03:45:35 compute-0 systemd[1]: Reloading.
Dec 13 03:45:35 compute-0 systemd-rc-local-generator[94970]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:45:35 compute-0 systemd-sysv-generator[94973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:45:35 compute-0 systemd[1]: Reloading.
Dec 13 03:45:35 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:35 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:35 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gnpexe", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Dec 13 03:45:35 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gnpexe", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Dec 13 03:45:35 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:35 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:35 compute-0 ceph-mon[75071]: Deploying daemon rgw.rgw.compute-0.gnpexe on compute-0
Dec 13 03:45:35 compute-0 ceph-mon[75071]: pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3795628739' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Dec 13 03:45:35 compute-0 systemd-rc-local-generator[95011]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:45:35 compute-0 systemd-sysv-generator[95015]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:45:35 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.gnpexe for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:45:35 compute-0 podman[95118]: 2025-12-13 03:45:35.87683958 +0000 UTC m=+0.040363749 container create 5dc4e803410d0148ba86af2d2bd8b31ea08082e81c207dc87441da7b9136723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-rgw-rgw-compute-0-gnpexe, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda4a41bfb23fd7a95b1a8d1f71f41b495610062e8dc1efaca83de4ac8280b3e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda4a41bfb23fd7a95b1a8d1f71f41b495610062e8dc1efaca83de4ac8280b3e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda4a41bfb23fd7a95b1a8d1f71f41b495610062e8dc1efaca83de4ac8280b3e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda4a41bfb23fd7a95b1a8d1f71f41b495610062e8dc1efaca83de4ac8280b3e/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.gnpexe supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:35 compute-0 podman[95118]: 2025-12-13 03:45:35.927272049 +0000 UTC m=+0.090796238 container init 5dc4e803410d0148ba86af2d2bd8b31ea08082e81c207dc87441da7b9136723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-rgw-rgw-compute-0-gnpexe, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:35 compute-0 podman[95118]: 2025-12-13 03:45:35.935074793 +0000 UTC m=+0.098598962 container start 5dc4e803410d0148ba86af2d2bd8b31ea08082e81c207dc87441da7b9136723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-rgw-rgw-compute-0-gnpexe, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:35 compute-0 bash[95118]: 5dc4e803410d0148ba86af2d2bd8b31ea08082e81c207dc87441da7b9136723a
Dec 13 03:45:35 compute-0 podman[95118]: 2025-12-13 03:45:35.856836651 +0000 UTC m=+0.020360840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:35 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.gnpexe for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:45:35 compute-0 sudo[94830]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:35 compute-0 radosgw[95161]: deferred set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:45:35 compute-0 radosgw[95161]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Dec 13 03:45:35 compute-0 radosgw[95161]: framework: beast
Dec 13 03:45:35 compute-0 radosgw[95161]: framework conf key: endpoint, val: 192.168.122.100:8082
Dec 13 03:45:35 compute-0 radosgw[95161]: init_numa not setting numa affinity
Dec 13 03:45:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:35 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:36 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 03:45:36 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 250f6c44-5741-4cee-8690-03540eb0aa7e (Updating rgw.rgw deployment (+1 -> 1))
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 250f6c44-5741-4cee-8690-03540eb0aa7e (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Dec 13 03:45:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 03:45:36 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Dec 13 03:45:36 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev a39aec05-2a1e-45e4-a3c3-f7a8bd7120cf (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 03:45:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bszvvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Dec 13 03:45:36 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bszvvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 03:45:36 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bszvvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 03:45:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:36 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.bszvvn on compute-0
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.bszvvn on compute-0
Dec 13 03:45:36 compute-0 sudo[95240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:36 compute-0 sudo[95284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvzlvdiabhjwbqdsdtagjbxiihmuzdvg ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765597535.7646458-36640-195189242118563/async_wrapper.py j692368054011 30 /home/zuul/.ansible/tmp/ansible-tmp-1765597535.7646458-36640-195189242118563/AnsiballZ_command.py _'
Dec 13 03:45:36 compute-0 sudo[95240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:36 compute-0 sudo[95284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:36 compute-0 sudo[95240]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:36 compute-0 sudo[95289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32
Dec 13 03:45:36 compute-0 sudo[95289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:36 compute-0 ansible-async_wrapper.py[95288]: Invoked with j692368054011 30 /home/zuul/.ansible/tmp/ansible-tmp-1765597535.7646458-36640-195189242118563/AnsiballZ_command.py _
Dec 13 03:45:36 compute-0 ansible-async_wrapper.py[95316]: Starting module and watcher
Dec 13 03:45:36 compute-0 ansible-async_wrapper.py[95316]: Start watching 95317 (30)
Dec 13 03:45:36 compute-0 ansible-async_wrapper.py[95317]: Start module (95317)
Dec 13 03:45:36 compute-0 ansible-async_wrapper.py[95288]: Return async_wrapper task started.
Dec 13 03:45:36 compute-0 sudo[95284]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:36 compute-0 python3[95318]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:36 compute-0 podman[95336]: 2025-12-13 03:45:36.465166134 +0000 UTC m=+0.045203609 container create 73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:36 compute-0 systemd[1]: Started libpod-conmon-73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968.scope.
Dec 13 03:45:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:36 compute-0 podman[95336]: 2025-12-13 03:45:36.442210919 +0000 UTC m=+0.022248414 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e7401040585624a37f039a2c1c8f2650c68c0afed83fae0f31a47e11492d8d/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1e7401040585624a37f039a2c1c8f2650c68c0afed83fae0f31a47e11492d8d/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:36 compute-0 podman[95336]: 2025-12-13 03:45:36.554423745 +0000 UTC m=+0.134461240 container init 73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:45:36 compute-0 podman[95336]: 2025-12-13 03:45:36.56188648 +0000 UTC m=+0.141923955 container start 73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:36 compute-0 podman[95372]: 2025-12-13 03:45:36.565422962 +0000 UTC m=+0.046414413 container create 97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:45:36 compute-0 podman[95336]: 2025-12-13 03:45:36.568518072 +0000 UTC m=+0.148555737 container attach 73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:36 compute-0 systemd[1]: Started libpod-conmon-97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a.scope.
Dec 13 03:45:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:36 compute-0 podman[95372]: 2025-12-13 03:45:36.639192116 +0000 UTC m=+0.120183577 container init 97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:45:36 compute-0 podman[95372]: 2025-12-13 03:45:36.547062662 +0000 UTC m=+0.028054133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:36 compute-0 podman[95372]: 2025-12-13 03:45:36.644258263 +0000 UTC m=+0.125249714 container start 97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:45:36 compute-0 podman[95372]: 2025-12-13 03:45:36.647730133 +0000 UTC m=+0.128721584 container attach 97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:45:36 compute-0 frosty_hugle[95392]: 167 167
Dec 13 03:45:36 compute-0 systemd[1]: libpod-97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a.scope: Deactivated successfully.
Dec 13 03:45:36 compute-0 conmon[95392]: conmon 97482851364f55f997bf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a.scope/container/memory.events
Dec 13 03:45:36 compute-0 podman[95372]: 2025-12-13 03:45:36.65041552 +0000 UTC m=+0.131406971 container died 97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:45:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e2158325221a8f3181f0de015c9c562f31a4dc3b0c3af45aac661e4a2469fb4-merged.mount: Deactivated successfully.
Dec 13 03:45:36 compute-0 podman[95372]: 2025-12-13 03:45:36.697951215 +0000 UTC m=+0.178942656 container remove 97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:36 compute-0 systemd[1]: libpod-conmon-97482851364f55f997bf38ff3e687468445ba1054b09ee7a874869f57837522a.scope: Deactivated successfully.
Dec 13 03:45:36 compute-0 systemd[1]: Reloading.
Dec 13 03:45:36 compute-0 systemd-rc-local-generator[95452]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:45:36 compute-0 systemd-sysv-generator[95456]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:45:36 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: Saving service rgw.rgw spec with placement compute-0
Dec 13 03:45:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bszvvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Dec 13 03:45:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.bszvvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Dec 13 03:45:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:37 compute-0 ceph-mon[75071]: Deploying daemon mds.cephfs.compute-0.bszvvn on compute-0
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Dec 13 03:45:37 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Dec 13 03:45:37 compute-0 objective_ramanujan[95374]: 
Dec 13 03:45:37 compute-0 objective_ramanujan[95374]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2456594442' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 03:45:37 compute-0 systemd[1]: libpod-73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968.scope: Deactivated successfully.
Dec 13 03:45:37 compute-0 podman[95336]: 2025-12-13 03:45:37.033423836 +0000 UTC m=+0.613461311 container died 73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:37 compute-0 systemd[1]: Reloading.
Dec 13 03:45:37 compute-0 systemd-rc-local-generator[95505]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:45:37 compute-0 systemd-sysv-generator[95510]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:45:37 compute-0 ceph-mgr[75360]: [progress INFO root] Writing back 4 completed events
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mgr[75360]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
Dec 13 03:45:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1e7401040585624a37f039a2c1c8f2650c68c0afed83fae0f31a47e11492d8d-merged.mount: Deactivated successfully.
Dec 13 03:45:37 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.bszvvn for 437a9f04-06b7-56e3-8a4b-f52a1199dd32...
Dec 13 03:45:37 compute-0 podman[95336]: 2025-12-13 03:45:37.331696122 +0000 UTC m=+0.911733597 container remove 73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968 (image=quay.io/ceph/ceph:v20, name=objective_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:45:37 compute-0 systemd[1]: libpod-conmon-73a5da5524a2f8733d33a7db46d45e90bef2a55d23ee129becf100fc249fe968.scope: Deactivated successfully.
Dec 13 03:45:37 compute-0 ansible-async_wrapper.py[95317]: Module complete (95317)
Dec 13 03:45:37 compute-0 sudo[95584]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqwimcqevqvbhajdxvsgeazdsglzkohb ; /usr/bin/python3'
Dec 13 03:45:37 compute-0 sudo[95584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:37 compute-0 podman[95615]: 2025-12-13 03:45:37.533091466 +0000 UTC m=+0.039799812 container create 472bdf145450a5ed29dcfdedec058d5bc78d2012ee1374a218807ff71f94117d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mds-cephfs-compute-0-bszvvn, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:37 compute-0 python3[95592]: ansible-ansible.legacy.async_status Invoked with jid=j692368054011.95288 mode=status _async_dir=/root/.ansible_async
Dec 13 03:45:37 compute-0 sudo[95584]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a508404cbdf58b4703ba8e58c4cf26491456e5878b90bf082c8e0273b0fd69/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a508404cbdf58b4703ba8e58c4cf26491456e5878b90bf082c8e0273b0fd69/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a508404cbdf58b4703ba8e58c4cf26491456e5878b90bf082c8e0273b0fd69/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79a508404cbdf58b4703ba8e58c4cf26491456e5878b90bf082c8e0273b0fd69/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.bszvvn supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:37 compute-0 podman[95615]: 2025-12-13 03:45:37.58892321 +0000 UTC m=+0.095631576 container init 472bdf145450a5ed29dcfdedec058d5bc78d2012ee1374a218807ff71f94117d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mds-cephfs-compute-0-bszvvn, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:45:37 compute-0 podman[95615]: 2025-12-13 03:45:37.596440217 +0000 UTC m=+0.103148563 container start 472bdf145450a5ed29dcfdedec058d5bc78d2012ee1374a218807ff71f94117d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mds-cephfs-compute-0-bszvvn, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:37 compute-0 bash[95615]: 472bdf145450a5ed29dcfdedec058d5bc78d2012ee1374a218807ff71f94117d
Dec 13 03:45:37 compute-0 podman[95615]: 2025-12-13 03:45:37.515654652 +0000 UTC m=+0.022363018 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:37 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.bszvvn for 437a9f04-06b7-56e3-8a4b-f52a1199dd32.
Dec 13 03:45:37 compute-0 ceph-mds[95635]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:45:37 compute-0 ceph-mds[95635]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Dec 13 03:45:37 compute-0 ceph-mds[95635]: main not setting numa affinity
Dec 13 03:45:37 compute-0 ceph-mds[95635]: pidfile_write: ignore empty --pid-file
Dec 13 03:45:37 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mds-cephfs-compute-0-bszvvn[95630]: starting mds.cephfs.compute-0.bszvvn at 
Dec 13 03:45:37 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=0/0 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:37 compute-0 sudo[95289]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e34 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:37 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn Updating MDS map to version 2 from mon.0
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev a39aec05-2a1e-45e4-a3c3-f7a8bd7120cf (Updating mds.cephfs deployment (+1 -> 1))
Dec 13 03:45:37 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event a39aec05-2a1e-45e4-a3c3-f7a8bd7120cf (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Dec 13 03:45:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:37 compute-0 sudo[95699]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuqytgfswyylavwtnptptyttolgkhrlk ; /usr/bin/python3'
Dec 13 03:45:37 compute-0 sudo[95699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:37 compute-0 sudo[95700]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:45:37 compute-0 sudo[95700]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:37 compute-0 sudo[95700]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:37 compute-0 sudo[95727]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:37 compute-0 sudo[95727]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:37 compute-0 sudo[95727]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:37 compute-0 sudo[95752]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:45:37 compute-0 sudo[95752]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:37 compute-0 python3[95719]: ansible-ansible.legacy.async_status Invoked with jid=j692368054011.95288 mode=cleanup _async_dir=/root/.ansible_async
Dec 13 03:45:37 compute-0 sudo[95699]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Dec 13 03:45:38 compute-0 ceph-mon[75071]: pgmap v81: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='client.14251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:38 compute-0 ceph-mon[75071]: osdmap e34: 3 total, 3 up, 3 in
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2456594442' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2456594442' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Dec 13 03:45:38 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 35 pg[8.0( empty local-lis/les=34/35 n=0 ec=34/34 lis/c=0/0 les/c/f=0/0/0 sis=34) [1] r=0 lpr=34 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:38 compute-0 podman[96383]: 2025-12-13 03:45:38.241700187 +0000 UTC m=+0.050069789 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e3 new map
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2025-12-13T03:45:38:305886+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T03:45:25.573242+0000
                                           modified        2025-12-13T03:45:25.573242+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.bszvvn{-1:14253} state up:standby seq 1 addr [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] compat {c=[1],r=[1],i=[1fff]}]
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn Updating MDS map to version 3 from mon.0
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn Monitors have assigned me to become a standby
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] up:boot
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] as mds.0
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bszvvn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : Cluster is now healthy
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.bszvvn"} v 0)
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.bszvvn"} : dispatch
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e3 all = 0
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e4 new map
Dec 13 03:45:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2025-12-13T03:45:38:313731+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T03:45:25.573242+0000
                                           modified        2025-12-13T03:45:38.313725+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.bszvvn{0:14253} state up:creating seq 1 addr [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn Updating MDS map to version 4 from mon.0
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x1
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x100
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x600
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bszvvn=up:creating}
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x601
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x602
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x603
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x604
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x605
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x606
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x607
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x608
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.cache creating system inode with ino:0x609
Dec 13 03:45:38 compute-0 ceph-mds[95635]: mds.0.4 creating_done
Dec 13 03:45:38 compute-0 sudo[96437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flfnhagcmrtrmoutpxtjjqnctbdtmonr ; /usr/bin/python3'
Dec 13 03:45:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.bszvvn is now active in filesystem cephfs as rank 0
Dec 13 03:45:38 compute-0 sudo[96437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:38 compute-0 podman[96383]: 2025-12-13 03:45:38.348108605 +0000 UTC m=+0.156478217 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:45:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v84: 8 pgs: 1 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 5 op/s
Dec 13 03:45:38 compute-0 python3[96440]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:38 compute-0 podman[96482]: 2025-12-13 03:45:38.526985228 +0000 UTC m=+0.036503087 container create dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b (image=quay.io/ceph/ceph:v20, name=zen_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:38 compute-0 systemd[1]: Started libpod-conmon-dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b.scope.
Dec 13 03:45:38 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c18278042873d1d6d956c266f728aaf4f67f9e90d154693b70797cd5683c75/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3c18278042873d1d6d956c266f728aaf4f67f9e90d154693b70797cd5683c75/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:38 compute-0 podman[96482]: 2025-12-13 03:45:38.600716489 +0000 UTC m=+0.110234368 container init dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b (image=quay.io/ceph/ceph:v20, name=zen_banzai, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:38 compute-0 podman[96482]: 2025-12-13 03:45:38.509727588 +0000 UTC m=+0.019245467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:38 compute-0 podman[96482]: 2025-12-13 03:45:38.607377492 +0000 UTC m=+0.116895351 container start dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b (image=quay.io/ceph/ceph:v20, name=zen_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Dec 13 03:45:38 compute-0 podman[96482]: 2025-12-13 03:45:38.610856783 +0000 UTC m=+0.120374662 container attach dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b (image=quay.io/ceph/ceph:v20, name=zen_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:45:38 compute-0 sudo[95752]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:45:39 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Dec 13 03:45:39 compute-0 zen_banzai[96515]: 
Dec 13 03:45:39 compute-0 zen_banzai[96515]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Dec 13 03:45:39 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2456594442' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Dec 13 03:45:39 compute-0 ceph-mon[75071]: osdmap e35: 3 total, 3 up, 3 in
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mds.? [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] up:boot
Dec 13 03:45:39 compute-0 ceph-mon[75071]: daemon mds.cephfs.compute-0.bszvvn assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: Cluster is now healthy
Dec 13 03:45:39 compute-0 ceph-mon[75071]: fsmap cephfs:0 1 up:standby
Dec 13 03:45:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.bszvvn"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: fsmap cephfs:1 {0=cephfs.compute-0.bszvvn=up:creating}
Dec 13 03:45:39 compute-0 ceph-mon[75071]: daemon mds.cephfs.compute-0.bszvvn is now active in filesystem cephfs as rank 0
Dec 13 03:45:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:39 compute-0 systemd[1]: libpod-dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b.scope: Deactivated successfully.
Dec 13 03:45:39 compute-0 podman[96482]: 2025-12-13 03:45:39.044907345 +0000 UTC m=+0.554425224 container died dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b (image=quay.io/ceph/ceph:v20, name=zen_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3c18278042873d1d6d956c266f728aaf4f67f9e90d154693b70797cd5683c75-merged.mount: Deactivated successfully.
Dec 13 03:45:39 compute-0 podman[96482]: 2025-12-13 03:45:39.085675563 +0000 UTC m=+0.595193432 container remove dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b (image=quay.io/ceph/ceph:v20, name=zen_banzai, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:45:39 compute-0 sudo[96645]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:39 compute-0 sudo[96645]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:39 compute-0 systemd[1]: libpod-conmon-dee786f9dce5896da8caab8a76289ca8cf39fe1578a012da1c96ab4a511e6f9b.scope: Deactivated successfully.
Dec 13 03:45:39 compute-0 sudo[96645]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:39 compute-0 sudo[96437]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:39 compute-0 sudo[96682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:45:39 compute-0 sudo[96682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e5 new map
Dec 13 03:45:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2025-12-13T03:45:39:317096+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2025-12-13T03:45:25.573242+0000
                                           modified        2025-12-13T03:45:39.317064+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14253}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14253 members: 14253
                                           [mds.cephfs.compute-0.bszvvn{0:14253} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] up:active
Dec 13 03:45:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.bszvvn=up:active}
Dec 13 03:45:39 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn Updating MDS map to version 5 from mon.0
Dec 13 03:45:39 compute-0 ceph-mds[95635]: mds.0.4 handle_mds_map I am now mds.0.4
Dec 13 03:45:39 compute-0 ceph-mds[95635]: mds.0.4 handle_mds_map state change up:creating --> up:active
Dec 13 03:45:39 compute-0 ceph-mds[95635]: mds.0.4 recovery_done -- successful recovery!
Dec 13 03:45:39 compute-0 ceph-mds[95635]: mds.0.4 active_start
Dec 13 03:45:39 compute-0 podman[96723]: 2025-12-13 03:45:39.447548208 +0000 UTC m=+0.037152295 container create 5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:45:39 compute-0 systemd[1]: Started libpod-conmon-5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e.scope.
Dec 13 03:45:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:39 compute-0 podman[96723]: 2025-12-13 03:45:39.513226408 +0000 UTC m=+0.102830535 container init 5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:45:39 compute-0 podman[96723]: 2025-12-13 03:45:39.519361755 +0000 UTC m=+0.108965852 container start 5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_khorana, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:39 compute-0 wizardly_khorana[96740]: 167 167
Dec 13 03:45:39 compute-0 systemd[1]: libpod-5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e.scope: Deactivated successfully.
Dec 13 03:45:39 compute-0 podman[96723]: 2025-12-13 03:45:39.523023351 +0000 UTC m=+0.112627448 container attach 5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_khorana, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:39 compute-0 podman[96723]: 2025-12-13 03:45:39.523270798 +0000 UTC m=+0.112874905 container died 5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_khorana, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:45:39 compute-0 podman[96723]: 2025-12-13 03:45:39.430991429 +0000 UTC m=+0.020595556 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3731f15c9c827b4385eaf49007d651d11ed4360e376abb95eefe59586f1a5c9-merged.mount: Deactivated successfully.
Dec 13 03:45:39 compute-0 podman[96723]: 2025-12-13 03:45:39.55755471 +0000 UTC m=+0.147158827 container remove 5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_khorana, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:39 compute-0 systemd[1]: libpod-conmon-5cd4d2cca9327e5d7f9ebb06e70a14ec4ff1250cb87b5b73a98c754a5853b61e.scope: Deactivated successfully.
Dec 13 03:45:39 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=0/0 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:39 compute-0 podman[96764]: 2025-12-13 03:45:39.703034926 +0000 UTC m=+0.042799608 container create 5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:39 compute-0 systemd[1]: Started libpod-conmon-5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0.scope.
Dec 13 03:45:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbca7cbedbccecb29d39ef5b2d2cf9ecff3618c0976147d4b1cc16b5fe76d1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbca7cbedbccecb29d39ef5b2d2cf9ecff3618c0976147d4b1cc16b5fe76d1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbca7cbedbccecb29d39ef5b2d2cf9ecff3618c0976147d4b1cc16b5fe76d1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbca7cbedbccecb29d39ef5b2d2cf9ecff3618c0976147d4b1cc16b5fe76d1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edbca7cbedbccecb29d39ef5b2d2cf9ecff3618c0976147d4b1cc16b5fe76d1e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:39 compute-0 podman[96764]: 2025-12-13 03:45:39.687622501 +0000 UTC m=+0.027387193 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:39 compute-0 podman[96764]: 2025-12-13 03:45:39.791855545 +0000 UTC m=+0.131620237 container init 5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 03:45:39 compute-0 podman[96764]: 2025-12-13 03:45:39.798362883 +0000 UTC m=+0.138127565 container start 5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:45:39 compute-0 podman[96764]: 2025-12-13 03:45:39.801819394 +0000 UTC m=+0.141584076 container attach 5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:39 compute-0 sudo[96809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xownnheqhhgsedyjodzdjubqgbpencmr ; /usr/bin/python3'
Dec 13 03:45:39 compute-0 sudo[96809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:40 compute-0 python3[96811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Dec 13 03:45:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 03:45:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Dec 13 03:45:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Dec 13 03:45:40 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 37 pg[9.0( empty local-lis/les=36/37 n=0 ec=36/36 lis/c=0/0 les/c/f=0/0/0 sis=36) [1] r=0 lpr=36 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:40 compute-0 ceph-mon[75071]: pgmap v84: 8 pgs: 1 unknown, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s wr, 5 op/s
Dec 13 03:45:40 compute-0 ceph-mon[75071]: from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:40 compute-0 ceph-mon[75071]: osdmap e36: 3 total, 3 up, 3 in
Dec 13 03:45:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Dec 13 03:45:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:40 compute-0 ceph-mon[75071]: mds.? [v2:192.168.122.100:6814/4289527546,v1:192.168.122.100:6815/4289527546] up:active
Dec 13 03:45:40 compute-0 ceph-mon[75071]: fsmap cephfs:1 {0=cephfs.compute-0.bszvvn=up:active}
Dec 13 03:45:40 compute-0 podman[96814]: 2025-12-13 03:45:40.066272851 +0000 UTC m=+0.041650896 container create b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e (image=quay.io/ceph/ceph:v20, name=sad_wing, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:45:40 compute-0 systemd[1]: Started libpod-conmon-b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e.scope.
Dec 13 03:45:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eefc668262531f31094141e9ed6775c6be1d5ad54978bf1e7f46e0cd54ebfd8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eefc668262531f31094141e9ed6775c6be1d5ad54978bf1e7f46e0cd54ebfd8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:40 compute-0 podman[96814]: 2025-12-13 03:45:40.135897904 +0000 UTC m=+0.111275969 container init b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e (image=quay.io/ceph/ceph:v20, name=sad_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:40 compute-0 podman[96814]: 2025-12-13 03:45:40.046322534 +0000 UTC m=+0.021700599 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:40 compute-0 podman[96814]: 2025-12-13 03:45:40.143184205 +0000 UTC m=+0.118562250 container start b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e (image=quay.io/ceph/ceph:v20, name=sad_wing, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:40 compute-0 podman[96814]: 2025-12-13 03:45:40.146743338 +0000 UTC m=+0.122121403 container attach b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e (image=quay.io/ceph/ceph:v20, name=sad_wing, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:45:40 compute-0 dreamy_mirzakhani[96781]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:45:40 compute-0 dreamy_mirzakhani[96781]: --> All data devices are unavailable
Dec 13 03:45:40 compute-0 systemd[1]: libpod-5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0.scope: Deactivated successfully.
Dec 13 03:45:40 compute-0 podman[96764]: 2025-12-13 03:45:40.305970433 +0000 UTC m=+0.645735115 container died 5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-edbca7cbedbccecb29d39ef5b2d2cf9ecff3618c0976147d4b1cc16b5fe76d1e-merged.mount: Deactivated successfully.
Dec 13 03:45:40 compute-0 podman[96764]: 2025-12-13 03:45:40.349694417 +0000 UTC m=+0.689459099 container remove 5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:45:40 compute-0 systemd[1]: libpod-conmon-5f3f0ca112ca451a8b0410265e8d4b4d8c945038d1d1da531368099b8223e2e0.scope: Deactivated successfully.
Dec 13 03:45:40 compute-0 sudo[96682]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v87: 9 pgs: 1 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 13 03:45:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:45:40
Dec 13 03:45:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:45:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Some PGs (0.111111) are unknown; try again later
Dec 13 03:45:40 compute-0 sudo[96877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:40 compute-0 sudo[96877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:40 compute-0 sudo[96877]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:40 compute-0 sudo[96902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:45:40 compute-0 sudo[96902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:40 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} v 0)
Dec 13 03:45:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 03:45:40 compute-0 sad_wing[96833]: 
Dec 13 03:45:40 compute-0 sad_wing[96833]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Dec 13 03:45:40 compute-0 systemd[1]: libpod-b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e.scope: Deactivated successfully.
Dec 13 03:45:40 compute-0 podman[96814]: 2025-12-13 03:45:40.573745516 +0000 UTC m=+0.549123551 container died b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e (image=quay.io/ceph/ceph:v20, name=sad_wing, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:45:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Dec 13 03:45:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Dec 13 03:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eefc668262531f31094141e9ed6775c6be1d5ad54978bf1e7f46e0cd54ebfd8-merged.mount: Deactivated successfully.
Dec 13 03:45:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Dec 13 03:45:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Dec 13 03:45:41 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 03:45:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Dec 13 03:45:41 compute-0 ceph-mon[75071]: osdmap e37: 3 total, 3 up, 3 in
Dec 13 03:45:41 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 03:45:41 compute-0 podman[96814]: 2025-12-13 03:45:41.176968401 +0000 UTC m=+1.152346446 container remove b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e (image=quay.io/ceph/ceph:v20, name=sad_wing, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:41 compute-0 sudo[96809]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:41 compute-0 systemd[1]: libpod-conmon-b8f2d0ae0d29b3b3c47323d964ac8d2329a3012f5b502e00336ead906efca46e.scope: Deactivated successfully.
Dec 13 03:45:41 compute-0 ansible-async_wrapper.py[95316]: Done in kid B.
Dec 13 03:45:41 compute-0 podman[96955]: 2025-12-13 03:45:41.303812188 +0000 UTC m=+0.047607157 container create e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_galileo, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:41 compute-0 systemd[1]: Started libpod-conmon-e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2.scope.
Dec 13 03:45:41 compute-0 podman[96955]: 2025-12-13 03:45:41.28344315 +0000 UTC m=+0.027238159 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:41 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=0/0 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:41 compute-0 podman[96955]: 2025-12-13 03:45:41.395956763 +0000 UTC m=+0.139751762 container init e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:45:41 compute-0 podman[96955]: 2025-12-13 03:45:41.401092792 +0000 UTC m=+0.144887761 container start e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 13 03:45:41 compute-0 lucid_galileo[96972]: 167 167
Dec 13 03:45:41 compute-0 podman[96955]: 2025-12-13 03:45:41.405443447 +0000 UTC m=+0.149238436 container attach e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_galileo, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:41 compute-0 systemd[1]: libpod-e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2.scope: Deactivated successfully.
Dec 13 03:45:41 compute-0 conmon[96972]: conmon e2232b14ff7ec04d331c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2.scope/container/memory.events
Dec 13 03:45:41 compute-0 podman[96955]: 2025-12-13 03:45:41.406586651 +0000 UTC m=+0.150381620 container died e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 03:45:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e5179b91fd1d0d93c8f50a8f194d385de3d473947c25afcae9095d784abc3ed-merged.mount: Deactivated successfully.
Dec 13 03:45:41 compute-0 podman[96955]: 2025-12-13 03:45:41.448623856 +0000 UTC m=+0.192418825 container remove e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_galileo, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:45:41 compute-0 systemd[1]: libpod-conmon-e2232b14ff7ec04d331cad4fb88710977b0d22146fc3bdef6496037bb39710f2.scope: Deactivated successfully.
Dec 13 03:45:41 compute-0 podman[96994]: 2025-12-13 03:45:41.605121421 +0000 UTC m=+0.047761582 container create 05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:41 compute-0 systemd[1]: Started libpod-conmon-05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0.scope.
Dec 13 03:45:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30eadfef53f7dd0445556d13619c2685da0cae29c74f07b9167a7083d89b95b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30eadfef53f7dd0445556d13619c2685da0cae29c74f07b9167a7083d89b95b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30eadfef53f7dd0445556d13619c2685da0cae29c74f07b9167a7083d89b95b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f30eadfef53f7dd0445556d13619c2685da0cae29c74f07b9167a7083d89b95b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:41 compute-0 podman[96994]: 2025-12-13 03:45:41.679805541 +0000 UTC m=+0.122445722 container init 05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:45:41 compute-0 podman[96994]: 2025-12-13 03:45:41.58878978 +0000 UTC m=+0.031429971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:41 compute-0 podman[96994]: 2025-12-13 03:45:41.687238037 +0000 UTC m=+0.129878198 container start 05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:41 compute-0 podman[96994]: 2025-12-13 03:45:41.691116078 +0000 UTC m=+0.133756269 container attach 05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:41 compute-0 gracious_liskov[97011]: {
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:     "0": [
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:         {
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "devices": [
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "/dev/loop3"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             ],
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_name": "ceph_lv0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_size": "21470642176",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "name": "ceph_lv0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "tags": {
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.crush_device_class": "",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.encrypted": "0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osd_id": "0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.type": "block",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.vdo": "0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.with_tpm": "0"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             },
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "type": "block",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "vg_name": "ceph_vg0"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:         }
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:     ],
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:     "1": [
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:         {
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "devices": [
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "/dev/loop4"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             ],
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_name": "ceph_lv1",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_size": "21470642176",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "name": "ceph_lv1",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "tags": {
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.crush_device_class": "",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.encrypted": "0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osd_id": "1",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.type": "block",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.vdo": "0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.with_tpm": "0"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             },
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "type": "block",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "vg_name": "ceph_vg1"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:         }
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:     ],
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:     "2": [
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:         {
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "devices": [
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "/dev/loop5"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             ],
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_name": "ceph_lv2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_size": "21470642176",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "name": "ceph_lv2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "tags": {
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.crush_device_class": "",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.encrypted": "0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osd_id": "2",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.type": "block",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.vdo": "0",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:                 "ceph.with_tpm": "0"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             },
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "type": "block",
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:             "vg_name": "ceph_vg2"
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:         }
Dec 13 03:45:41 compute-0 gracious_liskov[97011]:     ]
Dec 13 03:45:41 compute-0 gracious_liskov[97011]: }
Dec 13 03:45:41 compute-0 systemd[1]: libpod-05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0.scope: Deactivated successfully.
Dec 13 03:45:41 compute-0 podman[96994]: 2025-12-13 03:45:41.991435053 +0000 UTC m=+0.434075214 container died 05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-f30eadfef53f7dd0445556d13619c2685da0cae29c74f07b9167a7083d89b95b-merged.mount: Deactivated successfully.
Dec 13 03:45:42 compute-0 podman[96994]: 2025-12-13 03:45:42.040322967 +0000 UTC m=+0.482963138 container remove 05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:42 compute-0 systemd[1]: libpod-conmon-05c8b1debec1a32a2bbe91a37f6bcf050ced627dba57ef10490590df6aff5fe0.scope: Deactivated successfully.
Dec 13 03:45:42 compute-0 sudo[96902]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:42 compute-0 sudo[97058]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqipzbbfbirrpdwoedmtccvruzilehki ; /usr/bin/python3'
Dec 13 03:45:42 compute-0 sudo[97058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Dec 13 03:45:42 compute-0 sudo[97052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:42 compute-0 sudo[97052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 03:45:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Dec 13 03:45:42 compute-0 sudo[97052]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Dec 13 03:45:42 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 39 pg[10.0( empty local-lis/les=38/39 n=0 ec=38/38 lis/c=0/0 les/c/f=0/0/0 sis=38) [2] r=0 lpr=38 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:42 compute-0 ceph-mon[75071]: pgmap v87: 9 pgs: 1 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 5.2 KiB/s wr, 14 op/s
Dec 13 03:45:42 compute-0 ceph-mon[75071]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:42 compute-0 ceph-mon[75071]: osdmap e38: 3 total, 3 up, 3 in
Dec 13 03:45:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Dec 13 03:45:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Dec 13 03:45:42 compute-0 ceph-mon[75071]: osdmap e39: 3 total, 3 up, 3 in
Dec 13 03:45:42 compute-0 sudo[97082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:45:42 compute-0 sudo[97082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 7.296785321204296e-07 of space, bias 4.0, pg target 0.0008756142385445154 quantized to 16 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Dec 13 03:45:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:45:42 compute-0 python3[97079]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [progress INFO root] Writing back 5 completed events
Dec 13 03:45:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:45:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:42 compute-0 podman[97109]: 2025-12-13 03:45:42.364867113 +0000 UTC m=+0.047424303 container create 550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d (image=quay.io/ceph/ceph:v20, name=nifty_brattain, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:45:42 compute-0 systemd[1]: Started libpod-conmon-550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d.scope.
Dec 13 03:45:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v90: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Dec 13 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb5ea4eaa74c9d1dc3d505adfebc244f833fa5efb6e2e2430a9adb25880658/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19bb5ea4eaa74c9d1dc3d505adfebc244f833fa5efb6e2e2430a9adb25880658/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:42 compute-0 podman[97109]: 2025-12-13 03:45:42.34646787 +0000 UTC m=+0.029025080 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:42 compute-0 podman[97109]: 2025-12-13 03:45:42.495810328 +0000 UTC m=+0.178367508 container init 550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d (image=quay.io/ceph/ceph:v20, name=nifty_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:45:42 compute-0 podman[97109]: 2025-12-13 03:45:42.502219754 +0000 UTC m=+0.184776934 container start 550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d (image=quay.io/ceph/ceph:v20, name=nifty_brattain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:42 compute-0 podman[97109]: 2025-12-13 03:45:42.522887662 +0000 UTC m=+0.205444852 container attach 550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d (image=quay.io/ceph/ceph:v20, name=nifty_brattain, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:45:42 compute-0 podman[97141]: 2025-12-13 03:45:42.621994657 +0000 UTC m=+0.063233569 container create fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:45:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:42 compute-0 systemd[1]: Started libpod-conmon-fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9.scope.
Dec 13 03:45:42 compute-0 podman[97141]: 2025-12-13 03:45:42.579590681 +0000 UTC m=+0.020829613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:42 compute-0 podman[97141]: 2025-12-13 03:45:42.687653107 +0000 UTC m=+0.128892029 container init fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:45:42 compute-0 podman[97141]: 2025-12-13 03:45:42.693857636 +0000 UTC m=+0.135096548 container start fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:45:42 compute-0 pensive_yonath[97177]: 167 167
Dec 13 03:45:42 compute-0 podman[97141]: 2025-12-13 03:45:42.697617524 +0000 UTC m=+0.138856486 container attach fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 03:45:42 compute-0 systemd[1]: libpod-fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9.scope: Deactivated successfully.
Dec 13 03:45:42 compute-0 podman[97141]: 2025-12-13 03:45:42.705731789 +0000 UTC m=+0.146970701 container died fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ed35e0ae0961fadcb11fb3dec82492cf9360c9820f608ae0d85b65f437d31f7-merged.mount: Deactivated successfully.
Dec 13 03:45:42 compute-0 podman[97141]: 2025-12-13 03:45:42.744158331 +0000 UTC m=+0.185397243 container remove fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_yonath, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 03:45:42 compute-0 systemd[1]: libpod-conmon-fa2cf5e1d0da3394833ae911e27f42f90abd94accc294efc49fefe1b9fd531d9.scope: Deactivated successfully.
Dec 13 03:45:42 compute-0 podman[97201]: 2025-12-13 03:45:42.887567887 +0000 UTC m=+0.040444420 container create b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:45:42 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:42 compute-0 nifty_brattain[97125]: 
Dec 13 03:45:42 compute-0 nifty_brattain[97125]: [{"container_id": "6b718116b43a", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.19%", "created": "2025-12-13T03:44:15.124121Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-12-13T03:44:15.188908Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004290Z", "memory_usage": 7795113, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2025-12-13T03:44:15.029173Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@crash.compute-0", "version": "20.2.0"}, {"container_id": "472bdf145450", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "8.28%", "created": "2025-12-13T03:45:37.608373Z", "daemon_id": "cephfs.compute-0.bszvvn", "daemon_name": "mds.cephfs.compute-0.bszvvn", "daemon_type": "mds", "events": ["2025-12-13T03:45:37.664921Z daemon:mds.cephfs.compute-0.bszvvn [INFO] \"Deployed mds.cephfs.compute-0.bszvvn on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004643Z", "memory_usage": 15938355, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2025-12-13T03:45:37.519509Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@mds.cephfs.compute-0.bszvvn", "version": "20.2.0"}, {"container_id": "d213c0a51889", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "16.74%", "created": "2025-12-13T03:43:17.465503Z", "daemon_id": "compute-0.gsxkyu", "daemon_name": "mgr.compute-0.gsxkyu", "daemon_type": "mgr", "events": ["2025-12-13T03:44:20.904503Z daemon:mgr.compute-0.gsxkyu [INFO] \"Reconfigured mgr.compute-0.gsxkyu on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004222Z", "memory_usage": 547251814, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-12-13T03:43:17.360221Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@mgr.compute-0.gsxkyu", "version": "20.2.0"}, {"container_id": "8aaf8457121f", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.43%", "created": "2025-12-13T03:43:11.331336Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-12-13T03:44:19.981482Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004121Z", "memory_request": 2147483648, "memory_usage": 41785753, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2025-12-13T03:43:14.698555Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@mon.compute-0", "version": "20.2.0"}, {"container_id": "9d036f4f40ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.75%", "created": "2025-12-13T03:44:37.589735Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-12-13T03:44:37.651027Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004357Z", "memory_request": 4294967296, "memory_usage": 58143539, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T03:44:37.480599Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@osd.0", "version": "20.2.0"}, {"container_id": "e1d4fc0ce990", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.80%", "created": "2025-12-13T03:44:42.676872Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2025-12-13T03:44:42.784689Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004423Z", "memory_request": 4294967296, "memory_usage": 58898513, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T03:44:42.541950Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@osd.1", "version": "20.2.0"}, {"container_id": "404dfe1b382d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.92%", "created": "2025-12-13T03:44:50.031071Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-12-13T03:44:50.178314Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004489Z", "memory_request": 4294967296, "memory_usage": 58961428, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-12-13T03:44:48.957903Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@osd.2", "version": "20.2.0"}, {"container_id": "5dc4e803410d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "5.38%", "created": "2025-12-13T03:45:35.947326Z", "daemon_id": "rgw.compute-0.gnpexe", "daemon_name": "rgw.rgw.compute-0.gnpexe", "daemon_type": "rgw", "events": ["2025-12-13T03:45:36.004309Z daemon:rgw.rgw.compute-0.gnpexe [INFO] \"Deployed rgw.rgw.compute-0.gnpexe on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2025-12-13T03:45:39.004555Z", "memory_usage": 54536437, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2025-12-13T03:45:35.862875Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32@rgw.rgw.compute-0.gnpexe", "version": "20.2.0"}]
Dec 13 03:45:42 compute-0 podman[97109]: 2025-12-13 03:45:42.919547873 +0000 UTC m=+0.602105043 container died 550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d (image=quay.io/ceph/ceph:v20, name=nifty_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:45:42 compute-0 systemd[1]: Started libpod-conmon-b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde.scope.
Dec 13 03:45:42 compute-0 systemd[1]: libpod-550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d.scope: Deactivated successfully.
Dec 13 03:45:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-19bb5ea4eaa74c9d1dc3d505adfebc244f833fa5efb6e2e2430a9adb25880658-merged.mount: Deactivated successfully.
Dec 13 03:45:42 compute-0 podman[97109]: 2025-12-13 03:45:42.953265678 +0000 UTC m=+0.635822858 container remove 550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d (image=quay.io/ceph/ceph:v20, name=nifty_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:45:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:42 compute-0 systemd[1]: libpod-conmon-550d749646b6320e24be930a8c6eb2652b19c2f219d2d57290902e897bd5b40d.scope: Deactivated successfully.
Dec 13 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa92b36e13effc1505286bb1451ba24cbd2d79c633579b175cd7945b323b9dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa92b36e13effc1505286bb1451ba24cbd2d79c633579b175cd7945b323b9dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa92b36e13effc1505286bb1451ba24cbd2d79c633579b175cd7945b323b9dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8aa92b36e13effc1505286bb1451ba24cbd2d79c633579b175cd7945b323b9dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:42 compute-0 podman[97201]: 2025-12-13 03:45:42.868850677 +0000 UTC m=+0.021727240 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:42 compute-0 podman[97201]: 2025-12-13 03:45:42.9682211 +0000 UTC m=+0.121097643 container init b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:42 compute-0 sudo[97058]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:42 compute-0 podman[97201]: 2025-12-13 03:45:42.977456897 +0000 UTC m=+0.130333430 container start b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:42 compute-0 podman[97201]: 2025-12-13 03:45:42.980451433 +0000 UTC m=+0.133327966 container attach b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:43 compute-0 rsyslogd[1004]: message too long (8842) with configured size 8096, begin of message is: [{"container_id": "6b718116b43a", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec 13 03:45:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Dec 13 03:45:43 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Dec 13 03:45:43 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Dec 13 03:45:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Dec 13 03:45:43 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 1ea94ac3-8569-496d-aa93-05c8263c5653 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 03:45:43 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 03:45:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:43 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:43 compute-0 ceph-mon[75071]: osdmap e40: 3 total, 3 up, 3 in
Dec 13 03:45:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Dec 13 03:45:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:43 compute-0 ceph-mds[95635]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 03:45:43 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mds-cephfs-compute-0-bszvvn[95630]: 2025-12-13T03:45:43.326+0000 7f5852e1a640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Dec 13 03:45:43 compute-0 lvm[97309]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:43 compute-0 lvm[97311]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:43 compute-0 lvm[97309]: VG ceph_vg0 finished
Dec 13 03:45:43 compute-0 lvm[97311]: VG ceph_vg1 finished
Dec 13 03:45:43 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=0/0 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:43 compute-0 lvm[97313]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:45:43 compute-0 lvm[97313]: VG ceph_vg2 finished
Dec 13 03:45:43 compute-0 lvm[97314]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:43 compute-0 lvm[97314]: VG ceph_vg0 finished
Dec 13 03:45:43 compute-0 vigorous_galileo[97226]: {}
Dec 13 03:45:43 compute-0 systemd[1]: libpod-b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde.scope: Deactivated successfully.
Dec 13 03:45:43 compute-0 systemd[1]: libpod-b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde.scope: Consumed 1.255s CPU time.
Dec 13 03:45:43 compute-0 podman[97318]: 2025-12-13 03:45:43.819435616 +0000 UTC m=+0.024465679 container died b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8aa92b36e13effc1505286bb1451ba24cbd2d79c633579b175cd7945b323b9dd-merged.mount: Deactivated successfully.
Dec 13 03:45:43 compute-0 sudo[97351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezxjrwrhnzwtllgdcubiuvcgtoddrlpl ; /usr/bin/python3'
Dec 13 03:45:43 compute-0 sudo[97351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:43 compute-0 podman[97318]: 2025-12-13 03:45:43.85865595 +0000 UTC m=+0.063686033 container remove b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:43 compute-0 systemd[1]: libpod-conmon-b3d5ba5bee60cc9a74b73fd81207fca1c492c70180dbfd0f16030c17f7438bde.scope: Deactivated successfully.
Dec 13 03:45:43 compute-0 sudo[97082]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:43 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:43 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:44 compute-0 python3[97357]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:44 compute-0 sudo[97358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:45:44 compute-0 sudo[97358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:44 compute-0 sudo[97358]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:44 compute-0 podman[97379]: 2025-12-13 03:45:44.061808564 +0000 UTC m=+0.043207640 container create 5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47 (image=quay.io/ceph/ceph:v20, name=mystifying_nash, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:44 compute-0 systemd[1]: Started libpod-conmon-5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47.scope.
Dec 13 03:45:44 compute-0 sudo[97396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:44 compute-0 sudo[97396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:44 compute-0 sudo[97396]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9242b27f2d07dc23fa2089ac70f286b7372ff08dd2142943934e354264740cb6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9242b27f2d07dc23fa2089ac70f286b7372ff08dd2142943934e354264740cb6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:44 compute-0 podman[97379]: 2025-12-13 03:45:44.04194103 +0000 UTC m=+0.023340136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:44 compute-0 podman[97379]: 2025-12-13 03:45:44.145288928 +0000 UTC m=+0.126688024 container init 5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47 (image=quay.io/ceph/ceph:v20, name=mystifying_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Dec 13 03:45:44 compute-0 podman[97379]: 2025-12-13 03:45:44.153521567 +0000 UTC m=+0.134920643 container start 5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47 (image=quay.io/ceph/ceph:v20, name=mystifying_nash, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:45:44 compute-0 podman[97379]: 2025-12-13 03:45:44.156475652 +0000 UTC m=+0.137874728 container attach 5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47 (image=quay.io/ceph/ceph:v20, name=mystifying_nash, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Dec 13 03:45:44 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev bb38c43d-4439-4a3c-9d5f-b2822e2fffd8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 03:45:44 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 41 pg[11.0( empty local-lis/les=40/41 n=0 ec=40/40 lis/c=0/0 les/c/f=0/0/0 sis=40) [1] r=0 lpr=40 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:44 compute-0 sudo[97427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:45:44 compute-0 sudo[97427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: pgmap v90: 10 pgs: 2 unknown, 8 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.7 KiB/s wr, 3 op/s
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:44 compute-0 ceph-mon[75071]: osdmap e41: 3 total, 3 up, 3 in
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Dec 13 03:45:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:44 compute-0 podman[97517]: 2025-12-13 03:45:44.570205147 +0000 UTC m=+0.044724545 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Dec 13 03:45:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668180549' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:45:44 compute-0 mystifying_nash[97423]: 
Dec 13 03:45:44 compute-0 mystifying_nash[97423]: {"fsid":"437a9f04-06b7-56e3-8a4b-f52a1199dd32","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":149,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":41,"num_osds":3,"num_up_osds":3,"osd_up_since":1765597498,"num_in_osds":3,"osd_in_since":1765597469,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":8},{"state_name":"unknown","count":2}],"num_pgs":10,"num_pools":10,"num_objects":29,"data_bytes":463390,"bytes_used":83955712,"bytes_avail":64327970816,"bytes_total":64411926528,"unknown_pgs_ratio":0.20000000298023224,"read_bytes_sec":1279,"write_bytes_sec":1791,"read_op_per_sec":0,"write_op_per_sec":2},"fsmap":{"epoch":5,"btime":"2025-12-13T03:45:39:317096+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.bszvvn","status":"up:active","gid":14253}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-12-13T03:44:42.424010+0000","services":{}},"progress_events":{"dab29819-9889-41b8-9ccd-0e48e340257c":{"message":"Global Recovery Event (5s)\n      [======================......] (remaining: 1s)","progress":0.80000001192092896,"add_to_ceph_s":true}}}
Dec 13 03:45:44 compute-0 podman[97517]: 2025-12-13 03:45:44.669449816 +0000 UTC m=+0.143969184 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:44 compute-0 systemd[1]: libpod-5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47.scope: Deactivated successfully.
Dec 13 03:45:44 compute-0 podman[97379]: 2025-12-13 03:45:44.684207544 +0000 UTC m=+0.665606630 container died 5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47 (image=quay.io/ceph/ceph:v20, name=mystifying_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:45:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-9242b27f2d07dc23fa2089ac70f286b7372ff08dd2142943934e354264740cb6-merged.mount: Deactivated successfully.
Dec 13 03:45:44 compute-0 podman[97379]: 2025-12-13 03:45:44.720530994 +0000 UTC m=+0.701930060 container remove 5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47 (image=quay.io/ceph/ceph:v20, name=mystifying_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:45:44 compute-0 systemd[1]: libpod-conmon-5564a3893228611d61aa3eed3881bd45b2cec29ee447e7092b861b6f9b911b47.scope: Deactivated successfully.
Dec 13 03:45:44 compute-0 sudo[97351]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Dec 13 03:45:45 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 405f74ad-d618-4af1-8c85-92fb96ec0e20 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1668180549' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2089543240' entity='client.rgw.rgw.compute-0.gnpexe' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:45 compute-0 ceph-mon[75071]: osdmap e42: 3 total, 3 up, 3 in
Dec 13 03:45:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:45 compute-0 sudo[97427]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:45 compute-0 radosgw[95161]: v1 topic migration: starting v1 topic migration..
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:45 compute-0 radosgw[95161]: v1 topic migration: finished v1 topic migration
Dec 13 03:45:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:45:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:45 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=10.341436386s) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active pruub 65.527229309s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:45 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=19/20 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=10.341436386s) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown pruub 65.527229309s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:45 compute-0 radosgw[95161]: framework: beast
Dec 13 03:45:45 compute-0 radosgw[95161]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Dec 13 03:45:45 compute-0 radosgw[95161]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Dec 13 03:45:45 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=42 pruub=11.294068336s) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active pruub 72.015068054s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:45 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 42 pg[3.0( empty local-lis/les=20/21 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=42 pruub=11.294068336s) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown pruub 72.015068054s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:45 compute-0 radosgw[95161]: starting handler: beast
Dec 13 03:45:45 compute-0 sudo[97749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:45 compute-0 sudo[97749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:45 compute-0 radosgw[95161]: set uid:gid to 167:167 (ceph:ceph)
Dec 13 03:45:45 compute-0 sudo[97749]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:45 compute-0 radosgw[95161]: mgrc service_daemon_register rgw.14256 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.gnpexe,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025,kernel_version=5.14.0-648.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864300,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=36754e4a-24c9-4304-a23f-567ba4e9798a,zone_name=default,zonegroup_id=6d9cadec-81e7-40be-9364-391228e4b779,zonegroup_name=default}
Dec 13 03:45:45 compute-0 sudo[97777]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:45:45 compute-0 sudo[97777]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:45 compute-0 sudo[97825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-detqrwoqivkhuzcrabelkbkotnxuhlso ; /usr/bin/python3'
Dec 13 03:45:45 compute-0 sudo[97825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:45 compute-0 python3[97827]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:45 compute-0 podman[97840]: 2025-12-13 03:45:45.778671173 +0000 UTC m=+0.028521106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:45 compute-0 podman[97841]: 2025-12-13 03:45:45.786990924 +0000 UTC m=+0.029536045 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:45 compute-0 podman[97840]: 2025-12-13 03:45:45.945758315 +0000 UTC m=+0.195608248 container create 8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_shamir, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 03:45:45 compute-0 systemd[1]: Started libpod-conmon-8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92.scope.
Dec 13 03:45:45 compute-0 podman[97841]: 2025-12-13 03:45:45.986797522 +0000 UTC m=+0.229342843 container create 15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832 (image=quay.io/ceph/ceph:v20, name=intelligent_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:45:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:46 compute-0 systemd[1]: Started libpod-conmon-15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832.scope.
Dec 13 03:45:46 compute-0 podman[97840]: 2025-12-13 03:45:46.024725779 +0000 UTC m=+0.274575712 container init 8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:45:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d26f0822d91ee71675f18098990e1371ec73bd459fe25a41a3db35973a024c5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d26f0822d91ee71675f18098990e1371ec73bd459fe25a41a3db35973a024c5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:46 compute-0 podman[97840]: 2025-12-13 03:45:46.033165903 +0000 UTC m=+0.283015826 container start 8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:45:46 compute-0 podman[97840]: 2025-12-13 03:45:46.036709365 +0000 UTC m=+0.286559288 container attach 8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:46 compute-0 systemd[1]: libpod-8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92.scope: Deactivated successfully.
Dec 13 03:45:46 compute-0 eloquent_shamir[97870]: 167 167
Dec 13 03:45:46 compute-0 conmon[97870]: conmon 8572398febfeb2772f77 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92.scope/container/memory.events
Dec 13 03:45:46 compute-0 podman[97841]: 2025-12-13 03:45:46.04586701 +0000 UTC m=+0.288412141 container init 15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832 (image=quay.io/ceph/ceph:v20, name=intelligent_driscoll, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:45:46 compute-0 podman[97840]: 2025-12-13 03:45:46.048776785 +0000 UTC m=+0.298626708 container died 8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:45:46 compute-0 podman[97841]: 2025-12-13 03:45:46.051531954 +0000 UTC m=+0.294077065 container start 15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832 (image=quay.io/ceph/ceph:v20, name=intelligent_driscoll, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:46 compute-0 podman[97841]: 2025-12-13 03:45:46.064449437 +0000 UTC m=+0.306994528 container attach 15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832 (image=quay.io/ceph/ceph:v20, name=intelligent_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d585990b06801ace8fc6a439d737fb664162f0274a4a548d4e045054bcc7a47-merged.mount: Deactivated successfully.
Dec 13 03:45:46 compute-0 podman[97840]: 2025-12-13 03:45:46.100423998 +0000 UTC m=+0.350273931 container remove 8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_shamir, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:45:46 compute-0 systemd[1]: libpod-conmon-8572398febfeb2772f77a231c38e28cfc0fc3c44e314446fa5b4802c93bd2b92.scope: Deactivated successfully.
Dec 13 03:45:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Dec 13 03:45:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Dec 13 03:45:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Dec 13 03:45:46 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 12d56e5e-b5ca-4bed-8a42-e36c0f90cd94 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 03:45:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Dec 13 03:45:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=19/20 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=20/21 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1c( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.0( empty local-lis/les=42/43 n=0 ec=18/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.e( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.12( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.10( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.14( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.1a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 43 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=19/19 les/c/f=20/20/0 sis=42) [2] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1a( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.19( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.4( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.b( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=42/43 n=0 ec=20/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.2( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.d( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.14( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.13( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 43 pg[3.10( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=20/20 les/c/f=21/21/0 sis=42) [1] r=0 lpr=42 pi=[20,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:46 compute-0 ceph-mon[75071]: pgmap v93: 11 pgs: 1 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:46 compute-0 ceph-mon[75071]: osdmap e43: 3 total, 3 up, 3 in
Dec 13 03:45:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Dec 13 03:45:46 compute-0 podman[97921]: 2025-12-13 03:45:46.271171485 +0000 UTC m=+0.047764822 container create ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_tharp, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:45:46 compute-0 systemd[1]: Started libpod-conmon-ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739.scope.
Dec 13 03:45:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8a728e2dd3c4033987f798d3e02921e2ab5127a3d2bb885dca5ac4963e76bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8a728e2dd3c4033987f798d3e02921e2ab5127a3d2bb885dca5ac4963e76bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8a728e2dd3c4033987f798d3e02921e2ab5127a3d2bb885dca5ac4963e76bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8a728e2dd3c4033987f798d3e02921e2ab5127a3d2bb885dca5ac4963e76bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d8a728e2dd3c4033987f798d3e02921e2ab5127a3d2bb885dca5ac4963e76bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:46 compute-0 podman[97921]: 2025-12-13 03:45:46.249572341 +0000 UTC m=+0.026165688 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v96: 73 pgs: 63 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 13 03:45:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:46 compute-0 podman[97921]: 2025-12-13 03:45:46.471674994 +0000 UTC m=+0.248268421 container init ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:45:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Dec 13 03:45:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2901257270' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:45:46 compute-0 intelligent_driscoll[97875]: 
Dec 13 03:45:46 compute-0 podman[97921]: 2025-12-13 03:45:46.477818642 +0000 UTC m=+0.254411969 container start ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_tharp, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:46 compute-0 podman[97921]: 2025-12-13 03:45:46.487292475 +0000 UTC m=+0.263885802 container attach ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_tharp, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:45:46 compute-0 intelligent_driscoll[97875]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advance
d","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.gnpexe","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Dec 13 03:45:46 compute-0 systemd[1]: libpod-15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832.scope: Deactivated successfully.
Dec 13 03:45:46 compute-0 podman[97945]: 2025-12-13 03:45:46.53100419 +0000 UTC m=+0.021454672 container died 15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832 (image=quay.io/ceph/ceph:v20, name=intelligent_driscoll, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-0d26f0822d91ee71675f18098990e1371ec73bd459fe25a41a3db35973a024c5-merged.mount: Deactivated successfully.
Dec 13 03:45:46 compute-0 podman[97945]: 2025-12-13 03:45:46.579902053 +0000 UTC m=+0.070352525 container remove 15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832 (image=quay.io/ceph/ceph:v20, name=intelligent_driscoll, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 03:45:46 compute-0 systemd[1]: libpod-conmon-15af989736cdd129a6acb81901401ade726d4162b154e69c468d6c6430da4832.scope: Deactivated successfully.
Dec 13 03:45:46 compute-0 sudo[97825]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:46 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Dec 13 03:45:46 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Dec 13 03:45:46 compute-0 zen_tharp[97938]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:45:46 compute-0 zen_tharp[97938]: --> All data devices are unavailable
Dec 13 03:45:46 compute-0 systemd[1]: libpod-ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739.scope: Deactivated successfully.
Dec 13 03:45:46 compute-0 podman[97921]: 2025-12-13 03:45:46.934070306 +0000 UTC m=+0.710663653 container died ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_tharp, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Dec 13 03:45:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d8a728e2dd3c4033987f798d3e02921e2ab5127a3d2bb885dca5ac4963e76bc-merged.mount: Deactivated successfully.
Dec 13 03:45:46 compute-0 podman[97921]: 2025-12-13 03:45:46.974445003 +0000 UTC m=+0.751038330 container remove ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_tharp, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:46 compute-0 systemd[1]: libpod-conmon-ff8d7d7ef58cdfa695d9626db49ecb1d78bf2eba9899295c45da42fb9881d739.scope: Deactivated successfully.
Dec 13 03:45:47 compute-0 sudo[97777]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:47 compute-0 sudo[97986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:47 compute-0 sudo[97986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:47 compute-0 sudo[97986]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:47 compute-0 sudo[98011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:45:47 compute-0 sudo[98011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Dec 13 03:45:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 13 03:45:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Dec 13 03:45:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Dec 13 03:45:47 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev ba777b1f-0b76-49ec-b2b1-6b34193f03bb (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 13 03:45:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:47 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44 pruub=12.785944939s) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active pruub 69.746971130s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:47 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44 pruub=12.785944939s) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown pruub 69.746971130s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2901257270' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Dec 13 03:45:47 compute-0 ceph-mon[75071]: 3.1f scrub starts
Dec 13 03:45:47 compute-0 ceph-mon[75071]: 3.1f scrub ok
Dec 13 03:45:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Dec 13 03:45:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:47 compute-0 ceph-mon[75071]: osdmap e44: 3 total, 3 up, 3 in
Dec 13 03:45:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:47 compute-0 podman[98047]: 2025-12-13 03:45:47.385284214 +0000 UTC m=+0.037015881 container create e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:45:47 compute-0 sudo[98084]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlqnoxyzvlsrurtfgjazjufmebgsrjv ; /usr/bin/python3'
Dec 13 03:45:47 compute-0 sudo[98084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:47 compute-0 systemd[1]: Started libpod-conmon-e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68.scope.
Dec 13 03:45:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:47 compute-0 podman[98047]: 2025-12-13 03:45:47.458588954 +0000 UTC m=+0.110320631 container init e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_yonath, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:47 compute-0 podman[98047]: 2025-12-13 03:45:47.465939826 +0000 UTC m=+0.117671493 container start e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_yonath, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:47 compute-0 podman[98047]: 2025-12-13 03:45:47.368621842 +0000 UTC m=+0.020353539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:47 compute-0 nifty_yonath[98090]: 167 167
Dec 13 03:45:47 compute-0 podman[98047]: 2025-12-13 03:45:47.468905912 +0000 UTC m=+0.120637579 container attach e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_yonath, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:45:47 compute-0 podman[98047]: 2025-12-13 03:45:47.469892 +0000 UTC m=+0.121623667 container died e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:45:47 compute-0 systemd[1]: libpod-e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68.scope: Deactivated successfully.
Dec 13 03:45:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-c3c6ee978d64119a522ce6c657795464e725a6cfd89c10de23191c4669ed26cd-merged.mount: Deactivated successfully.
Dec 13 03:45:47 compute-0 podman[98047]: 2025-12-13 03:45:47.507374464 +0000 UTC m=+0.159106131 container remove e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_yonath, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:45:47 compute-0 systemd[1]: libpod-conmon-e17fb6b36ff093c855b1a4f1c1764bdfd543066f7ef45a44a0664aac9a451a68.scope: Deactivated successfully.
Dec 13 03:45:47 compute-0 python3[98089]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:47 compute-0 podman[98111]: 2025-12-13 03:45:47.586150012 +0000 UTC m=+0.039408521 container create cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421 (image=quay.io/ceph/ceph:v20, name=naughty_shamir, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:47 compute-0 systemd[1]: Started libpod-conmon-cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421.scope.
Dec 13 03:45:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb9d900089ee9739cd4e4fd5f70b0daddebae4e1ccb47de8faf12e3e071779b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fb9d900089ee9739cd4e4fd5f70b0daddebae4e1ccb47de8faf12e3e071779b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e44 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:47 compute-0 podman[98131]: 2025-12-13 03:45:47.652891542 +0000 UTC m=+0.038617758 container create 4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:45:47 compute-0 podman[98111]: 2025-12-13 03:45:47.658400031 +0000 UTC m=+0.111658550 container init cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421 (image=quay.io/ceph/ceph:v20, name=naughty_shamir, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:45:47 compute-0 podman[98111]: 2025-12-13 03:45:47.567127533 +0000 UTC m=+0.020386062 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:47 compute-0 podman[98111]: 2025-12-13 03:45:47.663477418 +0000 UTC m=+0.116735927 container start cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421 (image=quay.io/ceph/ceph:v20, name=naughty_shamir, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:47 compute-0 podman[98111]: 2025-12-13 03:45:47.667197846 +0000 UTC m=+0.120456355 container attach cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421 (image=quay.io/ceph/ceph:v20, name=naughty_shamir, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 03:45:47 compute-0 systemd[1]: Started libpod-conmon-4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d.scope.
Dec 13 03:45:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7def594ba699c6e7cab85bf5b649b58fc3b067f673a24c963a1530e2f8f0c194/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7def594ba699c6e7cab85bf5b649b58fc3b067f673a24c963a1530e2f8f0c194/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7def594ba699c6e7cab85bf5b649b58fc3b067f673a24c963a1530e2f8f0c194/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7def594ba699c6e7cab85bf5b649b58fc3b067f673a24c963a1530e2f8f0c194/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:47 compute-0 podman[98131]: 2025-12-13 03:45:47.635964093 +0000 UTC m=+0.021690329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:47 compute-0 podman[98131]: 2025-12-13 03:45:47.737917262 +0000 UTC m=+0.123643538 container init 4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:47 compute-0 podman[98131]: 2025-12-13 03:45:47.748894929 +0000 UTC m=+0.134621175 container start 4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:45:47 compute-0 podman[98131]: 2025-12-13 03:45:47.753129671 +0000 UTC m=+0.138855947 container attach 4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:45:47 compute-0 lucid_villani[98151]: {
Dec 13 03:45:47 compute-0 lucid_villani[98151]:     "0": [
Dec 13 03:45:47 compute-0 lucid_villani[98151]:         {
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "devices": [
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "/dev/loop3"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             ],
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_name": "ceph_lv0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_size": "21470642176",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "name": "ceph_lv0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "tags": {
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.crush_device_class": "",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.encrypted": "0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osd_id": "0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.type": "block",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.vdo": "0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.with_tpm": "0"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             },
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "type": "block",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "vg_name": "ceph_vg0"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:         }
Dec 13 03:45:47 compute-0 lucid_villani[98151]:     ],
Dec 13 03:45:47 compute-0 lucid_villani[98151]:     "1": [
Dec 13 03:45:47 compute-0 lucid_villani[98151]:         {
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "devices": [
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "/dev/loop4"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             ],
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_name": "ceph_lv1",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_size": "21470642176",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "name": "ceph_lv1",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "tags": {
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.crush_device_class": "",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.encrypted": "0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osd_id": "1",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.type": "block",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.vdo": "0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.with_tpm": "0"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             },
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "type": "block",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "vg_name": "ceph_vg1"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:         }
Dec 13 03:45:47 compute-0 lucid_villani[98151]:     ],
Dec 13 03:45:47 compute-0 lucid_villani[98151]:     "2": [
Dec 13 03:45:47 compute-0 lucid_villani[98151]:         {
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "devices": [
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "/dev/loop5"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             ],
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_name": "ceph_lv2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_size": "21470642176",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "name": "ceph_lv2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "tags": {
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.cluster_name": "ceph",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.crush_device_class": "",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.encrypted": "0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.objectstore": "bluestore",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osd_id": "2",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.type": "block",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.vdo": "0",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:                 "ceph.with_tpm": "0"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             },
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "type": "block",
Dec 13 03:45:47 compute-0 lucid_villani[98151]:             "vg_name": "ceph_vg2"
Dec 13 03:45:47 compute-0 lucid_villani[98151]:         }
Dec 13 03:45:47 compute-0 lucid_villani[98151]:     ]
Dec 13 03:45:47 compute-0 lucid_villani[98151]: }
Dec 13 03:45:48 compute-0 systemd[1]: libpod-4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d.scope: Deactivated successfully.
Dec 13 03:45:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Dec 13 03:45:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849338705' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 13 03:45:48 compute-0 naughty_shamir[98138]: mimic
Dec 13 03:45:48 compute-0 podman[98179]: 2025-12-13 03:45:48.059158691 +0000 UTC m=+0.024671075 container died 4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:48 compute-0 systemd[1]: libpod-cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421.scope: Deactivated successfully.
Dec 13 03:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-7def594ba699c6e7cab85bf5b649b58fc3b067f673a24c963a1530e2f8f0c194-merged.mount: Deactivated successfully.
Dec 13 03:45:48 compute-0 podman[98179]: 2025-12-13 03:45:48.099707474 +0000 UTC m=+0.065219818 container remove 4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_villani, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:45:48 compute-0 systemd[76438]: Starting Mark boot as successful...
Dec 13 03:45:48 compute-0 systemd[76438]: Finished Mark boot as successful.
Dec 13 03:45:48 compute-0 systemd[1]: libpod-conmon-4505ea8756d8548be837eb6d2664bcbca507095957e5fbb921f492d8bf212e0d.scope: Deactivated successfully.
Dec 13 03:45:48 compute-0 podman[98196]: 2025-12-13 03:45:48.111080853 +0000 UTC m=+0.025119168 container died cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421 (image=quay.io/ceph/ceph:v20, name=naughty_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fb9d900089ee9739cd4e4fd5f70b0daddebae4e1ccb47de8faf12e3e071779b-merged.mount: Deactivated successfully.
Dec 13 03:45:48 compute-0 podman[98196]: 2025-12-13 03:45:48.146077794 +0000 UTC m=+0.060116089 container remove cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421 (image=quay.io/ceph/ceph:v20, name=naughty_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:45:48 compute-0 systemd[1]: libpod-conmon-cb2a7e7817cc5a1ad80697066ddbc01f6391ce246b38b277e9a42f6755a2d421.scope: Deactivated successfully.
Dec 13 03:45:48 compute-0 sudo[98011]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Dec 13 03:45:48 compute-0 sudo[98084]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Dec 13 03:45:48 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Dec 13 03:45:48 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 8d330eda-cfc7-4291-90d8-e4c5d196f53a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 03:45:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=44 pruub=10.558342934s) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active pruub 80.634132385s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=44 pruub=10.558342934s) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown pruub 80.634132385s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.1( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.3( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.2( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.12( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.11( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.14( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.13( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.16( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.15( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.18( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.17( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.1a( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.19( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.1c( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.1b( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.1e( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.1d( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.5( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.4( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.7( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.6( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.9( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.8( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.b( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.a( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.c( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.d( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.e( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.10( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.f( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 45 pg[4.1f( empty local-lis/les=22/23 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=23/24 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.10( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.17( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.a( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.b( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=44/45 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.6( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1f( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.d( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.e( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1c( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1b( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.8( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 45 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [2] r=0 lpr=44 pi=[23,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:48 compute-0 sudo[98211]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:45:48 compute-0 sudo[98211]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:48 compute-0 sudo[98211]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:48 compute-0 ceph-mon[75071]: pgmap v96: 73 pgs: 63 unknown, 10 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 511 B/s wr, 1 op/s
Dec 13 03:45:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/849338705' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Dec 13 03:45:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:48 compute-0 ceph-mon[75071]: osdmap e45: 3 total, 3 up, 3 in
Dec 13 03:45:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:48 compute-0 sudo[98236]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:45:48 compute-0 sudo[98236]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:48 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Dec 13 03:45:48 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Dec 13 03:45:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v99: 135 pgs: 32 peering, 94 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Dec 13 03:45:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec 13 03:45:48 compute-0 podman[98273]: 2025-12-13 03:45:48.548013478 +0000 UTC m=+0.044192559 container create 1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:45:48 compute-0 systemd[1]: Started libpod-conmon-1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d.scope.
Dec 13 03:45:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:48 compute-0 podman[98273]: 2025-12-13 03:45:48.52979406 +0000 UTC m=+0.025973171 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:48 compute-0 podman[98273]: 2025-12-13 03:45:48.626054995 +0000 UTC m=+0.122234096 container init 1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:45:48 compute-0 podman[98273]: 2025-12-13 03:45:48.637872916 +0000 UTC m=+0.134051997 container start 1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:45:48 compute-0 podman[98273]: 2025-12-13 03:45:48.640795821 +0000 UTC m=+0.136974922 container attach 1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:45:48 compute-0 compassionate_hamilton[98288]: 167 167
Dec 13 03:45:48 compute-0 systemd[1]: libpod-1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d.scope: Deactivated successfully.
Dec 13 03:45:48 compute-0 podman[98273]: 2025-12-13 03:45:48.642955453 +0000 UTC m=+0.139134554 container died 1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:45:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7e1fb05d7af279191fde8d1a324479f89ac70ceca0a5b648c8b06b4a12f8a75-merged.mount: Deactivated successfully.
Dec 13 03:45:48 compute-0 podman[98273]: 2025-12-13 03:45:48.68020118 +0000 UTC m=+0.176380261 container remove 1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:45:48 compute-0 systemd[1]: libpod-conmon-1839fe34587bfbb4445fd6faf9f006590d5fc412b30047e963826f15fc0caa5d.scope: Deactivated successfully.
Dec 13 03:45:48 compute-0 podman[98313]: 2025-12-13 03:45:48.8282033 +0000 UTC m=+0.044571600 container create f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:48 compute-0 systemd[1]: Started libpod-conmon-f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc.scope.
Dec 13 03:45:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b193413e288b27c00c36581f6c552306451bfaa3dde5ab57568e7642037b3a08/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b193413e288b27c00c36581f6c552306451bfaa3dde5ab57568e7642037b3a08/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b193413e288b27c00c36581f6c552306451bfaa3dde5ab57568e7642037b3a08/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b193413e288b27c00c36581f6c552306451bfaa3dde5ab57568e7642037b3a08/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:48 compute-0 podman[98313]: 2025-12-13 03:45:48.900852441 +0000 UTC m=+0.117220751 container init f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:45:48 compute-0 podman[98313]: 2025-12-13 03:45:48.808618084 +0000 UTC m=+0.024986404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:45:48 compute-0 podman[98313]: 2025-12-13 03:45:48.907353379 +0000 UTC m=+0.123721679 container start f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:48 compute-0 podman[98313]: 2025-12-13 03:45:48.912634182 +0000 UTC m=+0.129002482 container attach f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:45:49 compute-0 sudo[98361]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlolrlclgvvvvlorquovwhmdmjfgumcn ; /usr/bin/python3'
Dec 13 03:45:49 compute-0 sudo[98361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:45:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 13 03:45:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Dec 13 03:45:49 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 0aa8a519-b68f-4c9b-b43f-a9449ab6b89a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 03:45:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:49 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=46 pruub=13.335516930s) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active pruub 77.829544067s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[6.0( v 36'39 (0'0,36'39] local-lis/les=24/25 n=22 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=46 pruub=11.874160767s) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 35'38 mlcod 35'38 active pruub 82.956787109s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[6.0( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=46 pruub=11.874160767s) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 35'38 mlcod 0'0 unknown pruub 82.956787109s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:49 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 46 pg[7.0( empty local-lis/les=25/26 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=46 pruub=13.335516930s) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown pruub 77.829544067s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.1c( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.1d( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.1f( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.1e( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.8( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.6( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.7( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.b( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.1b( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.a( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.5( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.1a( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.9( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.4( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.19( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.3( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.2( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.1( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.0( empty local-lis/les=44/46 n=0 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.c( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.e( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.d( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.12( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.13( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.11( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.10( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.f( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.16( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.17( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.18( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.15( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 46 pg[4.14( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=22/22 les/c/f=23/23/0 sis=44) [0] r=0 lpr=44 pi=[22,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:49 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Dec 13 03:45:49 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Dec 13 03:45:49 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:49 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Dec 13 03:45:49 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:49 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:49 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Dec 13 03:45:49 compute-0 ceph-mon[75071]: osdmap e46: 3 total, 3 up, 3 in
Dec 13 03:45:49 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:49 compute-0 python3[98370]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:45:49 compute-0 podman[98378]: 2025-12-13 03:45:49.310090746 +0000 UTC m=+0.044766996 container create 28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e (image=quay.io/ceph/ceph:v20, name=kind_engelbart, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:45:49 compute-0 systemd[1]: Started libpod-conmon-28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e.scope.
Dec 13 03:45:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc89f0042f416ffab61e296cc3b312082c84f94f9b020818a3cc07af1b0d705/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bc89f0042f416ffab61e296cc3b312082c84f94f9b020818a3cc07af1b0d705/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:45:49 compute-0 podman[98378]: 2025-12-13 03:45:49.381261783 +0000 UTC m=+0.115938023 container init 28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e (image=quay.io/ceph/ceph:v20, name=kind_engelbart, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:45:49 compute-0 podman[98378]: 2025-12-13 03:45:49.290380516 +0000 UTC m=+0.025056786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:45:49 compute-0 podman[98378]: 2025-12-13 03:45:49.388244026 +0000 UTC m=+0.122920276 container start 28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e (image=quay.io/ceph/ceph:v20, name=kind_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:49 compute-0 podman[98378]: 2025-12-13 03:45:49.392509949 +0000 UTC m=+0.127186199 container attach 28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e (image=quay.io/ceph/ceph:v20, name=kind_engelbart, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:49 compute-0 lvm[98472]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:45:49 compute-0 lvm[98471]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:45:49 compute-0 lvm[98471]: VG ceph_vg0 finished
Dec 13 03:45:49 compute-0 lvm[98472]: VG ceph_vg1 finished
Dec 13 03:45:49 compute-0 lvm[98474]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:45:49 compute-0 lvm[98474]: VG ceph_vg2 finished
Dec 13 03:45:49 compute-0 goofy_ramanujan[98330]: {}
Dec 13 03:45:49 compute-0 systemd[1]: libpod-f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc.scope: Deactivated successfully.
Dec 13 03:45:49 compute-0 podman[98313]: 2025-12-13 03:45:49.676972565 +0000 UTC m=+0.893340875 container died f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:45:49 compute-0 systemd[1]: libpod-f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc.scope: Consumed 1.272s CPU time.
Dec 13 03:45:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b193413e288b27c00c36581f6c552306451bfaa3dde5ab57568e7642037b3a08-merged.mount: Deactivated successfully.
Dec 13 03:45:49 compute-0 podman[98313]: 2025-12-13 03:45:49.727936359 +0000 UTC m=+0.944304659 container remove f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_ramanujan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:45:49 compute-0 systemd[1]: libpod-conmon-f6b3ae428d23c8d467378b809ce0f623ab26ad4592911286c324a4b7b77fa8bc.scope: Deactivated successfully.
Dec 13 03:45:49 compute-0 sudo[98236]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Dec 13 03:45:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4131625392' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 13 03:45:49 compute-0 kind_engelbart[98415]: 
Dec 13 03:45:49 compute-0 kind_engelbart[98415]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Dec 13 03:45:49 compute-0 systemd[1]: libpod-28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e.scope: Deactivated successfully.
Dec 13 03:45:49 compute-0 podman[98378]: 2025-12-13 03:45:49.910591931 +0000 UTC m=+0.645268221 container died 28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e (image=quay.io/ceph/ceph:v20, name=kind_engelbart, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:45:49 compute-0 sudo[98490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:45:49 compute-0 sudo[98490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:45:49 compute-0 sudo[98490]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bc89f0042f416ffab61e296cc3b312082c84f94f9b020818a3cc07af1b0d705-merged.mount: Deactivated successfully.
Dec 13 03:45:49 compute-0 podman[98378]: 2025-12-13 03:45:49.968977609 +0000 UTC m=+0.703653869 container remove 28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e (image=quay.io/ceph/ceph:v20, name=kind_engelbart, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:45:49 compute-0 systemd[1]: libpod-conmon-28a3a293420b4a0b9b6a4de273a719b3f51b0702819ab014d09835e11087278e.scope: Deactivated successfully.
Dec 13 03:45:50 compute-0 sudo[98361]: pam_unix(sudo:session): session closed for user root
Dec 13 03:45:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Dec 13 03:45:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Dec 13 03:45:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Dec 13 03:45:50 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev c1300614-19a6-4fe4-abe2-c7df45c1f1b3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 03:45:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:45:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.5( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.9( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.4( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.8( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.7( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.a( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=24/25 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.2( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.e( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.f( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.c( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.d( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=24/25 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=25/26 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.0( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 35'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.16( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.10( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.b( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.17( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.d( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 47 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=24/24 les/c/f=25/25/0 sis=46) [0] r=0 lpr=46 pi=[24,46)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=46/47 n=0 ec=25/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.12( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.14( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.7( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1d( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1e( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.19( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 47 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=25/25 les/c/f=26/26/0 sis=46) [1] r=0 lpr=46 pi=[25,46)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:50 compute-0 ceph-mon[75071]: 2.1d scrub starts
Dec 13 03:45:50 compute-0 ceph-mon[75071]: 2.1d scrub ok
Dec 13 03:45:50 compute-0 ceph-mon[75071]: pgmap v99: 135 pgs: 32 peering, 94 unknown, 9 active+clean; 453 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:50 compute-0 ceph-mon[75071]: 2.1f scrub starts
Dec 13 03:45:50 compute-0 ceph-mon[75071]: 2.1f scrub ok
Dec 13 03:45:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4131625392' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Dec 13 03:45:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:50 compute-0 ceph-mon[75071]: osdmap e47: 3 total, 3 up, 3 in
Dec 13 03:45:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:45:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v102: 181 pgs: 32 peering, 46 unknown, 103 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 485 KiB/s rd, 16 KiB/s wr, 950 op/s
Dec 13 03:45:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:50 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Dec 13 03:45:50 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Dec 13 03:45:51 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Dec 13 03:45:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Dec 13 03:45:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Dec 13 03:45:51 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Dec 13 03:45:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 4e30e800-84ab-4904-8032-a0095faec758 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 48 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=34/35 n=6 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=10.836924553s) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 35'5 mlcod 35'5 active pruub 77.342689514s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:51 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 48 pg[9.0( v 43'1441 (0'0,43'1441] local-lis/les=36/37 n=242 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=48 pruub=12.852599144s) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 43'1440 mlcod 43'1440 active pruub 79.358726501s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 1ea94ac3-8569-496d-aa93-05c8263c5653 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 1ea94ac3-8569-496d-aa93-05c8263c5653 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 8 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev bb38c43d-4439-4a3c-9d5f-b2822e2fffd8 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event bb38c43d-4439-4a3c-9d5f-b2822e2fffd8 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 7 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 405f74ad-d618-4af1-8c85-92fb96ec0e20 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 405f74ad-d618-4af1-8c85-92fb96ec0e20 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 6 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 12d56e5e-b5ca-4bed-8a42-e36c0f90cd94 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 12d56e5e-b5ca-4bed-8a42-e36c0f90cd94 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 5 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev ba777b1f-0b76-49ec-b2b1-6b34193f03bb (PG autoscaler increasing pool 6 PGs from 1 to 16)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event ba777b1f-0b76-49ec-b2b1-6b34193f03bb (PG autoscaler increasing pool 6 PGs from 1 to 16) in 4 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 8d330eda-cfc7-4291-90d8-e4c5d196f53a (PG autoscaler increasing pool 7 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 8d330eda-cfc7-4291-90d8-e4c5d196f53a (PG autoscaler increasing pool 7 PGs from 1 to 32) in 3 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 0aa8a519-b68f-4c9b-b43f-a9449ab6b89a (PG autoscaler increasing pool 8 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 0aa8a519-b68f-4c9b-b43f-a9449ab6b89a (PG autoscaler increasing pool 8 PGs from 1 to 32) in 2 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev c1300614-19a6-4fe4-abe2-c7df45c1f1b3 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event c1300614-19a6-4fe4-abe2-c7df45c1f1b3 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 1 seconds
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 4e30e800-84ab-4904-8032-a0095faec758 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Dec 13 03:45:51 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 4e30e800-84ab-4904-8032-a0095faec758 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 0 seconds
Dec 13 03:45:51 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 48 pg[8.0( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48 pruub=10.836924553s) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 35'5 mlcod 0'0 unknown pruub 77.342689514s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:51 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 48 pg[9.0( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=48 pruub=12.852599144s) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 43'1440 mlcod 0'0 unknown pruub 79.358726501s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f3c800 space 0x5637d63dab40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fea200 space 0x5637d60b8540 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f62100 space 0x5637d56c0240 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f63e00 space 0x5637d63db140 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fd9780 space 0x5637d5222240 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601af80 space 0x5637d74bd140 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fea380 space 0x5637d51cbd40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601ad00 space 0x5637d6483440 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fea600 space 0x5637d51d8240 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feb680 space 0x5637d6322540 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feb480 space 0x5637d6322e40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fea800 space 0x5637d6483a40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f93880 space 0x5637d5875140 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601b000 space 0x5637d51d9440 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f30600 space 0x5637d520ab40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f62380 space 0x5637d5821140 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f29380 space 0x5637d5278240 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f62e80 space 0x5637d638cb40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f30b80 space 0x5637d5267140 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feac80 space 0x5637d6482b40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f30d80 space 0x5637d663d440 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601ac80 space 0x5637d520b440 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feaa00 space 0x5637d6399a40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601ae00 space 0x5637d520a240 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f30e80 space 0x5637d6415d40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f3c780 space 0x5637d607a840 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fea180 space 0x5637d51a0840 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601aa80 space 0x5637d5820240 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f29100 space 0x5637d6323740 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f93c80 space 0x5637d5875d40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f30980 space 0x5637d5266840 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d6049f00 space 0x5637d5884b40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f63000 space 0x5637d57a8840 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feb980 space 0x5637d51a1140 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f28e00 space 0x5637d5821740 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601bd00 space 0x5637d663c240 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f20500 space 0x5637d5280240 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f31300 space 0x5637d60ceb40 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f20680 space 0x5637d6371a40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f20480 space 0x5637d74bc540 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601bf00 space 0x5637d663cb40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601aa00 space 0x5637d60c5140 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f62c80 space 0x5637d6370b40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f30b00 space 0x5637d6411a40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feae80 space 0x5637d6482240 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f62f00 space 0x5637d6336b40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601ab00 space 0x5637d5281140 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feb180 space 0x5637d6399140 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f63700 space 0x5637d60b6b40 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fa3680 space 0x5637d525e540 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fead00 space 0x5637d63a9140 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d601a580 space 0x5637d520bd40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feb700 space 0x5637d6368840 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fd9f00 space 0x5637d5223140 0x0~9a clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5feb880 space 0x5637d51a1a40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f31b80 space 0x5637d5272840 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fd9a80 space 0x5637d6425140 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5fd9b00 space 0x5637d5281d40 0x0~98 clean)
Dec 13 03:45:51 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5637d55fd8c0) split_cache   moving buffer(0x5637d5f31f80 space 0x5637d5267a40 0x0~6e clean)
Dec 13 03:45:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:51 compute-0 ceph-mon[75071]: 4.1d scrub starts
Dec 13 03:45:51 compute-0 ceph-mon[75071]: 4.1d scrub ok
Dec 13 03:45:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:45:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:51 compute-0 ceph-mon[75071]: osdmap e48: 3 total, 3 up, 3 in
Dec 13 03:45:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Dec 13 03:45:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Dec 13 03:45:51 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Dec 13 03:45:51 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Dec 13 03:45:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Dec 13 03:45:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Dec 13 03:45:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.14( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.15( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.14( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.15( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.16( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.17( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.16( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.17( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.10( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.11( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.11( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.10( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.12( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.13( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.13( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.12( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.d( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.c( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.f( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.9( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.8( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.b( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.3( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.2( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.e( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.a( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.9( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.8( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.3( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.2( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.7( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.6( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.7( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.4( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.5( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.4( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.5( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1a( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1b( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1a( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.19( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.18( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.19( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.18( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1e( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1f( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1e( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1d( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1c( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1d( v 43'1441 lc 0'0 (0'0,43'1441] local-lis/les=36/37 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1c( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=34/35 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.16( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.14( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.17( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.10( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.12( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.13( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.8( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.a( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.3( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.0( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=36/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 43'1440 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.2( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.0( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=34/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 35'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.a( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.7( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.4( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.5( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.5( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1a( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.19( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.18( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1c( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=36/36 les/c/f=37/37/0 sis=48) [1] r=0 lpr=48 pi=[36,48)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1e( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 49 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=34/34 les/c/f=35/35/0 sis=48) [1] r=0 lpr=48 pi=[34,48)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:52 compute-0 ceph-mon[75071]: pgmap v102: 181 pgs: 32 peering, 46 unknown, 103 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 485 KiB/s rd, 16 KiB/s wr, 950 op/s
Dec 13 03:45:52 compute-0 ceph-mon[75071]: 2.1c scrub starts
Dec 13 03:45:52 compute-0 ceph-mon[75071]: 2.1c scrub ok
Dec 13 03:45:52 compute-0 ceph-mon[75071]: 3.1e scrub starts
Dec 13 03:45:52 compute-0 ceph-mon[75071]: 3.1e scrub ok
Dec 13 03:45:52 compute-0 ceph-mon[75071]: 4.1c scrub starts
Dec 13 03:45:52 compute-0 ceph-mon[75071]: 4.1c scrub ok
Dec 13 03:45:52 compute-0 ceph-mon[75071]: osdmap e49: 3 total, 3 up, 3 in
Dec 13 03:45:52 compute-0 ceph-mgr[75360]: [progress INFO root] Writing back 14 completed events
Dec 13 03:45:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 03:45:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v105: 243 pgs: 32 peering, 108 unknown, 103 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 485 KiB/s rd, 16 KiB/s wr, 950 op/s
Dec 13 03:45:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:45:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:52 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Dec 13 03:45:52 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Dec 13 03:45:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:52 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Dec 13 03:45:52 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Dec 13 03:45:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Dec 13 03:45:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Dec 13 03:45:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Dec 13 03:45:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:45:53 compute-0 ceph-mon[75071]: pgmap v105: 243 pgs: 32 peering, 108 unknown, 103 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 485 KiB/s rd, 16 KiB/s wr, 950 op/s
Dec 13 03:45:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:45:53 compute-0 ceph-mon[75071]: 3.1c scrub starts
Dec 13 03:45:53 compute-0 ceph-mon[75071]: 3.1c scrub ok
Dec 13 03:45:53 compute-0 ceph-mon[75071]: 4.1f scrub starts
Dec 13 03:45:53 compute-0 ceph-mon[75071]: 4.1f scrub ok
Dec 13 03:45:53 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 50 pg[10.0( v 42'18 (0'0,42'18] local-lis/les=38/39 n=9 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=50 pruub=12.418254852s) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 42'17 mlcod 42'17 active pruub 75.940559387s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:45:53 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 50 pg[10.0( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=50 pruub=12.418254852s) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 42'17 mlcod 0'0 unknown pruub 75.940559387s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Dec 13 03:45:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:45:54 compute-0 ceph-mon[75071]: osdmap e50: 3 total, 3 up, 3 in
Dec 13 03:45:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Dec 13 03:45:54 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.12( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.11( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.10( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1f( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1e( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1d( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1c( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1b( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.19( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1a( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.18( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.7( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.6( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.5( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.4( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.3( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.8( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.f( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.9( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.a( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.b( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.c( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.d( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.e( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.2( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.13( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.14( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.15( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.16( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.17( v 42'18 lc 0'0 (0'0,42'18] local-lis/les=38/39 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.12( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1d( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1b( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1f( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1c( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.18( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.5( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.3( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.0( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=38/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 42'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.9( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.a( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.c( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.d( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.14( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.15( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.e( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 51 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=38/38 les/c/f=39/39/0 sis=50) [2] r=0 lpr=50 pi=[38,50)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:45:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v108: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:54 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Dec 13 03:45:54 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Dec 13 03:45:55 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Dec 13 03:45:55 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Dec 13 03:45:55 compute-0 ceph-mon[75071]: osdmap e51: 3 total, 3 up, 3 in
Dec 13 03:45:55 compute-0 ceph-mon[75071]: pgmap v108: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:55 compute-0 ceph-mon[75071]: 4.8 scrub starts
Dec 13 03:45:55 compute-0 ceph-mon[75071]: 4.8 scrub ok
Dec 13 03:45:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Dec 13 03:45:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Dec 13 03:45:56 compute-0 ceph-mon[75071]: 2.1b scrub starts
Dec 13 03:45:56 compute-0 ceph-mon[75071]: 2.1b scrub ok
Dec 13 03:45:56 compute-0 ceph-mon[75071]: 3.1d scrub starts
Dec 13 03:45:56 compute-0 ceph-mon[75071]: 3.1d scrub ok
Dec 13 03:45:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v109: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:56 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Dec 13 03:45:56 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Dec 13 03:45:57 compute-0 ceph-mon[75071]: pgmap v109: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:57 compute-0 ceph-mon[75071]: 3.1a scrub starts
Dec 13 03:45:57 compute-0 ceph-mon[75071]: 3.1a scrub ok
Dec 13 03:45:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:45:58 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Dec 13 03:45:58 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Dec 13 03:45:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v110: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:58 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Dec 13 03:45:58 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Dec 13 03:45:58 compute-0 ceph-mon[75071]: 4.6 scrub starts
Dec 13 03:45:58 compute-0 ceph-mon[75071]: 4.6 scrub ok
Dec 13 03:45:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Dec 13 03:45:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Dec 13 03:45:59 compute-0 ceph-mon[75071]: pgmap v110: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:45:59 compute-0 ceph-mon[75071]: 3.1b scrub starts
Dec 13 03:45:59 compute-0 ceph-mon[75071]: 3.1b scrub ok
Dec 13 03:46:00 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Dec 13 03:46:00 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Dec 13 03:46:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v111: 274 pgs: 1 active+clean+scrubbing, 273 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:46:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Dec 13 03:46:00 compute-0 ceph-mon[75071]: 3.19 scrub starts
Dec 13 03:46:00 compute-0 ceph-mon[75071]: 3.19 scrub ok
Dec 13 03:46:00 compute-0 ceph-mon[75071]: 4.7 scrub starts
Dec 13 03:46:00 compute-0 ceph-mon[75071]: 4.7 scrub ok
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Dec 13 03:46:01 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019189835s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.087944031s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1c( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019145012s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.087944031s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.024388313s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.093215942s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019308090s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088165283s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.024343491s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.093215942s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019083977s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.087989807s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.8( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019058228s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.087989807s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.028418541s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.097396851s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.028404236s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097396851s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019066811s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088211060s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019071579s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088233948s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.028243065s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.097419739s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1b( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019047737s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088211060s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.a( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019056320s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088233948s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.028222084s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097419739s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019149780s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088417053s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.7( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.018914223s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088165283s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.5( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019126892s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088417053s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019068718s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088417053s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.028131485s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.097511292s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1a( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019050598s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088417053s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.028111458s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097511292s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019105911s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088523865s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.9( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019090652s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088523865s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019083977s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088623047s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027994156s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.097549438s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.4( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.019064903s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088623047s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027958870s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.097557068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027974129s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097549438s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027929306s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097557068s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.018976212s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088691711s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.1( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.018963814s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088691711s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.018949509s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.088706970s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.2( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.018931389s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.088706970s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023745537s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.093711853s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027627945s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.097610474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023698807s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.093696594s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.e( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023727417s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.093711853s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.d( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023681641s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.093696594s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027587891s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097610474s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023665428s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.093811035s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027501106s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 active pruub 96.097640991s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.f( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023649216s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.093811035s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.027462006s) [1] r=-1 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097640991s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023501396s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.093734741s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.10( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023486137s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.093734741s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023466110s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.093727112s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.11( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023446083s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.093727112s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023385048s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.093719482s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.12( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023358345s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.093719482s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023317337s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.093719482s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.024006844s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.094429016s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.13( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023300171s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.093719482s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.14( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023990631s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.094429016s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023529053s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 95.094070435s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[4.18( empty local-lis/les=44/46 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52 pruub=12.023510933s) [2] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 95.094070435s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.10( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.12( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.18( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.14( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.1b( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.8( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.1a( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.9( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.e( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.1( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.a( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.5( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.13( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.11( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[4.1c( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.7( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.12( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221916199s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 active pruub 80.192359924s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.1( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.d( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.12( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221878052s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 80.192359924s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221720695s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.192375183s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221699715s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.192375183s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000257492s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.970962524s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.003123283s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.973899841s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1e( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.003111839s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.973899841s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1d( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000199318s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.970962524s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.223518372s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194473267s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.223501205s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194473267s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000649452s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971664429s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.19( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000614166s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971664429s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.f( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000177383s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971290588s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.18( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000164032s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971290588s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000364304s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971572876s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.17( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000352859s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971572876s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.4( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[4.2( empty local-lis/les=0/0 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.223129272s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194473267s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000288010s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971656799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.223104477s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194473267s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.16( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=9.000243187s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971656799s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002472878s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.974052429s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995359421s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490058899s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1f( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995338440s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490058899s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.999560356s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971199036s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.020147324s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.515022278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.11( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002437592s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.974052429s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1b( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.020128250s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.515022278s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.15( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.999537468s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971199036s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.1e( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.19( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.18( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002458572s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.974136353s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.019773483s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.514770508s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.019763947s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.514770508s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.020858765s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.515968323s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.020847321s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.515968323s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995146751s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490341187s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1e( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995132446s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490341187s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.019761086s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.515090942s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1a( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.019747734s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.515090942s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.020480156s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.515892029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.020470619s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.515892029s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.994796753s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490425110s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1d( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.994763374s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490425110s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.12( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002437592s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.974136353s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.1e( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002375603s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.974212646s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.13( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002353668s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.974212646s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.019124985s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.514991760s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.18( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.019099236s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.514991760s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.019772530s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.515846252s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.019751549s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.515846252s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.12( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.993912697s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490440369s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1b( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.993888855s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490440369s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002394676s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.974304199s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.999322891s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971267700s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.14( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002371788s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.974304199s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.018260956s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.515022278s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1f( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.018242836s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.515022278s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.16( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002381325s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.974487305s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222426414s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194534302s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.15( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002361298s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.974487305s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222391129s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194534302s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.13( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.999290466s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971267700s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222496986s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194778442s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222471237s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194778442s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.998764992s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971122742s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.11( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.018714905s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.515937805s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.018740654s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.515975952s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.018724442s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.515975952s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.018698692s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.515937805s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.018587112s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.515953064s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.018571854s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.515953064s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.1d( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.11( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.998741150s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971122742s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002042770s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.974494934s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.16( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.002029419s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.974494934s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221982956s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194541931s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221961021s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194541931s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.14( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222036362s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194671631s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222019196s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194671631s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.998166084s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970970154s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.998318672s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971122742s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.998151779s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970970154s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.998281479s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971122742s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221727371s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194709778s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221681595s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194709778s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001936913s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.975196838s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.10( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.997699738s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970954895s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.c( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001921654s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.975196838s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221364021s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194740295s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221331596s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194740295s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.997668266s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970954895s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001811981s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.975303650s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.017436028s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.516151428s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.017416000s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.516151428s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.017286301s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.516159058s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.017266273s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.516159058s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.17( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.1f( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.7( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001794815s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.975303650s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221202850s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194755554s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.997297287s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970878601s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.991326332s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490509033s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.015727043s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.514930725s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1c( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.015708923s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.514930725s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.18( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.991272926s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490509033s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221181870s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194755554s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.15( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.7( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.997279167s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970878601s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001807213s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.975517273s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.f( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001789093s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.975517273s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001290321s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.975006104s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.997921944s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.971656799s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.8( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.997900963s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.971656799s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.9( v 51'19 (0'0,51'19] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.220960617s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 active pruub 80.194770813s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.9( v 51'19 (0'0,51'19] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.220928192s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 80.194770813s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996827126s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970748901s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.2( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996810913s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970748901s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.1b( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.14( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001498222s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.975509644s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.5( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001478195s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.975509644s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.220720291s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.194786072s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.220689774s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.194786072s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996590614s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970748901s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001501083s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.975692749s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.3( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996573448s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970748901s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.4( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001486778s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.975692749s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.18( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996386528s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970718384s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.4( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996367455s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970718384s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.015348434s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.515045166s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.3( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.015313148s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.515045166s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.016311646s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.516319275s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.016263008s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.516319275s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.997003555s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.497283936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.015922546s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.516235352s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.7( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996981621s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.497283936s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.015896797s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.516235352s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.11( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.014142990s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.514984131s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.2( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.014125824s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.514984131s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.989592552s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490501404s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.6( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.989569664s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490501404s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.015279770s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.516372681s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.015255928s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.516372681s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.12( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001335144s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.975814819s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.3( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001321793s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.975814819s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.1b( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.9( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000528336s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.975006104s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.d( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.220317841s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 active pruub 80.194847107s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.d( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.220287323s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 80.194847107s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.1f( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996119499s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970710754s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.5( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.996107101s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970710754s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001316071s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.976013184s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.2( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001294136s) [0] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.976013184s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995989799s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970748901s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.10( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.e( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222955704s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 active pruub 80.197738647s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.6( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995954514s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970748901s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.e( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222900391s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 80.197738647s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001142502s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.976013184s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222941399s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.197845459s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.001121521s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.976013184s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222927094s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.197845459s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995649338s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970672607s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.9( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995637894s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970672607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222504616s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.197601318s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995537758s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970672607s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=50/51 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222484589s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.197601318s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.a( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995524406s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970672607s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222349167s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.197608948s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222336769s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.197608948s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.14( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222192764s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 active pruub 80.197616577s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995137215s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970573425s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.14( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222165108s) [1] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 80.197616577s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995111465s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970581055s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.15( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222119331s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 active pruub 80.197624207s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.15( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1b( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995095253s) [1] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970573425s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1c( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.995084763s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970581055s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.13( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.15( v 51'19 (0'0,51'19] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.222093582s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 42'18 unknown NOTIFY pruub 80.197624207s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000882149s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.976425171s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.1a( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000867844s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.976425171s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.992059708s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.967689514s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1d( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.992045403s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.967689514s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.11( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221952438s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.197738647s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221933365s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.197738647s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000518799s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.976387024s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.19( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000476837s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.976387024s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.994390488s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 79.970344543s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[2.1f( empty local-lis/les=42/43 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.994367599s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 79.970344543s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.4( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000340462s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 active pruub 81.976417542s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[5.18( empty local-lis/les=44/45 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=11.000322342s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=0'0 unknown NOTIFY pruub 81.976417542s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.1e( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.1a( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.15( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.7( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.1d( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.8( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.013274193s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.514793396s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.3( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.221765518s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 active pruub 80.197731018s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.1( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.013243675s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.514793396s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.988773346s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490600586s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.5( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.988739967s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490600586s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.014624596s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.516654968s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.014600754s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.516654968s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.13( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.013543129s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.516410828s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.013513565s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.516410828s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.987483978s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.490570068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.3( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.987462997s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.490570068s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.1a( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.012570381s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.516380310s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.012532234s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.516380310s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.007046700s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511207581s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.5( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.007016182s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511207581s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.012182236s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.516647339s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.012151718s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.516647339s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.19( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.006126404s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511215210s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.b( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.c( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.006108284s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511215210s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.988404274s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.493652344s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.8( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.988392830s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.493652344s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.005786896s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511207581s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.e( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.005768776s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511207581s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.7( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.16( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.987940788s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.493835449s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.6( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.1( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.987859726s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.493835449s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.987507820s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.493659973s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.8( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.a( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.987492561s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.493659973s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=50/51 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52 pruub=9.220784187s) [0] r=-1 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 unknown NOTIFY pruub 80.197731018s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.d( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.11( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.12( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.1c( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.18( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.7( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.010210991s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.516929626s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.004486084s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511199951s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.010194778s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.516929626s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.f( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.004452705s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511199951s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.010018349s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.516937256s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.f( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.010005951s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.516937256s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.c( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.2( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.9( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.2( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.1( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.5( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.5( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.8( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.c( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.5( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.4( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.e( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.3( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.d( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.003758430s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511100769s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.4( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.003741264s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511100769s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[5.2( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009728432s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517135620s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009714127s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517135620s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.003269196s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511169434s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.6( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.003253937s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511169434s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.2( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.1( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009210587s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517272949s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009196281s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517272949s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.985895157s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494079590s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.9( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.985882759s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494079590s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009025574s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517303467s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009015083s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517303467s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009683609s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.518089294s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.009670258s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.518089294s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.002605438s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511054993s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.8( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.002587318s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511054993s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.985510826s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494041443s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.c( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.985487938s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494041443s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.002223969s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.511001587s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.008686066s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517486572s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.9( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.002208710s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.511001587s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.6( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.008669853s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517486572s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.008549690s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.517425537s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.e( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.001059532s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.510971069s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.008515358s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.517425537s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.a( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.001038551s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.510971069s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.5( v 49'1442 (0'0,49'1442] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007639885s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 43'1441 active pruub 91.517669678s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.5( v 49'1442 (0'0,49'1442] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007615089s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 43'1441 unknown NOTIFY pruub 91.517669678s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.8( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007595062s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517730713s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.000730515s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.510902405s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1b( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007572174s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517730713s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.15( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=13.000719070s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.510902405s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983810425s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494094849s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007450104s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517738342s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.11( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983787537s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494094849s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007411957s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517738342s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007314682s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.517723083s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007303238s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.517723083s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.a( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.1b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.6( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.1c( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.1d( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983625412s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494155884s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.15( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.e( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.17( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.12( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983595848s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494155884s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007292747s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517959595s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007277489s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517959595s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983397484s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494087219s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007379532s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.518074036s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007365227s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.518074036s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.f( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983376503s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494087219s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983255386s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494087219s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.e( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983236313s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494087219s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007224083s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.518135071s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.11( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007243156s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.518157959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1f( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007205963s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.518135071s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.007203102s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.518157959s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.999672890s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.510734558s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983277321s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494338989s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.11( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.999653816s) [2] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.510734558s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.15( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.e( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.3( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.15( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983263969s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494338989s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[7.11( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[2.1f( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983279228s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494415283s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.16( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.983262062s) [2] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494415283s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.006961823s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.518165588s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.006938934s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.518165588s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[3.16( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.006893158s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 active pruub 91.518280029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[10.16( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.4( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.006875992s) [0] r=-1 lpr=52 pi=[48,52)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 91.518280029s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.005950928s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.517593384s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=48/49 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.005928040s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.517593384s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.982545853s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 active pruub 85.494430542s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[3.17( empty local-lis/les=42/43 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52 pruub=8.982522964s) [0] r=-1 lpr=52 pi=[42,52)/1 crt=0'0 unknown NOTIFY pruub 85.494430542s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.006501198s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 active pruub 91.518417358s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=48/49 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52 pruub=15.006483078s) [2] r=-1 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 unknown NOTIFY pruub 91.518417358s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.f( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 52 pg[8.1c( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.7( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.997146606s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 active pruub 89.510704041s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[7.13( empty local-lis/les=46/47 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52 pruub=12.997119904s) [0] r=-1 lpr=52 pi=[46,52)/1 crt=0'0 unknown NOTIFY pruub 89.510704041s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.f( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.b( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.3( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.4( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.1( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.a( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.9( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.5( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.6( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.6( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.9( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.1( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.9( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.9( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.2( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.a( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.13( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[10.14( empty local-lis/les=0/0 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[2.1b( empty local-lis/les=0/0 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.1a( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.19( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.4( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 52 pg[5.18( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.c( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.f( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.b( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.9( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.6( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.1a( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.12( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.18( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.f( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.1f( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.15( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[8.1d( empty local-lis/les=0/0 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[3.17( empty local-lis/les=0/0 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:01 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 52 pg[7.13( empty local-lis/les=0/0 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Dec 13 03:46:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Dec 13 03:46:02 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Dec 13 03:46:02 compute-0 ceph-mon[75071]: pgmap v111: 274 pgs: 1 active+clean+scrubbing, 273 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:46:02 compute-0 ceph-mon[75071]: osdmap e52: 3 total, 3 up, 3 in
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.16( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=-1 lpr=53 pi=[48,53)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.15( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.11( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.2( v 35'6 (0'0,35'6] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.1( v 42'18 (0'0,42'18] local-lis/les=52/53 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.15( v 51'19 lc 39'3 (0'0,51'19] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.e( v 51'19 lc 39'4 (0'0,51'19] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.10( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.d( v 51'19 lc 39'5 (0'0,51'19] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.17( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.9( v 51'19 lc 39'8 (0'0,51'19] local-lis/les=52/53 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.b( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.8( v 42'18 (0'0,42'18] local-lis/les=52/53 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.5( v 49'1442 (0'0,49'1442] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.5( v 49'1442 (0'0,49'1442] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.2( v 42'18 (0'0,42'18] local-lis/les=52/53 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.d( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.4( v 35'6 (0'0,35'6] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.1b( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.1c( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [2] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[8.12( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [2] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [2] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [2] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.13( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.14( v 51'19 lc 39'7 (0'0,51'19] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.12( v 51'19 lc 42'17 (0'0,51'19] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=51'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.11( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.10( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.f( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=52/53 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.d( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.1( v 36'39 (0'0,36'39] local-lis/les=52/53 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.5( v 36'39 lc 35'11 (0'0,36'39] local-lis/les=52/53 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.b( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.7( v 36'39 lc 35'20 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.19( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.1a( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[10.6( v 42'18 (0'0,42'18] local-lis/les=52/53 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [1] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.4( v 42'18 (0'0,42'18] local-lis/les=52/53 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.6( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=52/53 n=1 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.9( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.7( v 42'18 (0'0,42'18] local-lis/les=52/53 n=1 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.e( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.c( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[10.1e( v 42'18 (0'0,42'18] local-lis/les=52/53 n=0 ec=50/38 lis/c=50/50 les/c/f=51/51/0 sis=52) [0] r=0 lpr=52 pi=[50,52)/1 crt=42'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.1d( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [0] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.1f( v 35'6 lc 0'0 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.18( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.1a( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[8.14( v 35'6 (0'0,35'6] local-lis/les=52/53 n=0 ec=48/34 lis/c=48/48 les/c/f=49/49/0 sis=52) [0] r=0 lpr=52 pi=[48,52)/1 crt=35'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=52/53 n=0 ec=42/20 lis/c=42/42 les/c/f=43/43/0 sis=52) [0] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=52/53 n=0 ec=46/25 lis/c=46/46 les/c/f=47/47/0 sis=52) [0] r=0 lpr=52 pi=[46,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event dab29819-9889-41b8-9ccd-0e48e340257c (Global Recovery Event) in 25 seconds
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=52/53 n=0 ec=42/18 lis/c=42/42 les/c/f=43/43/0 sis=52) [1] r=0 lpr=52 pi=[42,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=52) [1] r=0 lpr=52 pi=[46,52)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=52/53 n=0 ec=44/22 lis/c=44/44 les/c/f=46/46/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=52/53 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v114: 274 pgs: 1 active+clean+scrubbing, 273 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 13 03:46:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 03:46:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Dec 13 03:46:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 03:46:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Dec 13 03:46:03 compute-0 ceph-mon[75071]: osdmap e53: 3 total, 3 up, 3 in
Dec 13 03:46:03 compute-0 ceph-mon[75071]: pgmap v114: 274 pgs: 1 active+clean+scrubbing, 273 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 03:46:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Dec 13 03:46:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 03:46:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 03:46:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Dec 13 03:46:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.777422905s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 active pruub 96.097564697s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.777386665s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097564697s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.776954651s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 active pruub 96.097610474s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.6( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.776910782s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097610474s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.776591301s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 active pruub 96.097557068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.778010368s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 active pruub 96.098991394s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.e( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.777983665s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.098991394s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 54 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54 pruub=10.776543617s) [1] r=-1 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 96.097557068s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[6.6( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[6.e( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[6.2( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=10}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=13}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.5( v 49'1442 (0'0,49'1442] local-lis/les=53/54 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=49'1442 lcod 43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:03 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 54 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=53) [0]/[1] async=[0] r=0 lpr=53 pi=[48,53)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Dec 13 03:46:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 03:46:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Dec 13 03:46:04 compute-0 ceph-mon[75071]: osdmap e54: 3 total, 3 up, 3 in
Dec 13 03:46:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Dec 13 03:46:04 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.031196594s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.739189148s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.031107903s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739189148s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.030695915s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.739013672s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.031394958s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.739761353s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.030585289s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739013672s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.031311989s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739761353s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.031197548s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.739700317s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.031131744s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739700317s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.030672073s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.739562988s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.030622482s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739562988s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.029796600s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.739013672s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.029729843s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739013672s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.029274940s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.738830566s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.029200554s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.738830566s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.029256821s) [0] async=[0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 active pruub 94.739105225s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55 pruub=15.029197693s) [0] r=-1 lpr=55 pi=[48,55)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739105225s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[6.6( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=54/55 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[6.2( v 36'39 (0'0,36'39] local-lis/les=54/55 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=54/55 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:04 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 55 pg[6.e( v 36'39 lc 35'19 (0'0,36'39] local-lis/les=54/55 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=54) [1] r=0 lpr=54 pi=[46,54)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 55 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v117: 274 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 4 peering, 254 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/281 objects misplaced (41.637%); 610 B/s, 2 keys/s, 5 objects/s recovering
Dec 13 03:46:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Dec 13 03:46:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Dec 13 03:46:05 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.025600433s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 active pruub 94.739868164s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.5( v 54'1444 (0'0,54'1444] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=49'1442 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.5( v 54'1444 (0'0,54'1444] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=49'1442 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.025524139s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739868164s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.024972916s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 active pruub 94.739921570s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.024902344s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739921570s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.024792671s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 active pruub 94.740089417s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-mon[75071]: osdmap e55: 3 total, 3 up, 3 in
Dec 13 03:46:05 compute-0 ceph-mon[75071]: pgmap v117: 274 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 4 peering, 254 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/281 objects misplaced (41.637%); 610 B/s, 2 keys/s, 5 objects/s recovering
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.024648666s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.740089417s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.023960114s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 active pruub 94.739807129s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.023906708s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739807129s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.022077560s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 active pruub 94.738830566s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.023015022s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 active pruub 94.739723206s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.022020340s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.738830566s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.021965027s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 active pruub 94.738876343s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.021936417s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.738876343s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.022861481s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 94.739723206s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.5( v 54'1444 (0'0,54'1444] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.022700310s) [0] async=[0] r=-1 lpr=56 pi=[48,56)/1 crt=49'1442 lcod 54'1443 active pruub 94.739906311s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:05 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 56 pg[9.5( v 54'1444 (0'0,54'1444] local-lis/les=53/54 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56 pruub=14.022546768s) [0] r=-1 lpr=56 pi=[48,56)/1 crt=49'1442 lcod 54'1443 unknown NOTIFY pruub 94.739906311s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.d( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.1( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.9( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.1b( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 56 pg[9.3( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=55) [0] r=0 lpr=55 pi=[48,55)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:05 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Dec 13 03:46:05 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Dec 13 03:46:06 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Dec 13 03:46:06 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Dec 13 03:46:06 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Dec 13 03:46:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Dec 13 03:46:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Dec 13 03:46:06 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Dec 13 03:46:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Dec 13 03:46:06 compute-0 ceph-mon[75071]: osdmap e56: 3 total, 3 up, 3 in
Dec 13 03:46:06 compute-0 ceph-mon[75071]: 8.17 scrub starts
Dec 13 03:46:06 compute-0 ceph-mon[75071]: 8.17 scrub ok
Dec 13 03:46:06 compute-0 ceph-mon[75071]: osdmap e57: 3 total, 3 up, 3 in
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.5( v 54'1444 (0'0,54'1444] local-lis/les=56/57 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=54'1444 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.b( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.13( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.11( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.1d( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.19( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 57 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=8 ec=48/36 lis/c=53/48 les/c/f=54/49/0 sis=56) [0] r=0 lpr=56 pi=[48,56)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v120: 274 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 4 peering, 254 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/281 objects misplaced (41.637%); 610 B/s, 2 keys/s, 5 objects/s recovering
Dec 13 03:46:07 compute-0 ceph-mgr[75360]: [progress INFO root] Writing back 15 completed events
Dec 13 03:46:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 03:46:07 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:07 compute-0 ceph-mon[75071]: 4.1e scrub starts
Dec 13 03:46:07 compute-0 ceph-mon[75071]: 4.1e scrub ok
Dec 13 03:46:07 compute-0 ceph-mon[75071]: 2.1a scrub starts
Dec 13 03:46:07 compute-0 ceph-mon[75071]: 2.1a scrub ok
Dec 13 03:46:07 compute-0 ceph-mon[75071]: pgmap v120: 274 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 4 peering, 254 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 117/281 objects misplaced (41.637%); 610 B/s, 2 keys/s, 5 objects/s recovering
Dec 13 03:46:07 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e57 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v121: 274 pgs: 4 peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s, 1 keys/s, 27 objects/s recovering
Dec 13 03:46:08 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Dec 13 03:46:08 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Dec 13 03:46:09 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.b scrub starts
Dec 13 03:46:09 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.b scrub ok
Dec 13 03:46:09 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Dec 13 03:46:09 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Dec 13 03:46:09 compute-0 ceph-mon[75071]: pgmap v121: 274 pgs: 4 peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s, 1 keys/s, 27 objects/s recovering
Dec 13 03:46:09 compute-0 ceph-mon[75071]: 8.16 scrub starts
Dec 13 03:46:09 compute-0 ceph-mon[75071]: 8.16 scrub ok
Dec 13 03:46:10 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Dec 13 03:46:10 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Dec 13 03:46:10 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Dec 13 03:46:10 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Dec 13 03:46:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v122: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 700 B/s, 19 objects/s recovering
Dec 13 03:46:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 13 03:46:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 03:46:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Dec 13 03:46:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 03:46:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Dec 13 03:46:10 compute-0 ceph-mon[75071]: 4.b scrub starts
Dec 13 03:46:10 compute-0 ceph-mon[75071]: 4.b scrub ok
Dec 13 03:46:10 compute-0 ceph-mon[75071]: 7.1e scrub starts
Dec 13 03:46:10 compute-0 ceph-mon[75071]: 7.1e scrub ok
Dec 13 03:46:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 03:46:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Dec 13 03:46:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 03:46:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 03:46:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Dec 13 03:46:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=52/53 n=2 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.542201042s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 active pruub 101.671554565s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.542240143s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 active pruub 101.671699524s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.542169571s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 unknown NOTIFY pruub 101.671699524s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.3( v 36'39 (0'0,36'39] local-lis/les=52/53 n=2 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.541931152s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 unknown NOTIFY pruub 101.671554565s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.588303566s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 active pruub 101.718109131s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.542022705s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 active pruub 101.671813965s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.588283539s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 unknown NOTIFY pruub 101.718109131s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:10 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 58 pg[6.7( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58 pruub=15.541945457s) [0] r=-1 lpr=58 pi=[52,58)/1 crt=36'39 unknown NOTIFY pruub 101.671813965s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:10 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 58 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:10 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 58 pg[6.3( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:10 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 58 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:10 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 58 pg[6.7( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Dec 13 03:46:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Dec 13 03:46:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Dec 13 03:46:11 compute-0 ceph-mon[75071]: 4.19 scrub starts
Dec 13 03:46:11 compute-0 ceph-mon[75071]: 4.19 scrub ok
Dec 13 03:46:11 compute-0 ceph-mon[75071]: 10.1f scrub starts
Dec 13 03:46:11 compute-0 ceph-mon[75071]: 10.1f scrub ok
Dec 13 03:46:11 compute-0 ceph-mon[75071]: pgmap v122: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 700 B/s, 19 objects/s recovering
Dec 13 03:46:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 03:46:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Dec 13 03:46:11 compute-0 ceph-mon[75071]: osdmap e58: 3 total, 3 up, 3 in
Dec 13 03:46:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Dec 13 03:46:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Dec 13 03:46:11 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 59 pg[6.7( v 36'39 lc 35'20 (0'0,36'39] local-lis/les=58/59 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:11 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 59 pg[6.3( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=58/59 n=2 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=36'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:11 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 59 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=58/59 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:11 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 59 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=58/59 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=58) [0] r=0 lpr=58 pi=[52,58)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:12 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Dec 13 03:46:12 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Dec 13 03:46:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:12 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Dec 13 03:46:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v125: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 701 B/s, 19 objects/s recovering
Dec 13 03:46:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 13 03:46:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 03:46:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Dec 13 03:46:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 03:46:12 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Dec 13 03:46:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Dec 13 03:46:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 03:46:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 03:46:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Dec 13 03:46:12 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Dec 13 03:46:12 compute-0 ceph-mon[75071]: 4.3 scrub starts
Dec 13 03:46:12 compute-0 ceph-mon[75071]: 4.3 scrub ok
Dec 13 03:46:12 compute-0 ceph-mon[75071]: osdmap e59: 3 total, 3 up, 3 in
Dec 13 03:46:12 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 03:46:12 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Dec 13 03:46:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:13 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Dec 13 03:46:13 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Dec 13 03:46:13 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Dec 13 03:46:13 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Dec 13 03:46:13 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 60 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.666811943s) [1] r=-1 lpr=60 pi=[46,60)/1 crt=36'39 lcod 0'0 active pruub 104.097671509s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:13 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 60 pg[6.c( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.666774750s) [1] r=-1 lpr=60 pi=[46,60)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 104.097671509s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:13 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 60 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.666490555s) [1] r=-1 lpr=60 pi=[46,60)/1 crt=36'39 lcod 0'0 active pruub 104.097541809s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:13 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 60 pg[6.4( v 36'39 (0'0,36'39] local-lis/les=46/47 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60 pruub=8.666425705s) [1] r=-1 lpr=60 pi=[46,60)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 104.097541809s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:13 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:13 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[46,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Dec 13 03:46:13 compute-0 ceph-mon[75071]: 4.0 scrub starts
Dec 13 03:46:13 compute-0 ceph-mon[75071]: 4.0 scrub ok
Dec 13 03:46:13 compute-0 ceph-mon[75071]: 5.10 scrub starts
Dec 13 03:46:13 compute-0 ceph-mon[75071]: pgmap v125: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 701 B/s, 19 objects/s recovering
Dec 13 03:46:13 compute-0 ceph-mon[75071]: 5.10 scrub ok
Dec 13 03:46:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 03:46:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Dec 13 03:46:13 compute-0 ceph-mon[75071]: osdmap e60: 3 total, 3 up, 3 in
Dec 13 03:46:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Dec 13 03:46:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Dec 13 03:46:13 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 61 pg[6.4( v 36'39 lc 35'15 (0'0,36'39] local-lis/les=60/61 n=2 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[46,60)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:13 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 61 pg[6.c( v 36'39 lc 35'17 (0'0,36'39] local-lis/les=60/61 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[46,60)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.c scrub starts
Dec 13 03:46:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.c scrub ok
Dec 13 03:46:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v128: 274 pgs: 2 peering, 272 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Dec 13 03:46:14 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Dec 13 03:46:14 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Dec 13 03:46:14 compute-0 ceph-mon[75071]: 7.19 scrub starts
Dec 13 03:46:14 compute-0 ceph-mon[75071]: 7.19 scrub ok
Dec 13 03:46:14 compute-0 ceph-mon[75071]: 5.1f scrub starts
Dec 13 03:46:14 compute-0 ceph-mon[75071]: 5.1f scrub ok
Dec 13 03:46:14 compute-0 ceph-mon[75071]: osdmap e61: 3 total, 3 up, 3 in
Dec 13 03:46:15 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Dec 13 03:46:15 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Dec 13 03:46:15 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Dec 13 03:46:15 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Dec 13 03:46:15 compute-0 ceph-mon[75071]: 4.c scrub starts
Dec 13 03:46:15 compute-0 ceph-mon[75071]: 4.c scrub ok
Dec 13 03:46:15 compute-0 ceph-mon[75071]: pgmap v128: 274 pgs: 2 peering, 272 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 161 B/s, 2 keys/s, 1 objects/s recovering
Dec 13 03:46:15 compute-0 ceph-mon[75071]: 10.1d scrub starts
Dec 13 03:46:15 compute-0 ceph-mon[75071]: 10.1d scrub ok
Dec 13 03:46:16 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Dec 13 03:46:16 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Dec 13 03:46:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v129: 274 pgs: 2 peering, 272 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 1 keys/s, 1 objects/s recovering
Dec 13 03:46:16 compute-0 ceph-mon[75071]: 7.1d scrub starts
Dec 13 03:46:16 compute-0 ceph-mon[75071]: 7.1d scrub ok
Dec 13 03:46:16 compute-0 ceph-mon[75071]: 2.14 scrub starts
Dec 13 03:46:16 compute-0 ceph-mon[75071]: 2.14 scrub ok
Dec 13 03:46:17 compute-0 ceph-mon[75071]: 4.15 scrub starts
Dec 13 03:46:17 compute-0 ceph-mon[75071]: 4.15 scrub ok
Dec 13 03:46:17 compute-0 ceph-mon[75071]: pgmap v129: 274 pgs: 2 peering, 272 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 109 B/s, 1 keys/s, 1 objects/s recovering
Dec 13 03:46:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v130: 274 pgs: 2 peering, 272 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 1 keys/s, 1 objects/s recovering
Dec 13 03:46:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Dec 13 03:46:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Dec 13 03:46:19 compute-0 ceph-mon[75071]: pgmap v130: 274 pgs: 2 peering, 272 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 93 B/s, 1 keys/s, 1 objects/s recovering
Dec 13 03:46:19 compute-0 ceph-mon[75071]: 8.13 scrub starts
Dec 13 03:46:19 compute-0 ceph-mon[75071]: 8.13 scrub ok
Dec 13 03:46:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v131: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 283 B/s, 1 keys/s, 1 objects/s recovering
Dec 13 03:46:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 13 03:46:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 03:46:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Dec 13 03:46:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 03:46:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Dec 13 03:46:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 03:46:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Dec 13 03:46:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 03:46:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 03:46:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Dec 13 03:46:20 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Dec 13 03:46:20 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 62 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=13.730964661s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=36'39 active pruub 109.671997070s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:20 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 62 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=13.730881691s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=36'39 unknown NOTIFY pruub 109.671997070s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:20 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 62 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=52/53 n=2 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=13.730392456s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=36'39 active pruub 109.672103882s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:20 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 62 pg[6.5( v 36'39 (0'0,36'39] local-lis/les=52/53 n=2 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62 pruub=13.730179787s) [0] r=-1 lpr=62 pi=[52,62)/1 crt=36'39 unknown NOTIFY pruub 109.672103882s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:20 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:20 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:21 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Dec 13 03:46:21 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Dec 13 03:46:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Dec 13 03:46:21 compute-0 ceph-mon[75071]: pgmap v131: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 283 B/s, 1 keys/s, 1 objects/s recovering
Dec 13 03:46:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 03:46:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Dec 13 03:46:21 compute-0 ceph-mon[75071]: osdmap e62: 3 total, 3 up, 3 in
Dec 13 03:46:21 compute-0 ceph-mon[75071]: 4.16 scrub starts
Dec 13 03:46:21 compute-0 ceph-mon[75071]: 4.16 scrub ok
Dec 13 03:46:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Dec 13 03:46:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Dec 13 03:46:21 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 63 pg[6.5( v 36'39 lc 35'11 (0'0,36'39] local-lis/les=62/63 n=2 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:21 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 63 pg[6.d( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=62/63 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=62) [0] r=0 lpr=62 pi=[52,62)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:22 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Dec 13 03:46:22 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Dec 13 03:46:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v134: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 202 B/s, 0 objects/s recovering
Dec 13 03:46:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 13 03:46:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 03:46:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Dec 13 03:46:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 03:46:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Dec 13 03:46:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 03:46:22 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 03:46:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Dec 13 03:46:22 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Dec 13 03:46:22 compute-0 ceph-mon[75071]: osdmap e63: 3 total, 3 up, 3 in
Dec 13 03:46:22 compute-0 ceph-mon[75071]: 4.17 scrub starts
Dec 13 03:46:22 compute-0 ceph-mon[75071]: 4.17 scrub ok
Dec 13 03:46:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 03:46:22 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Dec 13 03:46:23 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Dec 13 03:46:23 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Dec 13 03:46:23 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Dec 13 03:46:23 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Dec 13 03:46:23 compute-0 ceph-mon[75071]: pgmap v134: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 202 B/s, 0 objects/s recovering
Dec 13 03:46:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 03:46:23 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Dec 13 03:46:23 compute-0 ceph-mon[75071]: osdmap e64: 3 total, 3 up, 3 in
Dec 13 03:46:23 compute-0 ceph-mon[75071]: 10.16 scrub starts
Dec 13 03:46:23 compute-0 ceph-mon[75071]: 10.16 scrub ok
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.494946480s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 active pruub 107.516220093s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.494763374s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 107.516220093s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.495175362s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 active pruub 107.517112732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.495141029s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 107.517112732s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:23 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64) [2] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.494986534s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 active pruub 107.517532349s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.494823456s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 107.517532349s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:23 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64) [2] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.494819641s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 active pruub 107.518280029s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:23 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 64 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64 pruub=8.494789124s) [2] r=-1 lpr=64 pi=[48,64)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 107.518280029s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:23 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64) [2] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:23 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=64) [2] r=0 lpr=64 pi=[48,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:24 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Dec 13 03:46:24 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Dec 13 03:46:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v136: 274 pgs: 4 unknown, 270 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 296 B/s, 1 objects/s recovering
Dec 13 03:46:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Dec 13 03:46:24 compute-0 ceph-mon[75071]: 10.1b scrub starts
Dec 13 03:46:24 compute-0 ceph-mon[75071]: 10.1b scrub ok
Dec 13 03:46:24 compute-0 ceph-mon[75071]: 10.1 scrub starts
Dec 13 03:46:24 compute-0 ceph-mon[75071]: 10.1 scrub ok
Dec 13 03:46:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Dec 13 03:46:24 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[48,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:24 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 65 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:25 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Dec 13 03:46:25 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Dec 13 03:46:25 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Dec 13 03:46:25 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Dec 13 03:46:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Dec 13 03:46:25 compute-0 ceph-mon[75071]: pgmap v136: 274 pgs: 4 unknown, 270 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 296 B/s, 1 objects/s recovering
Dec 13 03:46:25 compute-0 ceph-mon[75071]: osdmap e65: 3 total, 3 up, 3 in
Dec 13 03:46:25 compute-0 ceph-mon[75071]: 5.1e scrub starts
Dec 13 03:46:25 compute-0 ceph-mon[75071]: 5.1e scrub ok
Dec 13 03:46:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Dec 13 03:46:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Dec 13 03:46:25 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 66 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:25 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 66 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:25 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 66 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:25 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 66 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[48,65)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v139: 274 pgs: 4 unknown, 270 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Dec 13 03:46:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Dec 13 03:46:26 compute-0 ceph-mon[75071]: 10.1c scrub starts
Dec 13 03:46:26 compute-0 ceph-mon[75071]: 10.1c scrub ok
Dec 13 03:46:26 compute-0 ceph-mon[75071]: osdmap e66: 3 total, 3 up, 3 in
Dec 13 03:46:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Dec 13 03:46:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 67 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=15.006425858s) [2] async=[2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 active pruub 117.038230896s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=15.006225586s) [2] async=[2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 active pruub 117.038093567s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=15.006352425s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 117.038230896s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=15.006146431s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 117.038093567s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=15.005911827s) [2] async=[2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 active pruub 117.038131714s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=14.990968704s) [2] async=[2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 active pruub 117.023330688s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=14.990902901s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 117.023330688s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 67 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=65/66 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67 pruub=15.005547523s) [2] r=-1 lpr=67 pi=[48,67)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 117.038131714s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:27 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Dec 13 03:46:27 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Dec 13 03:46:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e67 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Dec 13 03:46:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Dec 13 03:46:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Dec 13 03:46:27 compute-0 ceph-mon[75071]: pgmap v139: 274 pgs: 4 unknown, 270 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 33 B/s, 0 objects/s recovering
Dec 13 03:46:27 compute-0 ceph-mon[75071]: osdmap e67: 3 total, 3 up, 3 in
Dec 13 03:46:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 68 pg[9.e( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 68 pg[9.1e( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 68 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=7 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 68 pg[9.6( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=8 ec=48/36 lis/c=65/48 les/c/f=66/49/0 sis=67) [2] r=0 lpr=67 pi=[48,67)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:28 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Dec 13 03:46:28 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Dec 13 03:46:28 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Dec 13 03:46:28 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Dec 13 03:46:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v142: 274 pgs: 4 unknown, 270 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:28 compute-0 ceph-mon[75071]: 2.12 scrub starts
Dec 13 03:46:28 compute-0 ceph-mon[75071]: 2.12 scrub ok
Dec 13 03:46:28 compute-0 ceph-mon[75071]: osdmap e68: 3 total, 3 up, 3 in
Dec 13 03:46:29 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Dec 13 03:46:29 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Dec 13 03:46:29 compute-0 ceph-mon[75071]: 7.7 scrub starts
Dec 13 03:46:29 compute-0 ceph-mon[75071]: 7.7 scrub ok
Dec 13 03:46:29 compute-0 ceph-mon[75071]: 10.18 scrub starts
Dec 13 03:46:29 compute-0 ceph-mon[75071]: 10.18 scrub ok
Dec 13 03:46:29 compute-0 ceph-mon[75071]: pgmap v142: 274 pgs: 4 unknown, 270 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:30 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Dec 13 03:46:30 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Dec 13 03:46:30 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.a scrub starts
Dec 13 03:46:30 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.a scrub ok
Dec 13 03:46:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v143: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.4 KiB/s wr, 82 op/s; 143 B/s, 4 objects/s recovering
Dec 13 03:46:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 13 03:46:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 03:46:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Dec 13 03:46:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 03:46:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Dec 13 03:46:30 compute-0 ceph-mon[75071]: 8.8 scrub starts
Dec 13 03:46:30 compute-0 ceph-mon[75071]: 8.8 scrub ok
Dec 13 03:46:30 compute-0 ceph-mon[75071]: 2.19 scrub starts
Dec 13 03:46:30 compute-0 ceph-mon[75071]: 2.19 scrub ok
Dec 13 03:46:30 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 03:46:30 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Dec 13 03:46:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 03:46:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 03:46:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Dec 13 03:46:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Dec 13 03:46:31 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Dec 13 03:46:31 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Dec 13 03:46:31 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Dec 13 03:46:31 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Dec 13 03:46:31 compute-0 ceph-mon[75071]: 8.a scrub starts
Dec 13 03:46:31 compute-0 ceph-mon[75071]: 8.a scrub ok
Dec 13 03:46:31 compute-0 ceph-mon[75071]: pgmap v143: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 42 KiB/s rd, 1.4 KiB/s wr, 82 op/s; 143 B/s, 4 objects/s recovering
Dec 13 03:46:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 03:46:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Dec 13 03:46:31 compute-0 ceph-mon[75071]: osdmap e69: 3 total, 3 up, 3 in
Dec 13 03:46:32 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Dec 13 03:46:32 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Dec 13 03:46:32 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Dec 13 03:46:32 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Dec 13 03:46:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v145: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.3 KiB/s wr, 79 op/s; 138 B/s, 4 objects/s recovering
Dec 13 03:46:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 13 03:46:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 03:46:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Dec 13 03:46:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=8 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=13.777519226s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=43'1441 active pruub 128.309600830s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=8 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=69 pruub=13.777482033s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=43'1441 unknown NOTIFY pruub 128.309600830s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=12.773877144s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=43'1441 active pruub 127.306816101s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=12.773738861s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=43'1441 unknown NOTIFY pruub 127.306816101s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=12.773519516s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=43'1441 active pruub 127.306800842s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=12.773502350s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=43'1441 unknown NOTIFY pruub 127.306800842s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=12.773299217s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=43'1441 active pruub 127.306816101s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 69 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69 pruub=12.773227692s) [2] r=-1 lpr=69 pi=[55,69)/1 crt=43'1441 unknown NOTIFY pruub 127.306816101s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 69 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 69 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 69 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 69 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=69) [2] r=0 lpr=69 pi=[55,69)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Dec 13 03:46:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 03:46:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 03:46:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Dec 13 03:46:32 compute-0 sudo[98552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oidusdizbevudrwwqdokgntpqzntdqpb ; /usr/bin/python3'
Dec 13 03:46:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=8 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[0] r=0 lpr=70 pi=[56,70)/1 crt=43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=70 pruub=13.534542084s) [2] r=-1 lpr=70 pi=[46,70)/1 crt=36'39 lcod 0'0 active pruub 128.098815918s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=8 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[0] r=0 lpr=70 pi=[56,70)/1 crt=43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=46/47 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=70 pruub=13.534468651s) [2] r=-1 lpr=70 pi=[46,70)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 128.098815918s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=8 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 70 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=55/56 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 sudo[98552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[6.8( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[55,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[55,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.17( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[55,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[55,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[56,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.7( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[56,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[55,70)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:32 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] r=-1 lpr=70 pi=[55,70)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:32 compute-0 ceph-mon[75071]: 8.3 scrub starts
Dec 13 03:46:32 compute-0 ceph-mon[75071]: 8.3 scrub ok
Dec 13 03:46:32 compute-0 ceph-mon[75071]: 5.8 scrub starts
Dec 13 03:46:32 compute-0 ceph-mon[75071]: 5.8 scrub ok
Dec 13 03:46:32 compute-0 ceph-mon[75071]: 2.18 scrub starts
Dec 13 03:46:32 compute-0 ceph-mon[75071]: 2.18 scrub ok
Dec 13 03:46:32 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 03:46:32 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Dec 13 03:46:32 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 03:46:32 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Dec 13 03:46:32 compute-0 ceph-mon[75071]: osdmap e70: 3 total, 3 up, 3 in
Dec 13 03:46:32 compute-0 python3[98554]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:46:32 compute-0 podman[98555]: 2025-12-13 03:46:32.86474204 +0000 UTC m=+0.044836756 container create 983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f (image=quay.io/ceph/ceph:v20, name=boring_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:46:32 compute-0 systemd[1]: Started libpod-conmon-983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f.scope.
Dec 13 03:46:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d1f88aff1e2c41bf4dfe67cfa11b77e9fd66316f52467f42b543ba927d07c7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64d1f88aff1e2c41bf4dfe67cfa11b77e9fd66316f52467f42b543ba927d07c7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:32 compute-0 podman[98555]: 2025-12-13 03:46:32.939721845 +0000 UTC m=+0.119816591 container init 983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f (image=quay.io/ceph/ceph:v20, name=boring_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:46:32 compute-0 podman[98555]: 2025-12-13 03:46:32.847002886 +0000 UTC m=+0.027097622 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:46:32 compute-0 podman[98555]: 2025-12-13 03:46:32.947534936 +0000 UTC m=+0.127629652 container start 983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f (image=quay.io/ceph/ceph:v20, name=boring_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:46:32 compute-0 podman[98555]: 2025-12-13 03:46:32.950838033 +0000 UTC m=+0.130932749 container attach 983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f (image=quay.io/ceph/ceph:v20, name=boring_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 70 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=70 pruub=15.194111824s) [2] r=-1 lpr=70 pi=[48,70)/1 crt=43'1441 lcod 0'0 active pruub 123.517707825s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 70 pg[9.18( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=70 pruub=15.194325447s) [2] r=-1 lpr=70 pi=[48,70)/1 crt=43'1441 lcod 0'0 active pruub 123.518325806s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 70 pg[9.18( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=70 pruub=15.194289207s) [2] r=-1 lpr=70 pi=[48,70)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 123.518325806s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 70 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=70 pruub=15.193553925s) [2] r=-1 lpr=70 pi=[48,70)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 123.517707825s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:33 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=70) [2] r=0 lpr=70 pi=[48,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:33 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 70 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=70) [2] r=0 lpr=70 pi=[48,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:33 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Dec 13 03:46:33 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Dec 13 03:46:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Dec 13 03:46:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Dec 13 03:46:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Dec 13 03:46:33 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[48,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:33 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[48,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:33 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[48,71)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:33 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=-1 lpr=71 pi=[48,71)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 71 pg[9.18( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=0 lpr=71 pi=[48,71)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 71 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=0 lpr=71 pi=[48,71)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 71 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=0 lpr=71 pi=[48,71)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:33 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 71 pg[6.8( v 36'39 (0'0,36'39] local-lis/les=70/71 n=1 ec=46/24 lis/c=46/46 les/c/f=47/47/0 sis=70) [2] r=0 lpr=70 pi=[46,70)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:33 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 71 pg[9.18( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] r=0 lpr=71 pi=[48,71)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:33 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 71 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:33 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 71 pg[9.7( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=8 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[56,70)/1 crt=43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:33 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 71 pg[9.f( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=8 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:33 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 71 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=7 ec=48/36 lis/c=55/55 les/c/f=56/56/0 sis=70) [2]/[0] async=[2] r=0 lpr=70 pi=[55,70)/1 crt=43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:33 compute-0 ceph-mon[75071]: 8.1 scrub starts
Dec 13 03:46:33 compute-0 ceph-mon[75071]: 8.1 scrub ok
Dec 13 03:46:33 compute-0 ceph-mon[75071]: pgmap v145: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.3 KiB/s wr, 79 op/s; 138 B/s, 4 objects/s recovering
Dec 13 03:46:33 compute-0 ceph-mon[75071]: osdmap e71: 3 total, 3 up, 3 in
Dec 13 03:46:34 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Dec 13 03:46:34 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Dec 13 03:46:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v148: 274 pgs: 4 remapped+peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1023 B/s wr, 86 op/s; 138 B/s, 4 objects/s recovering
Dec 13 03:46:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Dec 13 03:46:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Dec 13 03:46:34 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.f( v 71'1442 (0'0,71'1442] local-lis/les=0/0 n=8 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.f( v 71'1442 (0'0,71'1442] local-lis/les=0/0 n=8 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.7( v 71'1442 (0'0,71'1442] local-lis/les=0/0 n=8 ec=48/36 lis/c=70/56 les/c/f=71/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.7( v 71'1442 (0'0,71'1442] local-lis/les=0/0 n=8 ec=48/36 lis/c=70/56 les/c/f=71/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=15.002630234s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=43'1441 active pruub 131.585922241s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.7( v 71'1442 (0'0,71'1442] local-lis/les=70/71 n=8 ec=48/36 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=15.002400398s) [2] async=[2] r=-1 lpr=72 pi=[56,72)/1 crt=43'1441 lcod 43'1441 active pruub 131.585754395s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=15.002563477s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=43'1441 unknown NOTIFY pruub 131.585922241s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.7( v 71'1442 (0'0,71'1442] local-lis/les=70/71 n=8 ec=48/36 lis/c=70/56 les/c/f=71/57/0 sis=72 pruub=15.002313614s) [2] r=-1 lpr=72 pi=[56,72)/1 crt=43'1441 lcod 43'1441 unknown NOTIFY pruub 131.585754395s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.f( v 71'1442 (0'0,71'1442] local-lis/les=70/71 n=8 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=15.002227783s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=43'1441 lcod 43'1441 active pruub 131.585815430s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.f( v 71'1442 (0'0,71'1442] local-lis/les=70/71 n=8 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=15.002147675s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=43'1441 lcod 43'1441 unknown NOTIFY pruub 131.585815430s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 72 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=15.000947952s) [2] async=[2] r=-1 lpr=72 pi=[55,72)/1 crt=43'1441 active pruub 131.585632324s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:34 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 72 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=70/71 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72 pruub=15.000910759s) [2] r=-1 lpr=72 pi=[55,72)/1 crt=43'1441 unknown NOTIFY pruub 131.585632324s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:34 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 72 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=71/72 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[48,71)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:34 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 72 pg[9.18( v 43'1441 (0'0,43'1441] local-lis/les=71/72 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=71) [2]/[1] async=[2] r=0 lpr=71 pi=[48,71)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:34 compute-0 ceph-mon[75071]: 8.0 scrub starts
Dec 13 03:46:34 compute-0 ceph-mon[75071]: 8.0 scrub ok
Dec 13 03:46:34 compute-0 ceph-mon[75071]: osdmap e72: 3 total, 3 up, 3 in
Dec 13 03:46:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Dec 13 03:46:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Dec 13 03:46:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Dec 13 03:46:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Dec 13 03:46:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Dec 13 03:46:35 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 73 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=71/72 n=8 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73 pruub=15.002012253s) [2] async=[2] r=-1 lpr=73 pi=[48,73)/1 crt=43'1441 lcod 0'0 active pruub 126.000335693s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:35 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 73 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=71/72 n=8 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73 pruub=15.001916885s) [2] r=-1 lpr=73 pi=[48,73)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 126.000335693s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:35 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 73 pg[9.18( v 72'1442 (0'0,72'1442] local-lis/les=71/72 n=7 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73 pruub=15.001811028s) [2] async=[2] r=-1 lpr=73 pi=[48,73)/1 crt=43'1441 lcod 43'1441 active pruub 126.000350952s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:35 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 73 pg[9.18( v 72'1442 (0'0,72'1442] local-lis/les=71/72 n=7 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73 pruub=15.001726151s) [2] r=-1 lpr=73 pi=[48,73)/1 crt=43'1441 lcod 43'1441 unknown NOTIFY pruub 126.000350952s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.18( v 72'1442 (0'0,72'1442] local-lis/les=0/0 n=7 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.18( v 72'1442 (0'0,72'1442] local-lis/les=0/0 n=7 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=72/73 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.17( v 43'1441 (0'0,43'1441] local-lis/les=72/73 n=7 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.7( v 71'1442 (0'0,71'1442] local-lis/les=72/73 n=8 ec=48/36 lis/c=70/56 les/c/f=71/57/0 sis=72) [2] r=0 lpr=72 pi=[56,72)/1 crt=71'1442 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:35 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 73 pg[9.f( v 71'1442 (0'0,71'1442] local-lis/les=72/73 n=8 ec=48/36 lis/c=70/55 les/c/f=71/56/0 sis=72) [2] r=0 lpr=72 pi=[55,72)/1 crt=71'1442 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:35 compute-0 ceph-mon[75071]: 2.10 scrub starts
Dec 13 03:46:35 compute-0 ceph-mon[75071]: 2.10 scrub ok
Dec 13 03:46:35 compute-0 ceph-mon[75071]: pgmap v148: 274 pgs: 4 remapped+peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 46 KiB/s rd, 1023 B/s wr, 86 op/s; 138 B/s, 4 objects/s recovering
Dec 13 03:46:35 compute-0 ceph-mon[75071]: 7.1f scrub starts
Dec 13 03:46:35 compute-0 ceph-mon[75071]: 7.1f scrub ok
Dec 13 03:46:35 compute-0 ceph-mon[75071]: osdmap e73: 3 total, 3 up, 3 in
Dec 13 03:46:36 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.b scrub starts
Dec 13 03:46:36 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.b scrub ok
Dec 13 03:46:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v151: 274 pgs: 4 remapped+peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 11 op/s
Dec 13 03:46:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Dec 13 03:46:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Dec 13 03:46:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Dec 13 03:46:36 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 74 pg[9.8( v 43'1441 (0'0,43'1441] local-lis/les=73/74 n=8 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:36 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 74 pg[9.18( v 72'1442 (0'0,72'1442] local-lis/les=73/74 n=7 ec=48/36 lis/c=71/48 les/c/f=72/49/0 sis=73) [2] r=0 lpr=73 pi=[48,73)/1 crt=72'1442 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:36 compute-0 boring_brahmagupta[98570]: could not fetch user info: no user info saved
Dec 13 03:46:36 compute-0 systemd[1]: libpod-983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f.scope: Deactivated successfully.
Dec 13 03:46:36 compute-0 podman[98555]: 2025-12-13 03:46:36.755629034 +0000 UTC m=+3.935723750 container died 983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f (image=quay.io/ceph/ceph:v20, name=boring_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:46:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d1f88aff1e2c41bf4dfe67cfa11b77e9fd66316f52467f42b543ba927d07c7-merged.mount: Deactivated successfully.
Dec 13 03:46:36 compute-0 podman[98555]: 2025-12-13 03:46:36.895022582 +0000 UTC m=+4.075117298 container remove 983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f (image=quay.io/ceph/ceph:v20, name=boring_brahmagupta, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:46:36 compute-0 systemd[1]: libpod-conmon-983a6c6827e242992e2143bb9c5def945a49525eadc6f55b91d047dbf445707f.scope: Deactivated successfully.
Dec 13 03:46:36 compute-0 sudo[98552]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:37 compute-0 sudo[98689]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqzsuidkzbetftikhabhhndawhwrittq ; /usr/bin/python3'
Dec 13 03:46:37 compute-0 sudo[98689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:46:37 compute-0 python3[98691]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:46:37 compute-0 podman[98692]: 2025-12-13 03:46:37.334655909 +0000 UTC m=+0.043629390 container create efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82 (image=quay.io/ceph/ceph:v20, name=exciting_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:46:37 compute-0 systemd[1]: Started libpod-conmon-efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82.scope.
Dec 13 03:46:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee189794f6565e3da7904a9f7456841d13b64dfb2b65ba69e93be2a42d0d517a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee189794f6565e3da7904a9f7456841d13b64dfb2b65ba69e93be2a42d0d517a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:37 compute-0 podman[98692]: 2025-12-13 03:46:37.312499555 +0000 UTC m=+0.021473056 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Dec 13 03:46:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:37 compute-0 podman[98692]: 2025-12-13 03:46:37.84851915 +0000 UTC m=+0.557492661 container init efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82 (image=quay.io/ceph/ceph:v20, name=exciting_meitner, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:46:37 compute-0 podman[98692]: 2025-12-13 03:46:37.854743624 +0000 UTC m=+0.563717105 container start efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82 (image=quay.io/ceph/ceph:v20, name=exciting_meitner, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:46:37 compute-0 ceph-mon[75071]: 3.b scrub starts
Dec 13 03:46:37 compute-0 ceph-mon[75071]: 3.b scrub ok
Dec 13 03:46:37 compute-0 ceph-mon[75071]: pgmap v151: 274 pgs: 4 remapped+peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 11 op/s
Dec 13 03:46:37 compute-0 ceph-mon[75071]: osdmap e74: 3 total, 3 up, 3 in
Dec 13 03:46:37 compute-0 podman[98692]: 2025-12-13 03:46:37.921022611 +0000 UTC m=+0.629996112 container attach efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82 (image=quay.io/ceph/ceph:v20, name=exciting_meitner, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:46:38 compute-0 exciting_meitner[98707]: {
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "user_id": "openstack",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "display_name": "openstack",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "email": "",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "suspended": 0,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "max_buckets": 1000,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "subusers": [],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "keys": [
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         {
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:             "user": "openstack",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:             "access_key": "PTE4SXC4FHW0A412OZU8",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:             "secret_key": "mMvEhTpFMyt1JpVwv2taZkCnQJ5i2gIdDRaXEMmN",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:             "active": true,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:             "create_date": "2025-12-13T03:46:38.123626Z"
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         }
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     ],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "swift_keys": [],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "caps": [],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "op_mask": "read, write, delete",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "default_placement": "",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "default_storage_class": "",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "placement_tags": [],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "bucket_quota": {
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "enabled": false,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "check_on_raw": false,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "max_size": -1,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "max_size_kb": 0,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "max_objects": -1
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     },
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "user_quota": {
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "enabled": false,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "check_on_raw": false,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "max_size": -1,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "max_size_kb": 0,
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:         "max_objects": -1
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     },
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "temp_url_keys": [],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "type": "rgw",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "mfa_ids": [],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "account_id": "",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "path": "/",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "create_date": "2025-12-13T03:46:38.123319Z",
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "tags": [],
Dec 13 03:46:38 compute-0 exciting_meitner[98707]:     "group_ids": []
Dec 13 03:46:38 compute-0 exciting_meitner[98707]: }
Dec 13 03:46:38 compute-0 exciting_meitner[98707]: 
Dec 13 03:46:38 compute-0 systemd[1]: libpod-efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82.scope: Deactivated successfully.
Dec 13 03:46:38 compute-0 conmon[98707]: conmon efe30eda5356e39d820e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82.scope/container/memory.events
Dec 13 03:46:38 compute-0 podman[98692]: 2025-12-13 03:46:38.15860236 +0000 UTC m=+0.867575831 container died efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82 (image=quay.io/ceph/ceph:v20, name=exciting_meitner, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 03:46:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee189794f6565e3da7904a9f7456841d13b64dfb2b65ba69e93be2a42d0d517a-merged.mount: Deactivated successfully.
Dec 13 03:46:38 compute-0 podman[98692]: 2025-12-13 03:46:38.196987384 +0000 UTC m=+0.905960865 container remove efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82 (image=quay.io/ceph/ceph:v20, name=exciting_meitner, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:46:38 compute-0 systemd[1]: libpod-conmon-efe30eda5356e39d820e04def9c5ee6d7ed83f5465d30405d1ab79c88ed53c82.scope: Deactivated successfully.
Dec 13 03:46:38 compute-0 sudo[98689]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:38 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Dec 13 03:46:38 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Dec 13 03:46:38 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.e scrub starts
Dec 13 03:46:38 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.e scrub ok
Dec 13 03:46:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v153: 274 pgs: 4 remapped+peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 17 op/s
Dec 13 03:46:38 compute-0 ceph-mon[75071]: 5.3 scrub starts
Dec 13 03:46:38 compute-0 ceph-mon[75071]: 5.3 scrub ok
Dec 13 03:46:39 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Dec 13 03:46:39 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Dec 13 03:46:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Dec 13 03:46:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Dec 13 03:46:39 compute-0 ceph-mon[75071]: 2.e scrub starts
Dec 13 03:46:39 compute-0 ceph-mon[75071]: 2.e scrub ok
Dec 13 03:46:39 compute-0 ceph-mon[75071]: pgmap v153: 274 pgs: 4 remapped+peering, 270 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 17 op/s
Dec 13 03:46:39 compute-0 ceph-mon[75071]: 10.17 scrub starts
Dec 13 03:46:39 compute-0 ceph-mon[75071]: 10.17 scrub ok
Dec 13 03:46:40 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Dec 13 03:46:40 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Dec 13 03:46:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:46:40
Dec 13 03:46:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:46:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Some PGs (0.014599) are inactive; try again later
Dec 13 03:46:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v154: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 341 B/s wr, 39 op/s; 281 B/s, 7 objects/s recovering
Dec 13 03:46:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 13 03:46:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 03:46:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Dec 13 03:46:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 03:46:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Dec 13 03:46:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 03:46:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 03:46:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Dec 13 03:46:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Dec 13 03:46:40 compute-0 ceph-mon[75071]: 3.4 scrub starts
Dec 13 03:46:40 compute-0 ceph-mon[75071]: 3.4 scrub ok
Dec 13 03:46:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 03:46:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Dec 13 03:46:41 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 75 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=75 pruub=9.144985199s) [0] r=-1 lpr=75 pi=[52,75)/1 crt=36'39 lcod 0'0 active pruub 125.673072815s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:41 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 75 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=52/53 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=75 pruub=9.144834518s) [0] r=-1 lpr=75 pi=[52,75)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 125.673072815s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:41 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 75 pg[6.9( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=75) [0] r=0 lpr=75 pi=[52,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:41 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.a scrub starts
Dec 13 03:46:41 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.a scrub ok
Dec 13 03:46:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Dec 13 03:46:41 compute-0 ceph-mon[75071]: 10.5 scrub starts
Dec 13 03:46:41 compute-0 ceph-mon[75071]: 10.5 scrub ok
Dec 13 03:46:41 compute-0 ceph-mon[75071]: pgmap v154: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 341 B/s wr, 39 op/s; 281 B/s, 7 objects/s recovering
Dec 13 03:46:41 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 03:46:41 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Dec 13 03:46:41 compute-0 ceph-mon[75071]: osdmap e75: 3 total, 3 up, 3 in
Dec 13 03:46:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Dec 13 03:46:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Dec 13 03:46:41 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 76 pg[6.9( v 36'39 (0'0,36'39] local-lis/les=75/76 n=1 ec=46/24 lis/c=52/52 les/c/f=53/53/0 sis=75) [0] r=0 lpr=75 pi=[52,75)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:42 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Dec 13 03:46:42 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:46:42 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.c scrub starts
Dec 13 03:46:42 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.c scrub ok
Dec 13 03:46:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v157: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 341 B/s wr, 39 op/s; 281 B/s, 7 objects/s recovering
Dec 13 03:46:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 13 03:46:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 03:46:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Dec 13 03:46:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 03:46:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e76 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Dec 13 03:46:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 03:46:42 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 03:46:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Dec 13 03:46:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Dec 13 03:46:42 compute-0 ceph-mon[75071]: 5.a scrub starts
Dec 13 03:46:42 compute-0 ceph-mon[75071]: 5.a scrub ok
Dec 13 03:46:42 compute-0 ceph-mon[75071]: osdmap e76: 3 total, 3 up, 3 in
Dec 13 03:46:42 compute-0 ceph-mon[75071]: 7.4 scrub starts
Dec 13 03:46:42 compute-0 ceph-mon[75071]: 7.4 scrub ok
Dec 13 03:46:42 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 03:46:42 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Dec 13 03:46:43 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.b scrub starts
Dec 13 03:46:43 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.b scrub ok
Dec 13 03:46:43 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 77 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=54/55 n=1 ec=46/24 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=8.938514709s) [0] r=-1 lpr=77 pi=[54,77)/1 crt=36'39 lcod 0'0 active pruub 127.714454651s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:43 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 77 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=54/55 n=1 ec=46/24 lis/c=54/54 les/c/f=55/55/0 sis=77 pruub=8.938472748s) [0] r=-1 lpr=77 pi=[54,77)/1 crt=36'39 lcod 0'0 unknown NOTIFY pruub 127.714454651s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:43 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 77 pg[6.a( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=54/54 les/c/f=55/55/0 sis=77) [0] r=0 lpr=77 pi=[54,77)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Dec 13 03:46:43 compute-0 ceph-mon[75071]: 2.c scrub starts
Dec 13 03:46:43 compute-0 ceph-mon[75071]: 2.c scrub ok
Dec 13 03:46:43 compute-0 ceph-mon[75071]: pgmap v157: 274 pgs: 274 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 341 B/s wr, 39 op/s; 281 B/s, 7 objects/s recovering
Dec 13 03:46:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 03:46:43 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Dec 13 03:46:43 compute-0 ceph-mon[75071]: osdmap e77: 3 total, 3 up, 3 in
Dec 13 03:46:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Dec 13 03:46:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Dec 13 03:46:44 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 78 pg[6.a( v 36'39 (0'0,36'39] local-lis/les=77/78 n=1 ec=46/24 lis/c=54/54 les/c/f=55/55/0 sis=77) [0] r=0 lpr=77 pi=[54,77)/1 crt=36'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:44 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Dec 13 03:46:44 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Dec 13 03:46:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v160: 274 pgs: 274 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 13 03:46:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 03:46:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Dec 13 03:46:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 03:46:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Dec 13 03:46:45 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Dec 13 03:46:45 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Dec 13 03:46:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 03:46:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 03:46:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Dec 13 03:46:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Dec 13 03:46:45 compute-0 ceph-mon[75071]: 5.b scrub starts
Dec 13 03:46:45 compute-0 ceph-mon[75071]: 5.b scrub ok
Dec 13 03:46:45 compute-0 ceph-mon[75071]: osdmap e78: 3 total, 3 up, 3 in
Dec 13 03:46:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 03:46:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Dec 13 03:46:45 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 79 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=58/59 n=1 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=79 pruub=14.107523918s) [1] r=-1 lpr=79 pi=[58,79)/1 crt=36'39 active pruub 141.446731567s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:45 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 79 pg[6.b( v 36'39 (0'0,36'39] local-lis/les=58/59 n=1 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=79 pruub=14.107433319s) [1] r=-1 lpr=79 pi=[58,79)/1 crt=36'39 unknown NOTIFY pruub 141.446731567s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:45 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 79 pg[6.b( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=79) [1] r=0 lpr=79 pi=[58,79)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:46 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Dec 13 03:46:46 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Dec 13 03:46:46 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Dec 13 03:46:46 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Dec 13 03:46:46 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Dec 13 03:46:46 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Dec 13 03:46:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Dec 13 03:46:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v162: 274 pgs: 274 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 13 03:46:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 03:46:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Dec 13 03:46:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 03:46:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Dec 13 03:46:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Dec 13 03:46:46 compute-0 ceph-mon[75071]: 10.3 scrub starts
Dec 13 03:46:46 compute-0 ceph-mon[75071]: 10.3 scrub ok
Dec 13 03:46:46 compute-0 ceph-mon[75071]: pgmap v160: 274 pgs: 274 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:46 compute-0 ceph-mon[75071]: 7.0 scrub starts
Dec 13 03:46:46 compute-0 ceph-mon[75071]: 7.0 scrub ok
Dec 13 03:46:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 03:46:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Dec 13 03:46:46 compute-0 ceph-mon[75071]: osdmap e79: 3 total, 3 up, 3 in
Dec 13 03:46:46 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 80 pg[6.b( v 36'39 lc 0'0 (0'0,36'39] local-lis/les=79/80 n=1 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=79) [1] r=0 lpr=79 pi=[58,79)/1 crt=36'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:47 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.b scrub starts
Dec 13 03:46:47 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.b scrub ok
Dec 13 03:46:47 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Dec 13 03:46:47 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Dec 13 03:46:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Dec 13 03:46:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 03:46:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 03:46:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Dec 13 03:46:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Dec 13 03:46:47 compute-0 ceph-mon[75071]: 3.0 scrub starts
Dec 13 03:46:47 compute-0 ceph-mon[75071]: 3.0 scrub ok
Dec 13 03:46:47 compute-0 ceph-mon[75071]: 8.10 scrub starts
Dec 13 03:46:47 compute-0 ceph-mon[75071]: 8.10 scrub ok
Dec 13 03:46:47 compute-0 ceph-mon[75071]: 5.17 scrub starts
Dec 13 03:46:47 compute-0 ceph-mon[75071]: 5.17 scrub ok
Dec 13 03:46:47 compute-0 ceph-mon[75071]: pgmap v162: 274 pgs: 274 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 03:46:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Dec 13 03:46:47 compute-0 ceph-mon[75071]: osdmap e80: 3 total, 3 up, 3 in
Dec 13 03:46:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Dec 13 03:46:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Dec 13 03:46:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v165: 274 pgs: 274 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 13 03:46:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 03:46:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Dec 13 03:46:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 03:46:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Dec 13 03:46:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 03:46:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 03:46:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Dec 13 03:46:48 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Dec 13 03:46:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 82 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=62/63 n=1 ec=46/24 lis/c=62/62 les/c/f=63/63/0 sis=82 pruub=12.677890778s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=36'39 active pruub 143.560836792s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:48 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 82 pg[6.d( v 36'39 (0'0,36'39] local-lis/les=62/63 n=1 ec=46/24 lis/c=62/62 les/c/f=63/63/0 sis=82 pruub=12.677835464s) [1] r=-1 lpr=82 pi=[62,82)/1 crt=36'39 unknown NOTIFY pruub 143.560836792s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:48 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 82 pg[6.d( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=62/62 les/c/f=63/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:48 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 81 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=81 pruub=15.218059540s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=43'1441 lcod 0'0 active pruub 139.517776489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:48 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 82 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=81 pruub=15.218038559s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 139.517776489s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:48 compute-0 ceph-mon[75071]: 8.b scrub starts
Dec 13 03:46:48 compute-0 ceph-mon[75071]: 8.b scrub ok
Dec 13 03:46:48 compute-0 ceph-mon[75071]: 2.0 scrub starts
Dec 13 03:46:48 compute-0 ceph-mon[75071]: 2.0 scrub ok
Dec 13 03:46:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 03:46:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Dec 13 03:46:48 compute-0 ceph-mon[75071]: osdmap e81: 3 total, 3 up, 3 in
Dec 13 03:46:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 03:46:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Dec 13 03:46:48 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 81 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=81 pruub=15.218147278s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=74'1444 lcod 74'1444 active pruub 139.518798828s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:48 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 82 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=81 pruub=15.218112946s) [2] r=-1 lpr=81 pi=[48,81)/1 crt=74'1444 lcod 74'1444 unknown NOTIFY pruub 139.518798828s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 82 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=81) [2] r=0 lpr=82 pi=[48,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:48 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 82 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=81) [2] r=0 lpr=82 pi=[48,81)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:49 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.d scrub starts
Dec 13 03:46:49 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.d scrub ok
Dec 13 03:46:49 compute-0 sudo[98803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:46:49 compute-0 sudo[98803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:49 compute-0 sudo[98803]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Dec 13 03:46:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Dec 13 03:46:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Dec 13 03:46:50 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[48,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:50 compute-0 ceph-mon[75071]: 3.2 scrub starts
Dec 13 03:46:50 compute-0 ceph-mon[75071]: 3.2 scrub ok
Dec 13 03:46:50 compute-0 ceph-mon[75071]: pgmap v165: 274 pgs: 274 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:46:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 03:46:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Dec 13 03:46:50 compute-0 ceph-mon[75071]: osdmap e82: 3 total, 3 up, 3 in
Dec 13 03:46:50 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 83 pg[9.c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[48,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:50 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[48,83)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:50 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 83 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=-1 lpr=83 pi=[48,83)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 83 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=0 lpr=83 pi=[48,83)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 83 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=48/49 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=0 lpr=83 pi=[48,83)/1 crt=43'1441 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 83 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=0 lpr=83 pi=[48,83)/1 crt=74'1444 lcod 74'1444 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 83 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=48/49 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] r=0 lpr=83 pi=[48,83)/1 crt=74'1444 lcod 74'1444 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:50 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 83 pg[6.d( v 36'39 lc 35'13 (0'0,36'39] local-lis/les=82/83 n=1 ec=46/24 lis/c=62/62 les/c/f=63/63/0 sis=82) [1] r=0 lpr=82 pi=[62,82)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:50 compute-0 sudo[98828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:46:50 compute-0 sudo[98828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v168: 274 pgs: 2 unknown, 272 active+clean; 460 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 13 03:46:50 compute-0 sudo[98828]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:46:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:46:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:46:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:46:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:46:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:46:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:46:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:46:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:46:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:46:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:46:50 compute-0 sudo[98883]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:46:50 compute-0 sudo[98883]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:50 compute-0 sudo[98883]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:50 compute-0 sudo[98908]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:46:50 compute-0 sudo[98908]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:50 compute-0 podman[98945]: 2025-12-13 03:46:50.995642571 +0000 UTC m=+0.036147348 container create 0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_euclid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:46:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Dec 13 03:46:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Dec 13 03:46:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Dec 13 03:46:51 compute-0 ceph-mon[75071]: 7.d scrub starts
Dec 13 03:46:51 compute-0 ceph-mon[75071]: 7.d scrub ok
Dec 13 03:46:51 compute-0 ceph-mon[75071]: osdmap e83: 3 total, 3 up, 3 in
Dec 13 03:46:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:46:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:46:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:46:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:46:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:46:51 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 84 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=83/84 n=8 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[48,83)/1 crt=43'1441 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:51 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 84 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=83/84 n=7 ec=48/36 lis/c=48/48 les/c/f=49/49/0 sis=83) [2]/[1] async=[2] r=0 lpr=83 pi=[48,83)/1 crt=74'1445 lcod 74'1444 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:51 compute-0 systemd[1]: Started libpod-conmon-0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57.scope.
Dec 13 03:46:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:51 compute-0 podman[98945]: 2025-12-13 03:46:51.069796942 +0000 UTC m=+0.110301749 container init 0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_euclid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:46:51 compute-0 podman[98945]: 2025-12-13 03:46:51.077127509 +0000 UTC m=+0.117632286 container start 0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:46:51 compute-0 podman[98945]: 2025-12-13 03:46:50.980640718 +0000 UTC m=+0.021145515 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:51 compute-0 podman[98945]: 2025-12-13 03:46:51.080520049 +0000 UTC m=+0.121024846 container attach 0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_euclid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:46:51 compute-0 crazy_euclid[98962]: 167 167
Dec 13 03:46:51 compute-0 systemd[1]: libpod-0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57.scope: Deactivated successfully.
Dec 13 03:46:51 compute-0 podman[98945]: 2025-12-13 03:46:51.08289763 +0000 UTC m=+0.123402397 container died 0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_euclid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8120c5b46bdeb06cda836ffc8458ad7fac95ab18cd6d1d6e4f18760ba24958a-merged.mount: Deactivated successfully.
Dec 13 03:46:51 compute-0 podman[98945]: 2025-12-13 03:46:51.122581282 +0000 UTC m=+0.163086059 container remove 0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_euclid, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:46:51 compute-0 systemd[1]: libpod-conmon-0b56d8b812d1f99aa3445c48fd2f621100b96efe5496aff8cbe055e6ec991f57.scope: Deactivated successfully.
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.378191971348825e-06 of space, bias 4.0, pg target 0.00165383036561859 quantized to 16 (current 16)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:46:51 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Dec 13 03:46:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Dec 13 03:46:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:46:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Dec 13 03:46:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Dec 13 03:46:51 compute-0 podman[98986]: 2025-12-13 03:46:51.268659627 +0000 UTC m=+0.047514864 container create 1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:46:51 compute-0 systemd[1]: Started libpod-conmon-1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4.scope.
Dec 13 03:46:51 compute-0 podman[98986]: 2025-12-13 03:46:51.246830332 +0000 UTC m=+0.025685579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678af6ea7c37f48a0f7c2484aae2b6be8b6160c64f8a1af383483971ee545d12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678af6ea7c37f48a0f7c2484aae2b6be8b6160c64f8a1af383483971ee545d12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678af6ea7c37f48a0f7c2484aae2b6be8b6160c64f8a1af383483971ee545d12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678af6ea7c37f48a0f7c2484aae2b6be8b6160c64f8a1af383483971ee545d12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/678af6ea7c37f48a0f7c2484aae2b6be8b6160c64f8a1af383483971ee545d12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:51 compute-0 podman[98986]: 2025-12-13 03:46:51.363100567 +0000 UTC m=+0.141955804 container init 1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:46:51 compute-0 podman[98986]: 2025-12-13 03:46:51.370510466 +0000 UTC m=+0.149365703 container start 1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:46:51 compute-0 podman[98986]: 2025-12-13 03:46:51.374147404 +0000 UTC m=+0.153002641 container attach 1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 03:46:51 compute-0 cool_fermat[99003]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:46:51 compute-0 cool_fermat[99003]: --> All data devices are unavailable
Dec 13 03:46:51 compute-0 systemd[1]: libpod-1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4.scope: Deactivated successfully.
Dec 13 03:46:51 compute-0 podman[98986]: 2025-12-13 03:46:51.824430976 +0000 UTC m=+0.603286213 container died 1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:46:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-678af6ea7c37f48a0f7c2484aae2b6be8b6160c64f8a1af383483971ee545d12-merged.mount: Deactivated successfully.
Dec 13 03:46:51 compute-0 podman[98986]: 2025-12-13 03:46:51.864202731 +0000 UTC m=+0.643057968 container remove 1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_fermat, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:46:51 compute-0 systemd[1]: libpod-conmon-1432ed1288a903c1e12b05f0b7c284d0d5ae61ff215bb1627e23f1a435293ec4.scope: Deactivated successfully.
Dec 13 03:46:51 compute-0 sudo[98908]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:51 compute-0 sudo[99034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:46:51 compute-0 sudo[99034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:51 compute-0 sudo[99034]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:52 compute-0 sudo[99059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:46:52 compute-0 sudo[99059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Dec 13 03:46:52 compute-0 ceph-mon[75071]: pgmap v168: 274 pgs: 2 unknown, 272 active+clean; 460 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 13 03:46:52 compute-0 ceph-mon[75071]: osdmap e84: 3 total, 3 up, 3 in
Dec 13 03:46:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Dec 13 03:46:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:46:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Dec 13 03:46:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Dec 13 03:46:52 compute-0 ceph-mgr[75360]: [progress INFO root] update: starting ev 1924d22f-3cc3-4bef-a003-90af30839280 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 03:46:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 85 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=83/84 n=8 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85 pruub=14.974166870s) [2] async=[2] r=-1 lpr=85 pi=[48,85)/1 crt=43'1441 lcod 0'0 active pruub 142.337738037s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 85 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=83/84 n=8 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85 pruub=14.974079132s) [2] r=-1 lpr=85 pi=[48,85)/1 crt=43'1441 lcod 0'0 unknown NOTIFY pruub 142.337738037s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 85 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=83/84 n=7 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85 pruub=14.975431442s) [2] async=[2] r=-1 lpr=85 pi=[48,85)/1 crt=74'1445 lcod 74'1444 active pruub 142.340438843s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:52 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 85 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=83/84 n=7 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85 pruub=14.975367546s) [2] r=-1 lpr=85 pi=[48,85)/1 crt=74'1445 lcod 74'1444 unknown NOTIFY pruub 142.340438843s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:46:52 compute-0 ceph-mgr[75360]: [progress INFO root] complete: finished ev 1924d22f-3cc3-4bef-a003-90af30839280 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Dec 13 03:46:52 compute-0 ceph-mgr[75360]: [progress INFO root] Completed event 1924d22f-3cc3-4bef-a003-90af30839280 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Dec 13 03:46:52 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 85 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=0/0 n=7 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85) [2] r=0 lpr=85 pi=[48,85)/1 pct=0'0 crt=74'1445 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:52 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 85 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=0/0 n=7 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85) [2] r=0 lpr=85 pi=[48,85)/1 crt=74'1445 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:52 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 85 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85) [2] r=0 lpr=85 pi=[48,85)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:52 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 85 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=8 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85) [2] r=0 lpr=85 pi=[48,85)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:52 compute-0 podman[99095]: 2025-12-13 03:46:52.314936586 +0000 UTC m=+0.038377594 container create e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bardeen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:46:52 compute-0 systemd[1]: Started libpod-conmon-e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc.scope.
Dec 13 03:46:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:52 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Dec 13 03:46:52 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Dec 13 03:46:52 compute-0 podman[99095]: 2025-12-13 03:46:52.373603029 +0000 UTC m=+0.097044067 container init e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bardeen, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:46:52 compute-0 podman[99095]: 2025-12-13 03:46:52.381327378 +0000 UTC m=+0.104768386 container start e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bardeen, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:46:52 compute-0 ceph-mgr[75360]: [progress INFO root] Writing back 16 completed events
Dec 13 03:46:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Dec 13 03:46:52 compute-0 zen_bardeen[99109]: 167 167
Dec 13 03:46:52 compute-0 systemd[1]: libpod-e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc.scope: Deactivated successfully.
Dec 13 03:46:52 compute-0 podman[99095]: 2025-12-13 03:46:52.387444558 +0000 UTC m=+0.110885586 container attach e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bardeen, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:46:52 compute-0 podman[99095]: 2025-12-13 03:46:52.38814622 +0000 UTC m=+0.111587248 container died e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bardeen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:46:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:52 compute-0 podman[99095]: 2025-12-13 03:46:52.297501561 +0000 UTC m=+0.020942589 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-33394361af6fdcb6f0c4029ed714b2305bea47d4a17e2ab298203cf29259b549-merged.mount: Deactivated successfully.
Dec 13 03:46:52 compute-0 podman[99095]: 2025-12-13 03:46:52.426655556 +0000 UTC m=+0.150096564 container remove e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_bardeen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:46:52 compute-0 systemd[1]: libpod-conmon-e238f51f5a7b4fd3d9b930921aaf83dde2f84d1006f38d3c19d439184270bbfc.scope: Deactivated successfully.
Dec 13 03:46:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v171: 274 pgs: 2 unknown, 272 active+clean; 460 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 13 03:46:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Dec 13 03:46:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:46:52 compute-0 podman[99136]: 2025-12-13 03:46:52.57163281 +0000 UTC m=+0.037465448 container create 6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:46:52 compute-0 systemd[1]: Started libpod-conmon-6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858.scope.
Dec 13 03:46:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/776fbbac192f2d51b7b83e6614b5c53663533dded7adf0d5eaebcfe0b1d316c7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/776fbbac192f2d51b7b83e6614b5c53663533dded7adf0d5eaebcfe0b1d316c7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/776fbbac192f2d51b7b83e6614b5c53663533dded7adf0d5eaebcfe0b1d316c7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/776fbbac192f2d51b7b83e6614b5c53663533dded7adf0d5eaebcfe0b1d316c7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:52 compute-0 podman[99136]: 2025-12-13 03:46:52.637773284 +0000 UTC m=+0.103605942 container init 6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_benz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:46:52 compute-0 podman[99136]: 2025-12-13 03:46:52.64608528 +0000 UTC m=+0.111917918 container start 6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:46:52 compute-0 podman[99136]: 2025-12-13 03:46:52.649985584 +0000 UTC m=+0.115818252 container attach 6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_benz, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:46:52 compute-0 podman[99136]: 2025-12-13 03:46:52.555562445 +0000 UTC m=+0.021395103 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:52 compute-0 sshd-session[99158]: Accepted publickey for zuul from 192.168.122.30 port 53724 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:46:52 compute-0 systemd-logind[796]: New session 34 of user zuul.
Dec 13 03:46:52 compute-0 systemd[1]: Started Session 34 of User zuul.
Dec 13 03:46:52 compute-0 sshd-session[99158]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:46:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:52 compute-0 elated_benz[99153]: {
Dec 13 03:46:52 compute-0 elated_benz[99153]:     "0": [
Dec 13 03:46:52 compute-0 elated_benz[99153]:         {
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "devices": [
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "/dev/loop3"
Dec 13 03:46:52 compute-0 elated_benz[99153]:             ],
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_name": "ceph_lv0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_size": "21470642176",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "name": "ceph_lv0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "tags": {
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cluster_name": "ceph",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.crush_device_class": "",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.encrypted": "0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.objectstore": "bluestore",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osd_id": "0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.type": "block",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.vdo": "0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.with_tpm": "0"
Dec 13 03:46:52 compute-0 elated_benz[99153]:             },
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "type": "block",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "vg_name": "ceph_vg0"
Dec 13 03:46:52 compute-0 elated_benz[99153]:         }
Dec 13 03:46:52 compute-0 elated_benz[99153]:     ],
Dec 13 03:46:52 compute-0 elated_benz[99153]:     "1": [
Dec 13 03:46:52 compute-0 elated_benz[99153]:         {
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "devices": [
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "/dev/loop4"
Dec 13 03:46:52 compute-0 elated_benz[99153]:             ],
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_name": "ceph_lv1",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_size": "21470642176",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "name": "ceph_lv1",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "tags": {
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cluster_name": "ceph",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.crush_device_class": "",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.encrypted": "0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.objectstore": "bluestore",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osd_id": "1",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.type": "block",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.vdo": "0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.with_tpm": "0"
Dec 13 03:46:52 compute-0 elated_benz[99153]:             },
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "type": "block",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "vg_name": "ceph_vg1"
Dec 13 03:46:52 compute-0 elated_benz[99153]:         }
Dec 13 03:46:52 compute-0 elated_benz[99153]:     ],
Dec 13 03:46:52 compute-0 elated_benz[99153]:     "2": [
Dec 13 03:46:52 compute-0 elated_benz[99153]:         {
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "devices": [
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "/dev/loop5"
Dec 13 03:46:52 compute-0 elated_benz[99153]:             ],
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_name": "ceph_lv2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_size": "21470642176",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "name": "ceph_lv2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "tags": {
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.cluster_name": "ceph",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.crush_device_class": "",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.encrypted": "0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.objectstore": "bluestore",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osd_id": "2",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.type": "block",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.vdo": "0",
Dec 13 03:46:52 compute-0 elated_benz[99153]:                 "ceph.with_tpm": "0"
Dec 13 03:46:52 compute-0 elated_benz[99153]:             },
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "type": "block",
Dec 13 03:46:52 compute-0 elated_benz[99153]:             "vg_name": "ceph_vg2"
Dec 13 03:46:52 compute-0 elated_benz[99153]:         }
Dec 13 03:46:52 compute-0 elated_benz[99153]:     ]
Dec 13 03:46:52 compute-0 elated_benz[99153]: }
Dec 13 03:46:52 compute-0 systemd[1]: libpod-6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858.scope: Deactivated successfully.
Dec 13 03:46:53 compute-0 podman[99190]: 2025-12-13 03:46:53.028307751 +0000 UTC m=+0.028054780 container died 6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:46:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Dec 13 03:46:53 compute-0 ceph-mon[75071]: 8.7 scrub starts
Dec 13 03:46:53 compute-0 ceph-mon[75071]: 8.7 scrub ok
Dec 13 03:46:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Dec 13 03:46:53 compute-0 ceph-mon[75071]: osdmap e85: 3 total, 3 up, 3 in
Dec 13 03:46:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Dec 13 03:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-776fbbac192f2d51b7b83e6614b5c53663533dded7adf0d5eaebcfe0b1d316c7-merged.mount: Deactivated successfully.
Dec 13 03:46:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:46:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Dec 13 03:46:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Dec 13 03:46:53 compute-0 podman[99190]: 2025-12-13 03:46:53.073780804 +0000 UTC m=+0.073527823 container remove 6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:46:53 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 86 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=85/86 n=7 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85) [2] r=0 lpr=85 pi=[48,85)/1 crt=74'1445 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:53 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 86 pg[9.c( v 43'1441 (0'0,43'1441] local-lis/les=85/86 n=8 ec=48/36 lis/c=83/48 les/c/f=84/49/0 sis=85) [2] r=0 lpr=85 pi=[48,85)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:53 compute-0 systemd[1]: libpod-conmon-6caa0102ba30e336eea5c482de51a9a398a87e55bc8b09261877d9594ec7f858.scope: Deactivated successfully.
Dec 13 03:46:53 compute-0 sudo[99059]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:53 compute-0 sudo[99233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:46:53 compute-0 sudo[99233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:53 compute-0 sudo[99233]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:53 compute-0 sudo[99258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:46:53 compute-0 sudo[99258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:53 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.d scrub starts
Dec 13 03:46:53 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 86 pg[11.0( v 74'2 (0'0,74'2] local-lis/les=40/41 n=2 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=86 pruub=10.881519318s) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 74'1 mlcod 74'1 active pruub 139.472991943s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:46:53 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 86 pg[11.0( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=86 pruub=10.881519318s) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 74'1 mlcod 0'0 unknown pruub 139.472991943s@ mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:53 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.d scrub ok
Dec 13 03:46:53 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Dec 13 03:46:53 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Dec 13 03:46:53 compute-0 podman[99340]: 2025-12-13 03:46:53.491797674 +0000 UTC m=+0.037963653 container create 6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_borg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:46:53 compute-0 systemd[1]: Started libpod-conmon-6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c.scope.
Dec 13 03:46:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:53 compute-0 podman[99340]: 2025-12-13 03:46:53.565414818 +0000 UTC m=+0.111580817 container init 6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_borg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:46:53 compute-0 podman[99340]: 2025-12-13 03:46:53.473838593 +0000 UTC m=+0.020004602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:53 compute-0 podman[99340]: 2025-12-13 03:46:53.574230089 +0000 UTC m=+0.120396068 container start 6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:46:53 compute-0 podman[99340]: 2025-12-13 03:46:53.577433103 +0000 UTC m=+0.123599082 container attach 6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_borg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:46:53 compute-0 systemd[1]: libpod-6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c.scope: Deactivated successfully.
Dec 13 03:46:53 compute-0 gallant_borg[99377]: 167 167
Dec 13 03:46:53 compute-0 conmon[99377]: conmon 6c53019852eae7225881 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c.scope/container/memory.events
Dec 13 03:46:53 compute-0 podman[99340]: 2025-12-13 03:46:53.581511654 +0000 UTC m=+0.127677653 container died 6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_borg, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:46:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-50e607f1bf945768cb1b357f3cfd53e45ed486c6e573e1d659c3624a92e61ab7-merged.mount: Deactivated successfully.
Dec 13 03:46:53 compute-0 podman[99340]: 2025-12-13 03:46:53.617663881 +0000 UTC m=+0.163829890 container remove 6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_borg, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:46:53 compute-0 systemd[1]: libpod-conmon-6c53019852eae7225881be44419a0113680599adf9fd674a3a22c460069d559c.scope: Deactivated successfully.
Dec 13 03:46:53 compute-0 podman[99434]: 2025-12-13 03:46:53.776786662 +0000 UTC m=+0.047387471 container create 7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:46:53 compute-0 systemd[1]: Started libpod-conmon-7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f.scope.
Dec 13 03:46:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a77376954a376e21bece933f9ce2384462d3e01d0aba7972cb540ab2b91e04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a77376954a376e21bece933f9ce2384462d3e01d0aba7972cb540ab2b91e04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a77376954a376e21bece933f9ce2384462d3e01d0aba7972cb540ab2b91e04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48a77376954a376e21bece933f9ce2384462d3e01d0aba7972cb540ab2b91e04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:46:53 compute-0 podman[99434]: 2025-12-13 03:46:53.753641359 +0000 UTC m=+0.024242198 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:46:53 compute-0 podman[99434]: 2025-12-13 03:46:53.84982716 +0000 UTC m=+0.120427969 container init 7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 03:46:53 compute-0 podman[99434]: 2025-12-13 03:46:53.857742894 +0000 UTC m=+0.128343703 container start 7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:46:53 compute-0 podman[99434]: 2025-12-13 03:46:53.861740053 +0000 UTC m=+0.132340872 container attach 7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:46:53 compute-0 python3.9[99421]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:46:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Dec 13 03:46:54 compute-0 ceph-mon[75071]: 5.0 scrub starts
Dec 13 03:46:54 compute-0 ceph-mon[75071]: 5.0 scrub ok
Dec 13 03:46:54 compute-0 ceph-mon[75071]: pgmap v171: 274 pgs: 2 unknown, 272 active+clean; 460 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Dec 13 03:46:54 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Dec 13 03:46:54 compute-0 ceph-mon[75071]: osdmap e86: 3 total, 3 up, 3 in
Dec 13 03:46:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Dec 13 03:46:54 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.17( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.15( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.19( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.16( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.14( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.13( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.12( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.11( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.10( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.f( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.b( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.e( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.d( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.9( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.3( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.2( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=1 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.c( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.8( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.a( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=1 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.4( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.5( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.6( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.7( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.18( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1a( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1b( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1c( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1d( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1e( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.17( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1f( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=40/41 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.15( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.13( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.12( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.11( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.14( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.19( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.10( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.f( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.b( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.e( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.9( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.3( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.d( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.2( v 74'2 (0'0,74'2] local-lis/les=86/87 n=1 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.c( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.0( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=40/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 74'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.a( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.5( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.4( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.7( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.6( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.8( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.16( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1a( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.18( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1b( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1d( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1c( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1e( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1f( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 87 pg[11.1( v 74'2 (0'0,74'2] local-lis/les=86/87 n=1 ec=86/40 lis/c=40/40 les/c/f=41/41/0 sis=86) [1] r=0 lpr=86 pi=[40,86)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:46:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v174: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 767 B/s wr, 62 op/s; 121 B/s, 3 objects/s recovering
Dec 13 03:46:54 compute-0 lvm[99569]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:46:54 compute-0 lvm[99573]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:46:54 compute-0 lvm[99573]: VG ceph_vg1 finished
Dec 13 03:46:54 compute-0 lvm[99569]: VG ceph_vg0 finished
Dec 13 03:46:54 compute-0 lvm[99582]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:46:54 compute-0 lvm[99582]: VG ceph_vg2 finished
Dec 13 03:46:54 compute-0 jovial_poincare[99451]: {}
Dec 13 03:46:54 compute-0 systemd[1]: libpod-7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f.scope: Deactivated successfully.
Dec 13 03:46:54 compute-0 podman[99434]: 2025-12-13 03:46:54.701015476 +0000 UTC m=+0.971616285 container died 7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:46:54 compute-0 systemd[1]: libpod-7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f.scope: Consumed 1.259s CPU time.
Dec 13 03:46:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-48a77376954a376e21bece933f9ce2384462d3e01d0aba7972cb540ab2b91e04-merged.mount: Deactivated successfully.
Dec 13 03:46:54 compute-0 podman[99434]: 2025-12-13 03:46:54.740797951 +0000 UTC m=+1.011398760 container remove 7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_poincare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:46:54 compute-0 systemd[1]: libpod-conmon-7beb5dfcf35960b5880fa21ba99578e77031abbf6086bf6ae8a766fbd31ca55f.scope: Deactivated successfully.
Dec 13 03:46:54 compute-0 sudo[99258]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:46:54 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:46:54 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:54 compute-0 sudo[99661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:46:54 compute-0 sudo[99661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:46:54 compute-0 sudo[99661]: pam_unix(sudo:session): session closed for user root
Dec 13 03:46:55 compute-0 ceph-mon[75071]: 3.d scrub starts
Dec 13 03:46:55 compute-0 ceph-mon[75071]: 3.d scrub ok
Dec 13 03:46:55 compute-0 ceph-mon[75071]: 10.0 scrub starts
Dec 13 03:46:55 compute-0 ceph-mon[75071]: 10.0 scrub ok
Dec 13 03:46:55 compute-0 ceph-mon[75071]: osdmap e87: 3 total, 3 up, 3 in
Dec 13 03:46:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:46:55 compute-0 sudo[99788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vakpxbjlejgffoakbydhpmclydogbvwd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597614.8143394-32-66443010372953/AnsiballZ_command.py'
Dec 13 03:46:55 compute-0 sudo[99788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:46:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Dec 13 03:46:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Dec 13 03:46:55 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.c scrub starts
Dec 13 03:46:55 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.c scrub ok
Dec 13 03:46:55 compute-0 python3.9[99790]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:46:56 compute-0 ceph-mon[75071]: pgmap v174: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 767 B/s wr, 62 op/s; 121 B/s, 3 objects/s recovering
Dec 13 03:46:56 compute-0 ceph-mon[75071]: 3.c scrub starts
Dec 13 03:46:56 compute-0 ceph-mon[75071]: 3.c scrub ok
Dec 13 03:46:56 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.b scrub starts
Dec 13 03:46:56 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.b scrub ok
Dec 13 03:46:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 563 B/s wr, 46 op/s; 89 B/s, 2 objects/s recovering
Dec 13 03:46:57 compute-0 ceph-mon[75071]: 8.5 scrub starts
Dec 13 03:46:57 compute-0 ceph-mon[75071]: 8.5 scrub ok
Dec 13 03:46:57 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Dec 13 03:46:57 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Dec 13 03:46:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:46:58 compute-0 ceph-mon[75071]: 7.b scrub starts
Dec 13 03:46:58 compute-0 ceph-mon[75071]: 7.b scrub ok
Dec 13 03:46:58 compute-0 ceph-mon[75071]: pgmap v175: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 563 B/s wr, 46 op/s; 89 B/s, 2 objects/s recovering
Dec 13 03:46:58 compute-0 ceph-mon[75071]: 10.8 scrub starts
Dec 13 03:46:58 compute-0 ceph-mon[75071]: 10.8 scrub ok
Dec 13 03:46:58 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Dec 13 03:46:58 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Dec 13 03:46:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 478 B/s wr, 39 op/s; 76 B/s, 2 objects/s recovering
Dec 13 03:46:59 compute-0 ceph-mon[75071]: 2.1 scrub starts
Dec 13 03:46:59 compute-0 ceph-mon[75071]: 2.1 scrub ok
Dec 13 03:46:59 compute-0 ceph-mon[75071]: pgmap v176: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 478 B/s wr, 39 op/s; 76 B/s, 2 objects/s recovering
Dec 13 03:46:59 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Dec 13 03:46:59 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Dec 13 03:47:00 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Dec 13 03:47:00 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Dec 13 03:47:00 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.a scrub starts
Dec 13 03:47:00 compute-0 ceph-mon[75071]: 5.6 scrub starts
Dec 13 03:47:00 compute-0 ceph-mon[75071]: 5.6 scrub ok
Dec 13 03:47:00 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.a scrub ok
Dec 13 03:47:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 255 B/s wr, 31 op/s; 60 B/s, 1 objects/s recovering
Dec 13 03:47:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 13 03:47:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 03:47:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Dec 13 03:47:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 03:47:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:47:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:47:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Dec 13 03:47:01 compute-0 ceph-mon[75071]: 7.14 scrub starts
Dec 13 03:47:01 compute-0 ceph-mon[75071]: 7.14 scrub ok
Dec 13 03:47:01 compute-0 ceph-mon[75071]: 10.a scrub starts
Dec 13 03:47:01 compute-0 ceph-mon[75071]: 10.a scrub ok
Dec 13 03:47:01 compute-0 ceph-mon[75071]: pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 255 B/s wr, 31 op/s; 60 B/s, 1 objects/s recovering
Dec 13 03:47:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 03:47:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Dec 13 03:47:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:47:01 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.c scrub starts
Dec 13 03:47:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 03:47:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 03:47:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:47:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Dec 13 03:47:01 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Dec 13 03:47:01 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 10.c scrub ok
Dec 13 03:47:02 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Dec 13 03:47:02 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Dec 13 03:47:02 compute-0 sudo[99788]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:02 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.e scrub starts
Dec 13 03:47:02 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.e scrub ok
Dec 13 03:47:02 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.f scrub starts
Dec 13 03:47:02 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.f scrub ok
Dec 13 03:47:02 compute-0 ceph-mon[75071]: 10.c scrub starts
Dec 13 03:47:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 03:47:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Dec 13 03:47:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:47:02 compute-0 ceph-mon[75071]: osdmap e88: 3 total, 3 up, 3 in
Dec 13 03:47:02 compute-0 ceph-mon[75071]: 10.c scrub ok
Dec 13 03:47:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v179: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 366 B/s wr, 30 op/s; 49 B/s, 1 objects/s recovering
Dec 13 03:47:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 13 03:47:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 03:47:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Dec 13 03:47:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 03:47:02 compute-0 sshd-session[99163]: Connection closed by 192.168.122.30 port 53724
Dec 13 03:47:02 compute-0 sshd-session[99158]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:47:02 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Dec 13 03:47:02 compute-0 systemd[1]: session-34.scope: Consumed 7.919s CPU time.
Dec 13 03:47:02 compute-0 systemd-logind[796]: Session 34 logged out. Waiting for processes to exit.
Dec 13 03:47:02 compute-0 systemd-logind[796]: Removed session 34.
Dec 13 03:47:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.17( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103159904s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.394149780s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.17( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103116035s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.394149780s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.19( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.104602814s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395706177s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.19( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.104571342s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395706177s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.14( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.104182243s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395507812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.14( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.104125023s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395507812s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.15( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.104076385s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395507812s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.15( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.104011536s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395507812s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.17( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.12( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.104160309s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395538330s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.12( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103791237s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395538330s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.10( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103877068s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395690918s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.10( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103859901s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395690918s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.f( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103750229s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395706177s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.f( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103733063s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395706177s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.e( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103796005s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395812988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.e( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103779793s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395812988s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.b( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103545189s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395690918s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.b( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103529930s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395690918s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.11( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103304863s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395660400s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.d( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103443146s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395812988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.11( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103287697s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395660400s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.d( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103413582s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395812988s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.2( v 74'2 (0'0,74'2] local-lis/les=86/87 n=1 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103323936s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395828247s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.3( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103282928s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395812988s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.2( v 74'2 (0'0,74'2] local-lis/les=86/87 n=1 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103293419s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395828247s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.3( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103267670s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395812988s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.9( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103171349s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.395828247s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.9( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103135109s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.395828247s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.8( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103416443s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396194458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.8( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103395462s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396194458s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1( v 74'2 (0'0,74'2] local-lis/les=86/87 n=1 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103705406s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396667480s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1( v 74'2 (0'0,74'2] local-lis/les=86/87 n=1 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103681564s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396667480s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.4( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103011131s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396072388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.19( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.14( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.4( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102953911s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396072388s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.18( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102930069s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396255493s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.18( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102912903s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396255493s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1a( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102814674s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396224976s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1b( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102796555s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396224976s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.10( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1b( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102780342s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396224976s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1a( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102783203s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396224976s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1c( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103083611s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396621704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1c( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.103057861s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396621704s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.6( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102423668s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396072388s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.6( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102405548s) [0] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396072388s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1f( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102831841s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396667480s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1f( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102807045s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396667480s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.f( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.e( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1e( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102252007s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 active pruub 153.396636963s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:02 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 88 pg[11.1e( v 74'2 (0'0,74'2] local-lis/les=86/87 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88 pruub=15.102207184s) [2] r=-1 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 unknown NOTIFY pruub 153.396636963s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.1( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.4( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:02 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 88 pg[11.6( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.11( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.1e( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.1f( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.1c( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.1a( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.1b( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.18( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.2( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.9( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.8( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.b( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.d( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.3( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.12( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 88 pg[11.15( empty local-lis/les=0/0 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Dec 13 03:47:03 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Dec 13 03:47:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Dec 13 03:47:03 compute-0 ceph-mon[75071]: 3.10 scrub starts
Dec 13 03:47:03 compute-0 ceph-mon[75071]: 3.10 scrub ok
Dec 13 03:47:03 compute-0 ceph-mon[75071]: 5.e scrub starts
Dec 13 03:47:03 compute-0 ceph-mon[75071]: 5.e scrub ok
Dec 13 03:47:03 compute-0 ceph-mon[75071]: 3.f scrub starts
Dec 13 03:47:03 compute-0 ceph-mon[75071]: 3.f scrub ok
Dec 13 03:47:03 compute-0 ceph-mon[75071]: pgmap v179: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 366 B/s wr, 30 op/s; 49 B/s, 1 objects/s recovering
Dec 13 03:47:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 03:47:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Dec 13 03:47:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 03:47:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 03:47:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Dec 13 03:47:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=58/59 n=1 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=89 pruub=12.066004753s) [2] r=-1 lpr=89 pi=[58,89)/1 crt=36'39 active pruub 157.446945190s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[6.f( v 36'39 (0'0,36'39] local-lis/les=58/59 n=1 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=89 pruub=12.065936089s) [2] r=-1 lpr=89 pi=[58,89)/1 crt=36'39 unknown NOTIFY pruub 157.446945190s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.15( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.12( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[6.f( empty local-lis/les=0/0 n=0 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=89) [2] r=0 lpr=89 pi=[58,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.10( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.14( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.3( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.d( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.b( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.8( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.9( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.18( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.1b( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.1a( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.6( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.f( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.1( v 74'2 lc 0'0 (0'0,74'2] local-lis/les=88/89 n=1 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.4( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.19( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.e( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 89 pg[11.17( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [0] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.2( v 74'2 (0'0,74'2] local-lis/les=88/89 n=1 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.1c( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.1e( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.11( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:03 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 89 pg[11.1f( v 74'2 (0'0,74'2] local-lis/les=88/89 n=0 ec=86/40 lis/c=86/86 les/c/f=87/87/0 sis=88) [2] r=0 lpr=88 pi=[86,88)/1 crt=74'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:04 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Dec 13 03:47:04 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Dec 13 03:47:04 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Dec 13 03:47:04 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Dec 13 03:47:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v181: 305 pgs: 10 peering, 295 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 0 objects/s recovering
Dec 13 03:47:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Dec 13 03:47:04 compute-0 ceph-mon[75071]: 7.16 scrub starts
Dec 13 03:47:04 compute-0 ceph-mon[75071]: 7.16 scrub ok
Dec 13 03:47:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 03:47:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Dec 13 03:47:04 compute-0 ceph-mon[75071]: osdmap e89: 3 total, 3 up, 3 in
Dec 13 03:47:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Dec 13 03:47:04 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Dec 13 03:47:04 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 90 pg[6.f( v 36'39 lc 35'1 (0'0,36'39] local-lis/les=89/90 n=1 ec=46/24 lis/c=58/58 les/c/f=59/59/0 sis=89) [2] r=0 lpr=89 pi=[58,89)/1 crt=36'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:05 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.d scrub starts
Dec 13 03:47:05 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.d scrub ok
Dec 13 03:47:05 compute-0 ceph-mon[75071]: 8.19 scrub starts
Dec 13 03:47:05 compute-0 ceph-mon[75071]: 8.19 scrub ok
Dec 13 03:47:05 compute-0 ceph-mon[75071]: 5.7 scrub starts
Dec 13 03:47:05 compute-0 ceph-mon[75071]: 5.7 scrub ok
Dec 13 03:47:05 compute-0 ceph-mon[75071]: pgmap v181: 305 pgs: 10 peering, 295 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 49 B/s, 0 objects/s recovering
Dec 13 03:47:05 compute-0 ceph-mon[75071]: osdmap e90: 3 total, 3 up, 3 in
Dec 13 03:47:06 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Dec 13 03:47:06 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Dec 13 03:47:06 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Dec 13 03:47:06 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Dec 13 03:47:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 10 peering, 295 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Dec 13 03:47:06 compute-0 ceph-mon[75071]: 5.d scrub starts
Dec 13 03:47:06 compute-0 ceph-mon[75071]: 5.d scrub ok
Dec 13 03:47:07 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Dec 13 03:47:07 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Dec 13 03:47:07 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Dec 13 03:47:07 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Dec 13 03:47:07 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Dec 13 03:47:07 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Dec 13 03:47:07 compute-0 ceph-mon[75071]: 5.1c scrub starts
Dec 13 03:47:07 compute-0 ceph-mon[75071]: 5.1c scrub ok
Dec 13 03:47:07 compute-0 ceph-mon[75071]: 7.18 scrub starts
Dec 13 03:47:07 compute-0 ceph-mon[75071]: 7.18 scrub ok
Dec 13 03:47:07 compute-0 ceph-mon[75071]: pgmap v183: 305 pgs: 10 peering, 295 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 0 objects/s recovering
Dec 13 03:47:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 0 objects/s recovering
Dec 13 03:47:08 compute-0 ceph-mon[75071]: 3.13 scrub starts
Dec 13 03:47:08 compute-0 ceph-mon[75071]: 3.13 scrub ok
Dec 13 03:47:08 compute-0 ceph-mon[75071]: 5.1b scrub starts
Dec 13 03:47:08 compute-0 ceph-mon[75071]: 5.1b scrub ok
Dec 13 03:47:08 compute-0 ceph-mon[75071]: 5.4 scrub starts
Dec 13 03:47:08 compute-0 ceph-mon[75071]: 5.4 scrub ok
Dec 13 03:47:09 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Dec 13 03:47:09 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Dec 13 03:47:09 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Dec 13 03:47:09 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Dec 13 03:47:09 compute-0 ceph-mon[75071]: pgmap v184: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 0 objects/s recovering
Dec 13 03:47:10 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Dec 13 03:47:10 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Dec 13 03:47:10 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Dec 13 03:47:10 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Dec 13 03:47:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v185: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 0 objects/s recovering
Dec 13 03:47:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Dec 13 03:47:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 03:47:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Dec 13 03:47:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 03:47:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Dec 13 03:47:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Dec 13 03:47:10 compute-0 ceph-mon[75071]: 7.17 scrub starts
Dec 13 03:47:10 compute-0 ceph-mon[75071]: 7.17 scrub ok
Dec 13 03:47:10 compute-0 ceph-mon[75071]: 5.2 scrub starts
Dec 13 03:47:10 compute-0 ceph-mon[75071]: 5.2 scrub ok
Dec 13 03:47:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Dec 13 03:47:11 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Dec 13 03:47:11 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Dec 13 03:47:11 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Dec 13 03:47:11 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Dec 13 03:47:11 compute-0 ceph-mon[75071]: 2.1e scrub starts
Dec 13 03:47:11 compute-0 ceph-mon[75071]: 2.1e scrub ok
Dec 13 03:47:11 compute-0 ceph-mon[75071]: 5.5 scrub starts
Dec 13 03:47:11 compute-0 ceph-mon[75071]: 5.5 scrub ok
Dec 13 03:47:11 compute-0 ceph-mon[75071]: pgmap v185: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 0 objects/s recovering
Dec 13 03:47:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Dec 13 03:47:11 compute-0 ceph-mon[75071]: osdmap e91: 3 total, 3 up, 3 in
Dec 13 03:47:12 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Dec 13 03:47:12 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Dec 13 03:47:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v187: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Dec 13 03:47:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Dec 13 03:47:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 03:47:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Dec 13 03:47:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 03:47:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Dec 13 03:47:12 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Dec 13 03:47:12 compute-0 ceph-mon[75071]: 7.10 scrub starts
Dec 13 03:47:12 compute-0 ceph-mon[75071]: 7.10 scrub ok
Dec 13 03:47:12 compute-0 ceph-mon[75071]: 4.18 scrub starts
Dec 13 03:47:12 compute-0 ceph-mon[75071]: 4.18 scrub ok
Dec 13 03:47:12 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Dec 13 03:47:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e92 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:13 compute-0 ceph-mon[75071]: 8.15 scrub starts
Dec 13 03:47:13 compute-0 ceph-mon[75071]: 8.15 scrub ok
Dec 13 03:47:13 compute-0 ceph-mon[75071]: pgmap v187: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Dec 13 03:47:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Dec 13 03:47:13 compute-0 ceph-mon[75071]: osdmap e92: 3 total, 3 up, 3 in
Dec 13 03:47:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Dec 13 03:47:14 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Dec 13 03:47:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Dec 13 03:47:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Dec 13 03:47:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Dec 13 03:47:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 03:47:14 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Dec 13 03:47:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Dec 13 03:47:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Dec 13 03:47:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 03:47:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Dec 13 03:47:14 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Dec 13 03:47:15 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Dec 13 03:47:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Dec 13 03:47:15 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Dec 13 03:47:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Dec 13 03:47:15 compute-0 ceph-mon[75071]: 7.9 scrub starts
Dec 13 03:47:15 compute-0 ceph-mon[75071]: 4.1a scrub starts
Dec 13 03:47:15 compute-0 ceph-mon[75071]: 7.9 scrub ok
Dec 13 03:47:15 compute-0 ceph-mon[75071]: pgmap v189: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 78 B/s, 0 objects/s recovering
Dec 13 03:47:15 compute-0 ceph-mon[75071]: 4.1a scrub ok
Dec 13 03:47:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Dec 13 03:47:15 compute-0 ceph-mon[75071]: osdmap e93: 3 total, 3 up, 3 in
Dec 13 03:47:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v191: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Dec 13 03:47:16 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 03:47:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Dec 13 03:47:16 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 03:47:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Dec 13 03:47:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Dec 13 03:47:16 compute-0 ceph-mon[75071]: 8.11 scrub starts
Dec 13 03:47:16 compute-0 ceph-mon[75071]: 10.4 scrub starts
Dec 13 03:47:16 compute-0 ceph-mon[75071]: 8.11 scrub ok
Dec 13 03:47:16 compute-0 ceph-mon[75071]: 10.4 scrub ok
Dec 13 03:47:16 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Dec 13 03:47:17 compute-0 sshd-session[99847]: Accepted publickey for zuul from 192.168.122.30 port 39900 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:47:17 compute-0 systemd-logind[796]: New session 35 of user zuul.
Dec 13 03:47:17 compute-0 systemd[1]: Started Session 35 of User zuul.
Dec 13 03:47:17 compute-0 sshd-session[99847]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:47:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:17 compute-0 ceph-mon[75071]: pgmap v191: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:17 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Dec 13 03:47:17 compute-0 ceph-mon[75071]: osdmap e94: 3 total, 3 up, 3 in
Dec 13 03:47:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Dec 13 03:47:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Dec 13 03:47:18 compute-0 python3.9[100000]: ansible-ansible.legacy.ping Invoked with data=pong
Dec 13 03:47:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Dec 13 03:47:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 03:47:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Dec 13 03:47:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Dec 13 03:47:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 03:47:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Dec 13 03:47:18 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 94 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=94 pruub=15.440384865s) [2] r=-1 lpr=94 pi=[56,94)/1 crt=70'1442 lcod 70'1442 active pruub 176.310714722s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:18 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 95 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=94 pruub=15.440299034s) [2] r=-1 lpr=94 pi=[56,94)/1 crt=70'1442 lcod 70'1442 unknown NOTIFY pruub 176.310714722s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Dec 13 03:47:18 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 95 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=94) [2] r=0 lpr=95 pi=[56,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:19 compute-0 python3.9[100174]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:47:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Dec 13 03:47:19 compute-0 ceph-mon[75071]: 3.14 scrub starts
Dec 13 03:47:19 compute-0 ceph-mon[75071]: 3.14 scrub ok
Dec 13 03:47:19 compute-0 ceph-mon[75071]: pgmap v193: 305 pgs: 305 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Dec 13 03:47:19 compute-0 ceph-mon[75071]: osdmap e95: 3 total, 3 up, 3 in
Dec 13 03:47:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Dec 13 03:47:19 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Dec 13 03:47:19 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 96 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=96) [2]/[0] r=0 lpr=96 pi=[56,96)/1 crt=70'1442 lcod 70'1442 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:19 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 96 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=96) [2]/[0] r=0 lpr=96 pi=[56,96)/1 crt=70'1442 lcod 70'1442 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:19 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 96 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=96) [2]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:19 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 96 pg[9.13( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=96) [2]/[0] r=-1 lpr=96 pi=[56,96)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:20 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Dec 13 03:47:20 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Dec 13 03:47:20 compute-0 sudo[100328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnxklfhstkcrblpysgoctvyywoenpdau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597639.8346355-45-190345501657526/AnsiballZ_command.py'
Dec 13 03:47:20 compute-0 sudo[100328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:47:20 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Dec 13 03:47:20 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Dec 13 03:47:20 compute-0 python3.9[100330]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:47:20 compute-0 sudo[100328]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Dec 13 03:47:20 compute-0 ceph-mon[75071]: osdmap e96: 3 total, 3 up, 3 in
Dec 13 03:47:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Dec 13 03:47:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Dec 13 03:47:21 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 97 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=96/97 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=96) [2]/[0] async=[2] r=0 lpr=96 pi=[56,96)/1 crt=74'1443 lcod 70'1442 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:21 compute-0 sudo[100481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqcuimoytdnfajzgoexrimfjfmwsghgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597640.7269707-57-50801256404842/AnsiballZ_stat.py'
Dec 13 03:47:21 compute-0 sudo[100481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:47:21 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Dec 13 03:47:21 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Dec 13 03:47:21 compute-0 python3.9[100483]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:47:21 compute-0 sudo[100481]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:21 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Dec 13 03:47:21 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Dec 13 03:47:21 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Dec 13 03:47:21 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Dec 13 03:47:21 compute-0 sudo[100635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmjulyutqhodfsbkfnxfqtpdkxvwjltm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597641.6096396-68-195909284191735/AnsiballZ_file.py'
Dec 13 03:47:21 compute-0 sudo[100635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:47:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Dec 13 03:47:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Dec 13 03:47:22 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Dec 13 03:47:22 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 98 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=0/0 n=7 ec=48/36 lis/c=96/56 les/c/f=97/57/0 sis=98) [2] r=0 lpr=98 pi=[56,98)/1 pct=0'0 crt=74'1443 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:22 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 98 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=0/0 n=7 ec=48/36 lis/c=96/56 les/c/f=97/57/0 sis=98) [2] r=0 lpr=98 pi=[56,98)/1 crt=74'1443 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:22 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 98 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=96/97 n=7 ec=48/36 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.000804901s) [2] async=[2] r=-1 lpr=98 pi=[56,98)/1 crt=74'1443 lcod 70'1442 active pruub 178.906463623s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:22 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 98 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=96/97 n=7 ec=48/36 lis/c=96/56 les/c/f=97/57/0 sis=98 pruub=15.000633240s) [2] r=-1 lpr=98 pi=[56,98)/1 crt=74'1443 lcod 70'1442 unknown NOTIFY pruub 178.906463623s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:22 compute-0 ceph-mon[75071]: 8.1e scrub starts
Dec 13 03:47:22 compute-0 ceph-mon[75071]: 8.1e scrub ok
Dec 13 03:47:22 compute-0 ceph-mon[75071]: 4.1b scrub starts
Dec 13 03:47:22 compute-0 ceph-mon[75071]: 4.1b scrub ok
Dec 13 03:47:22 compute-0 ceph-mon[75071]: pgmap v196: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:22 compute-0 ceph-mon[75071]: osdmap e97: 3 total, 3 up, 3 in
Dec 13 03:47:22 compute-0 ceph-mon[75071]: 3.1 scrub starts
Dec 13 03:47:22 compute-0 ceph-mon[75071]: 3.1 scrub ok
Dec 13 03:47:22 compute-0 python3.9[100637]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:47:22 compute-0 sudo[100635]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:22 compute-0 sudo[100787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttpjhewshygpeoecfmlsvoajkxfodhsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597642.3760362-77-177590735915034/AnsiballZ_file.py'
Dec 13 03:47:22 compute-0 sudo[100787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:47:22 compute-0 python3.9[100789]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:47:22 compute-0 sudo[100787]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Dec 13 03:47:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Dec 13 03:47:23 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Dec 13 03:47:23 compute-0 ceph-mon[75071]: 7.12 scrub starts
Dec 13 03:47:23 compute-0 ceph-mon[75071]: 7.12 scrub ok
Dec 13 03:47:23 compute-0 ceph-mon[75071]: 3.8 scrub starts
Dec 13 03:47:23 compute-0 ceph-mon[75071]: 3.8 scrub ok
Dec 13 03:47:23 compute-0 ceph-mon[75071]: osdmap e98: 3 total, 3 up, 3 in
Dec 13 03:47:23 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 99 pg[9.13( v 74'1443 (0'0,74'1443] local-lis/les=98/99 n=7 ec=48/36 lis/c=96/56 les/c/f=97/57/0 sis=98) [2] r=0 lpr=98 pi=[56,98)/1 crt=74'1443 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:23 compute-0 python3.9[100939]: ansible-ansible.builtin.service_facts Invoked
Dec 13 03:47:23 compute-0 network[100956]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:47:23 compute-0 network[100957]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:47:23 compute-0 network[100958]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:47:24 compute-0 ceph-mon[75071]: pgmap v199: 305 pgs: 1 unknown, 304 active+clean; 460 KiB data, 135 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:24 compute-0 ceph-mon[75071]: osdmap e99: 3 total, 3 up, 3 in
Dec 13 03:47:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 455 B/s wr, 28 op/s; 116 B/s, 2 objects/s recovering
Dec 13 03:47:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Dec 13 03:47:24 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 03:47:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Dec 13 03:47:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Dec 13 03:47:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 03:47:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Dec 13 03:47:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Dec 13 03:47:25 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 100 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=100 pruub=9.368121147s) [1] r=-1 lpr=100 pi=[56,100)/1 crt=43'1441 active pruub 176.310806274s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:25 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 100 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=100 pruub=9.367987633s) [1] r=-1 lpr=100 pi=[56,100)/1 crt=43'1441 unknown NOTIFY pruub 176.310806274s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:25 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 100 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=100) [1] r=0 lpr=100 pi=[56,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:25 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Dec 13 03:47:25 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Dec 13 03:47:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Dec 13 03:47:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Dec 13 03:47:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Dec 13 03:47:26 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 101 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=101) [1]/[0] r=0 lpr=101 pi=[56,101)/1 crt=43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:26 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 101 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=101) [1]/[0] r=0 lpr=101 pi=[56,101)/1 crt=43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[56,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:26 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 101 pg[9.15( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=101) [1]/[0] r=-1 lpr=101 pi=[56,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:26 compute-0 ceph-mon[75071]: pgmap v201: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 455 B/s wr, 28 op/s; 116 B/s, 2 objects/s recovering
Dec 13 03:47:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Dec 13 03:47:26 compute-0 ceph-mon[75071]: osdmap e100: 3 total, 3 up, 3 in
Dec 13 03:47:26 compute-0 ceph-mon[75071]: 7.6 scrub starts
Dec 13 03:47:26 compute-0 ceph-mon[75071]: 7.6 scrub ok
Dec 13 03:47:26 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Dec 13 03:47:26 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Dec 13 03:47:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 457 B/s wr, 28 op/s; 117 B/s, 2 objects/s recovering
Dec 13 03:47:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Dec 13 03:47:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 03:47:26 compute-0 python3.9[101218]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:47:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Dec 13 03:47:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 03:47:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Dec 13 03:47:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Dec 13 03:47:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 102 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=102 pruub=12.675521851s) [0] r=-1 lpr=102 pi=[67,102)/1 crt=43'1441 active pruub 169.519699097s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 102 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=102 pruub=12.675471306s) [0] r=-1 lpr=102 pi=[67,102)/1 crt=43'1441 unknown NOTIFY pruub 169.519699097s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:27 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 102 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=102) [0] r=0 lpr=102 pi=[67,102)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:27 compute-0 ceph-mon[75071]: osdmap e101: 3 total, 3 up, 3 in
Dec 13 03:47:27 compute-0 ceph-mon[75071]: 8.9 scrub starts
Dec 13 03:47:27 compute-0 ceph-mon[75071]: 8.9 scrub ok
Dec 13 03:47:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Dec 13 03:47:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Dec 13 03:47:27 compute-0 ceph-mon[75071]: osdmap e102: 3 total, 3 up, 3 in
Dec 13 03:47:27 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 102 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=101/102 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=101) [1]/[0] async=[1] r=0 lpr=101 pi=[56,101)/1 crt=43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:27 compute-0 python3.9[101368]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:47:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Dec 13 03:47:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Dec 13 03:47:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Dec 13 03:47:27 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[67,103)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:27 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 103 pg[9.16( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=103) [0]/[2] r=-1 lpr=103 pi=[67,103)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:27 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 103 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=101/102 n=7 ec=48/36 lis/c=101/56 les/c/f=102/57/0 sis=103 pruub=15.199418068s) [1] async=[1] r=-1 lpr=103 pi=[56,103)/1 crt=43'1441 active pruub 185.038467407s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:27 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 103 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=101/102 n=7 ec=48/36 lis/c=101/56 les/c/f=102/57/0 sis=103 pruub=15.199347496s) [1] r=-1 lpr=103 pi=[56,103)/1 crt=43'1441 unknown NOTIFY pruub 185.038467407s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 103 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=103) [0]/[2] r=0 lpr=103 pi=[67,103)/1 crt=43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:27 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 103 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=103) [0]/[2] r=0 lpr=103 pi=[67,103)/1 crt=43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:27 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 103 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=101/56 les/c/f=102/57/0 sis=103) [1] r=0 lpr=103 pi=[56,103)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:27 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 103 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=101/56 les/c/f=102/57/0 sis=103) [1] r=0 lpr=103 pi=[56,103)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:28 compute-0 ceph-mon[75071]: pgmap v204: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 457 B/s wr, 28 op/s; 117 B/s, 2 objects/s recovering
Dec 13 03:47:28 compute-0 ceph-mon[75071]: osdmap e103: 3 total, 3 up, 3 in
Dec 13 03:47:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Dec 13 03:47:28 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 03:47:28 compute-0 python3.9[101522]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:47:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Dec 13 03:47:28 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 03:47:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Dec 13 03:47:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Dec 13 03:47:28 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 104 pg[9.15( v 43'1441 (0'0,43'1441] local-lis/les=103/104 n=7 ec=48/36 lis/c=101/56 les/c/f=102/57/0 sis=103) [1] r=0 lpr=103 pi=[56,103)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:29 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Dec 13 03:47:29 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Dec 13 03:47:29 compute-0 ceph-mon[75071]: osdmap e104: 3 total, 3 up, 3 in
Dec 13 03:47:29 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Dec 13 03:47:29 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Dec 13 03:47:29 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 104 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=103/104 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=103) [0]/[2] async=[0] r=0 lpr=103 pi=[67,103)/1 crt=43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:29 compute-0 sudo[101678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-parksdcnxliwvpowtdhevslavilnfadn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597649.2862458-125-137363833556973/AnsiballZ_setup.py'
Dec 13 03:47:29 compute-0 sudo[101678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:47:29 compute-0 python3.9[101680]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:47:30 compute-0 sudo[101678]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Dec 13 03:47:30 compute-0 ceph-mon[75071]: pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Dec 13 03:47:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Dec 13 03:47:30 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 105 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=103/104 n=7 ec=48/36 lis/c=103/67 les/c/f=104/68/0 sis=105 pruub=15.210860252s) [0] async=[0] r=-1 lpr=105 pi=[67,105)/1 crt=43'1441 active pruub 175.138153076s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:30 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 105 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=103/104 n=7 ec=48/36 lis/c=103/67 les/c/f=104/68/0 sis=105 pruub=15.210775375s) [0] r=-1 lpr=105 pi=[67,105)/1 crt=43'1441 unknown NOTIFY pruub 175.138153076s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:30 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 105 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=103/67 les/c/f=104/68/0 sis=105) [0] r=0 lpr=105 pi=[67,105)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:30 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 105 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=103/67 les/c/f=104/68/0 sis=105) [0] r=0 lpr=105 pi=[67,105)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v210: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 1 peering, 302 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/279 objects misplaced (2.151%); 27 B/s, 1 objects/s recovering
Dec 13 03:47:30 compute-0 sudo[101762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nploodrhkcumszojmzszzveooicoyvst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597649.2862458-125-137363833556973/AnsiballZ_dnf.py'
Dec 13 03:47:30 compute-0 sudo[101762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:47:30 compute-0 python3.9[101764]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:47:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Dec 13 03:47:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Dec 13 03:47:31 compute-0 ceph-mon[75071]: 7.1a scrub starts
Dec 13 03:47:31 compute-0 ceph-mon[75071]: 7.1a scrub ok
Dec 13 03:47:31 compute-0 ceph-mon[75071]: osdmap e105: 3 total, 3 up, 3 in
Dec 13 03:47:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Dec 13 03:47:31 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 106 pg[9.16( v 43'1441 (0'0,43'1441] local-lis/les=105/106 n=7 ec=48/36 lis/c=103/67 les/c/f=104/68/0 sis=105) [0] r=0 lpr=105 pi=[67,105)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:31 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Dec 13 03:47:31 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Dec 13 03:47:31 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Dec 13 03:47:31 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Dec 13 03:47:31 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.f scrub starts
Dec 13 03:47:31 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.f scrub ok
Dec 13 03:47:32 compute-0 ceph-mon[75071]: pgmap v210: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 1 peering, 302 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/279 objects misplaced (2.151%); 27 B/s, 1 objects/s recovering
Dec 13 03:47:32 compute-0 ceph-mon[75071]: osdmap e106: 3 total, 3 up, 3 in
Dec 13 03:47:32 compute-0 ceph-mon[75071]: 10.2 scrub starts
Dec 13 03:47:32 compute-0 ceph-mon[75071]: 10.2 scrub ok
Dec 13 03:47:32 compute-0 ceph-mon[75071]: 2.f scrub starts
Dec 13 03:47:32 compute-0 ceph-mon[75071]: 2.f scrub ok
Dec 13 03:47:32 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Dec 13 03:47:32 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Dec 13 03:47:32 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.c scrub starts
Dec 13 03:47:32 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.c scrub ok
Dec 13 03:47:32 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Dec 13 03:47:32 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Dec 13 03:47:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 1 peering, 302 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/279 objects misplaced (2.151%); 24 B/s, 1 objects/s recovering
Dec 13 03:47:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:33 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.a scrub starts
Dec 13 03:47:33 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.a scrub ok
Dec 13 03:47:33 compute-0 ceph-mon[75071]: 3.7 scrub starts
Dec 13 03:47:33 compute-0 ceph-mon[75071]: 3.7 scrub ok
Dec 13 03:47:33 compute-0 ceph-mon[75071]: 2.9 scrub starts
Dec 13 03:47:33 compute-0 ceph-mon[75071]: 2.9 scrub ok
Dec 13 03:47:33 compute-0 ceph-mon[75071]: 10.7 scrub starts
Dec 13 03:47:33 compute-0 ceph-mon[75071]: 10.7 scrub ok
Dec 13 03:47:33 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Dec 13 03:47:33 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Dec 13 03:47:34 compute-0 ceph-mon[75071]: 7.c scrub starts
Dec 13 03:47:34 compute-0 ceph-mon[75071]: 7.c scrub ok
Dec 13 03:47:34 compute-0 ceph-mon[75071]: pgmap v212: 305 pgs: 1 activating+remapped, 1 active+clean+scrubbing, 1 peering, 302 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 6/279 objects misplaced (2.151%); 24 B/s, 1 objects/s recovering
Dec 13 03:47:34 compute-0 ceph-mon[75071]: 2.a scrub starts
Dec 13 03:47:34 compute-0 ceph-mon[75071]: 2.a scrub ok
Dec 13 03:47:34 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Dec 13 03:47:34 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Dec 13 03:47:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Dec 13 03:47:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Dec 13 03:47:34 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 03:47:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Dec 13 03:47:35 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 03:47:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Dec 13 03:47:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Dec 13 03:47:35 compute-0 ceph-mon[75071]: 7.1 scrub starts
Dec 13 03:47:35 compute-0 ceph-mon[75071]: 7.1 scrub ok
Dec 13 03:47:35 compute-0 ceph-mon[75071]: 3.5 scrub starts
Dec 13 03:47:35 compute-0 ceph-mon[75071]: 3.5 scrub ok
Dec 13 03:47:35 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Dec 13 03:47:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Dec 13 03:47:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Dec 13 03:47:36 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Dec 13 03:47:36 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Dec 13 03:47:36 compute-0 ceph-mon[75071]: pgmap v213: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 1 objects/s recovering
Dec 13 03:47:36 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Dec 13 03:47:36 compute-0 ceph-mon[75071]: osdmap e107: 3 total, 3 up, 3 in
Dec 13 03:47:36 compute-0 ceph-mon[75071]: 2.8 scrub starts
Dec 13 03:47:36 compute-0 ceph-mon[75071]: 2.8 scrub ok
Dec 13 03:47:36 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Dec 13 03:47:36 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Dec 13 03:47:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Dec 13 03:47:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Dec 13 03:47:36 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 03:47:37 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Dec 13 03:47:37 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Dec 13 03:47:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Dec 13 03:47:37 compute-0 ceph-mon[75071]: 5.1 scrub starts
Dec 13 03:47:37 compute-0 ceph-mon[75071]: 5.1 scrub ok
Dec 13 03:47:37 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Dec 13 03:47:37 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 03:47:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Dec 13 03:47:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Dec 13 03:47:37 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 108 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=108 pruub=13.188622475s) [2] r=-1 lpr=108 pi=[56,108)/1 crt=74'1444 lcod 74'1444 active pruub 192.311096191s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:37 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 108 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=108 pruub=13.188378334s) [2] r=-1 lpr=108 pi=[56,108)/1 crt=74'1444 lcod 74'1444 unknown NOTIFY pruub 192.311096191s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:37 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=108) [2] r=0 lpr=108 pi=[56,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:37 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.d scrub starts
Dec 13 03:47:37 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.d scrub ok
Dec 13 03:47:37 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Dec 13 03:47:37 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Dec 13 03:47:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Dec 13 03:47:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Dec 13 03:47:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Dec 13 03:47:37 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=109) [2]/[0] r=-1 lpr=109 pi=[56,109)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:37 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 109 pg[9.19( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=109) [2]/[0] r=-1 lpr=109 pi=[56,109)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:37 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 109 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=109) [2]/[0] r=0 lpr=109 pi=[56,109)/1 crt=74'1444 lcod 74'1444 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:37 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 109 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=56/57 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=109) [2]/[0] r=0 lpr=109 pi=[56,109)/1 crt=74'1444 lcod 74'1444 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:38 compute-0 ceph-mon[75071]: 8.2 scrub starts
Dec 13 03:47:38 compute-0 ceph-mon[75071]: 8.2 scrub ok
Dec 13 03:47:38 compute-0 ceph-mon[75071]: pgmap v215: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 0 objects/s recovering
Dec 13 03:47:38 compute-0 ceph-mon[75071]: 10.13 scrub starts
Dec 13 03:47:38 compute-0 ceph-mon[75071]: 10.13 scrub ok
Dec 13 03:47:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Dec 13 03:47:38 compute-0 ceph-mon[75071]: osdmap e108: 3 total, 3 up, 3 in
Dec 13 03:47:38 compute-0 ceph-mon[75071]: 3.3 scrub starts
Dec 13 03:47:38 compute-0 ceph-mon[75071]: 3.3 scrub ok
Dec 13 03:47:38 compute-0 ceph-mon[75071]: osdmap e109: 3 total, 3 up, 3 in
Dec 13 03:47:38 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Dec 13 03:47:38 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Dec 13 03:47:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v218: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 13 03:47:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Dec 13 03:47:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Dec 13 03:47:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Dec 13 03:47:39 compute-0 ceph-mon[75071]: 8.d scrub starts
Dec 13 03:47:39 compute-0 ceph-mon[75071]: 8.d scrub ok
Dec 13 03:47:39 compute-0 ceph-mon[75071]: 7.2 scrub starts
Dec 13 03:47:39 compute-0 ceph-mon[75071]: osdmap e110: 3 total, 3 up, 3 in
Dec 13 03:47:39 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Dec 13 03:47:39 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Dec 13 03:47:39 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Dec 13 03:47:39 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 110 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=109/110 n=7 ec=48/36 lis/c=56/56 les/c/f=57/57/0 sis=109) [2]/[0] async=[2] r=0 lpr=109 pi=[56,109)/1 crt=74'1445 lcod 74'1444 mlcod 0'0 active+remapped mbc={255={(0+1)=13}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:39 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Dec 13 03:47:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Dec 13 03:47:40 compute-0 ceph-mon[75071]: 7.2 scrub ok
Dec 13 03:47:40 compute-0 ceph-mon[75071]: pgmap v218: 305 pgs: 1 remapped+peering, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Dec 13 03:47:40 compute-0 ceph-mon[75071]: 5.1a scrub starts
Dec 13 03:47:40 compute-0 ceph-mon[75071]: 5.1a scrub ok
Dec 13 03:47:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Dec 13 03:47:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Dec 13 03:47:40 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 111 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=109/110 n=7 ec=48/36 lis/c=109/56 les/c/f=110/57/0 sis=111 pruub=15.015416145s) [2] async=[2] r=-1 lpr=111 pi=[56,111)/1 crt=74'1445 lcod 74'1444 active pruub 197.175460815s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:40 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 111 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=109/110 n=7 ec=48/36 lis/c=109/56 les/c/f=110/57/0 sis=111 pruub=15.015098572s) [2] r=-1 lpr=111 pi=[56,111)/1 crt=74'1445 lcod 74'1444 unknown NOTIFY pruub 197.175460815s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:40 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 111 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=0/0 n=7 ec=48/36 lis/c=109/56 les/c/f=110/57/0 sis=111) [2] r=0 lpr=111 pi=[56,111)/1 pct=0'0 crt=74'1445 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:40 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 111 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=0/0 n=7 ec=48/36 lis/c=109/56 les/c/f=110/57/0 sis=111) [2] r=0 lpr=111 pi=[56,111)/1 crt=74'1445 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:40 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Dec 13 03:47:40 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Dec 13 03:47:40 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Dec 13 03:47:40 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Dec 13 03:47:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:47:40
Dec 13 03:47:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:47:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Dec 13 03:47:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v221: 305 pgs: 1 remapped+peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Dec 13 03:47:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Dec 13 03:47:41 compute-0 ceph-mon[75071]: 4.1 scrub starts
Dec 13 03:47:41 compute-0 ceph-mon[75071]: 4.1 scrub ok
Dec 13 03:47:41 compute-0 ceph-mon[75071]: osdmap e111: 3 total, 3 up, 3 in
Dec 13 03:47:41 compute-0 ceph-mon[75071]: 3.6 scrub starts
Dec 13 03:47:41 compute-0 ceph-mon[75071]: 3.6 scrub ok
Dec 13 03:47:41 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 112 pg[9.19( v 74'1445 (0'0,74'1445] local-lis/les=111/112 n=7 ec=48/36 lis/c=109/56 les/c/f=110/57/0 sis=111) [2] r=0 lpr=111 pi=[56,111)/1 crt=74'1445 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Dec 13 03:47:41 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.e scrub starts
Dec 13 03:47:41 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.e scrub ok
Dec 13 03:47:42 compute-0 ceph-mon[75071]: 7.5 scrub starts
Dec 13 03:47:42 compute-0 ceph-mon[75071]: 7.5 scrub ok
Dec 13 03:47:42 compute-0 ceph-mon[75071]: pgmap v221: 305 pgs: 1 remapped+peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:42 compute-0 ceph-mon[75071]: osdmap e112: 3 total, 3 up, 3 in
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:47:42 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.e scrub starts
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:47:42 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.e scrub ok
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:47:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 remapped+peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:43 compute-0 ceph-mon[75071]: 4.e scrub starts
Dec 13 03:47:43 compute-0 ceph-mon[75071]: 4.e scrub ok
Dec 13 03:47:43 compute-0 ceph-mon[75071]: 7.e scrub starts
Dec 13 03:47:43 compute-0 ceph-mon[75071]: 7.e scrub ok
Dec 13 03:47:44 compute-0 ceph-mon[75071]: pgmap v223: 305 pgs: 1 remapped+peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v224: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 13 03:47:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Dec 13 03:47:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 03:47:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Dec 13 03:47:45 compute-0 ceph-mon[75071]: pgmap v224: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 69 B/s, 2 objects/s recovering
Dec 13 03:47:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Dec 13 03:47:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 03:47:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Dec 13 03:47:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Dec 13 03:47:45 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Dec 13 03:47:45 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Dec 13 03:47:46 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Dec 13 03:47:46 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Dec 13 03:47:46 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Dec 13 03:47:46 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Dec 13 03:47:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Dec 13 03:47:46 compute-0 ceph-mon[75071]: osdmap e113: 3 total, 3 up, 3 in
Dec 13 03:47:46 compute-0 ceph-mon[75071]: 7.3 scrub starts
Dec 13 03:47:46 compute-0 ceph-mon[75071]: 7.3 scrub ok
Dec 13 03:47:46 compute-0 ceph-mon[75071]: 7.8 scrub starts
Dec 13 03:47:46 compute-0 ceph-mon[75071]: 7.8 scrub ok
Dec 13 03:47:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 2 objects/s recovering
Dec 13 03:47:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Dec 13 03:47:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 03:47:47 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Dec 13 03:47:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Dec 13 03:47:47 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Dec 13 03:47:47 compute-0 ceph-mon[75071]: 5.19 scrub starts
Dec 13 03:47:47 compute-0 ceph-mon[75071]: 5.19 scrub ok
Dec 13 03:47:47 compute-0 ceph-mon[75071]: pgmap v226: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 66 B/s, 2 objects/s recovering
Dec 13 03:47:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Dec 13 03:47:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 03:47:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Dec 13 03:47:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Dec 13 03:47:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e114 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Dec 13 03:47:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Dec 13 03:47:48 compute-0 ceph-mon[75071]: 2.6 scrub starts
Dec 13 03:47:48 compute-0 ceph-mon[75071]: 2.6 scrub ok
Dec 13 03:47:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Dec 13 03:47:48 compute-0 ceph-mon[75071]: osdmap e114: 3 total, 3 up, 3 in
Dec 13 03:47:48 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.e scrub starts
Dec 13 03:47:48 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.e scrub ok
Dec 13 03:47:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 1 objects/s recovering
Dec 13 03:47:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Dec 13 03:47:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 03:47:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Dec 13 03:47:49 compute-0 ceph-mon[75071]: 5.18 scrub starts
Dec 13 03:47:49 compute-0 ceph-mon[75071]: 5.18 scrub ok
Dec 13 03:47:49 compute-0 ceph-mon[75071]: 8.e scrub starts
Dec 13 03:47:49 compute-0 ceph-mon[75071]: 8.e scrub ok
Dec 13 03:47:49 compute-0 ceph-mon[75071]: pgmap v228: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 57 B/s, 1 objects/s recovering
Dec 13 03:47:49 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Dec 13 03:47:49 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 03:47:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Dec 13 03:47:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Dec 13 03:47:49 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 115 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=85/86 n=7 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=115 pruub=15.354882240s) [0] r=-1 lpr=115 pi=[85,115)/1 crt=74'1445 active pruub 194.864974976s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:49 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 115 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=85/86 n=7 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=115 pruub=15.354720116s) [0] r=-1 lpr=115 pi=[85,115)/1 crt=74'1445 unknown NOTIFY pruub 194.864974976s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:49 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 115 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=115) [0] r=0 lpr=115 pi=[85,115)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:50 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Dec 13 03:47:50 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Dec 13 03:47:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Dec 13 03:47:50 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Dec 13 03:47:50 compute-0 ceph-mon[75071]: osdmap e115: 3 total, 3 up, 3 in
Dec 13 03:47:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Dec 13 03:47:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Dec 13 03:47:50 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 116 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=85/86 n=7 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=116) [0]/[2] r=0 lpr=116 pi=[85,116)/1 crt=74'1445 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:50 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 116 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=85/86 n=7 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=116) [0]/[2] r=0 lpr=116 pi=[85,116)/1 crt=74'1445 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 116 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[85,116)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:50 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 116 pg[9.1c( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=116) [0]/[2] r=-1 lpr=116 pi=[85,116)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Dec 13 03:47:50 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 03:47:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Dec 13 03:47:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Dec 13 03:47:51 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.e scrub starts
Dec 13 03:47:51 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.e scrub ok
Dec 13 03:47:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Dec 13 03:47:51 compute-0 ceph-mon[75071]: 5.1d scrub starts
Dec 13 03:47:51 compute-0 ceph-mon[75071]: 5.1d scrub ok
Dec 13 03:47:51 compute-0 ceph-mon[75071]: osdmap e116: 3 total, 3 up, 3 in
Dec 13 03:47:51 compute-0 ceph-mon[75071]: pgmap v231: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:51 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Dec 13 03:47:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 03:47:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Dec 13 03:47:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Dec 13 03:47:51 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.b scrub starts
Dec 13 03:47:51 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 117 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=116/117 n=7 ec=48/36 lis/c=85/85 les/c/f=86/86/0 sis=116) [0]/[2] async=[0] r=0 lpr=116 pi=[85,116)/1 crt=74'1445 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:51 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.b scrub ok
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4088850449233376e-06 of space, bias 4.0, pg target 0.0016906620539080051 quantized to 16 (current 16)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:47:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Dec 13 03:47:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Dec 13 03:47:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Dec 13 03:47:52 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 118 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=0/0 n=7 ec=48/36 lis/c=116/85 les/c/f=117/86/0 sis=118) [0] r=0 lpr=118 pi=[85,118)/1 pct=0'0 crt=74'1445 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:52 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 118 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=0/0 n=7 ec=48/36 lis/c=116/85 les/c/f=117/86/0 sis=118) [0] r=0 lpr=118 pi=[85,118)/1 crt=74'1445 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:52 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 118 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=116/117 n=7 ec=48/36 lis/c=116/85 les/c/f=117/86/0 sis=118 pruub=15.141736031s) [0] async=[0] r=-1 lpr=118 pi=[85,118)/1 crt=74'1445 active pruub 197.303207397s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:52 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 118 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=116/117 n=7 ec=48/36 lis/c=116/85 les/c/f=117/86/0 sis=118 pruub=15.141664505s) [0] r=-1 lpr=118 pi=[85,118)/1 crt=74'1445 unknown NOTIFY pruub 197.303207397s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:52 compute-0 ceph-mon[75071]: 10.11 scrub starts
Dec 13 03:47:52 compute-0 ceph-mon[75071]: 10.11 scrub ok
Dec 13 03:47:52 compute-0 ceph-mon[75071]: 3.e scrub starts
Dec 13 03:47:52 compute-0 ceph-mon[75071]: 3.e scrub ok
Dec 13 03:47:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Dec 13 03:47:52 compute-0 ceph-mon[75071]: osdmap e117: 3 total, 3 up, 3 in
Dec 13 03:47:52 compute-0 ceph-mon[75071]: 2.b scrub starts
Dec 13 03:47:52 compute-0 ceph-mon[75071]: 2.b scrub ok
Dec 13 03:47:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Dec 13 03:47:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 03:47:52 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.c scrub starts
Dec 13 03:47:52 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.c scrub ok
Dec 13 03:47:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:53 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.a scrub starts
Dec 13 03:47:53 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.a scrub ok
Dec 13 03:47:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Dec 13 03:47:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 03:47:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Dec 13 03:47:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Dec 13 03:47:53 compute-0 ceph-mon[75071]: osdmap e118: 3 total, 3 up, 3 in
Dec 13 03:47:53 compute-0 ceph-mon[75071]: pgmap v234: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:47:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Dec 13 03:47:53 compute-0 ceph-mon[75071]: 8.c scrub starts
Dec 13 03:47:53 compute-0 ceph-mon[75071]: 8.c scrub ok
Dec 13 03:47:53 compute-0 ceph-mon[75071]: 4.a scrub starts
Dec 13 03:47:53 compute-0 ceph-mon[75071]: 4.a scrub ok
Dec 13 03:47:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Dec 13 03:47:53 compute-0 ceph-mon[75071]: osdmap e119: 3 total, 3 up, 3 in
Dec 13 03:47:53 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 119 pg[9.1c( v 74'1445 (0'0,74'1445] local-lis/les=118/119 n=7 ec=48/36 lis/c=116/85 les/c/f=117/86/0 sis=118) [0] r=0 lpr=118 pi=[85,118)/1 crt=74'1445 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 119 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=119 pruub=9.490581512s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=70'1442 lcod 70'1442 active pruub 193.518218994s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 119 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=119 pruub=9.490454674s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=70'1442 lcod 70'1442 unknown NOTIFY pruub 193.518218994s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:54 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 119 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Dec 13 03:47:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Dec 13 03:47:54 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Dec 13 03:47:54 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 120 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[67,120)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:54 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 120 pg[9.1e( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=120) [0]/[2] r=-1 lpr=120 pi=[67,120)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 120 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=120) [0]/[2] r=0 lpr=120 pi=[67,120)/1 crt=70'1442 lcod 70'1442 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:54 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 120 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=67/68 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=120) [0]/[2] r=0 lpr=120 pi=[67,120)/1 crt=70'1442 lcod 70'1442 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:54 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.a scrub starts
Dec 13 03:47:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Dec 13 03:47:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Dec 13 03:47:54 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:47:54 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.a scrub ok
Dec 13 03:47:54 compute-0 sudo[101891]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:47:54 compute-0 sudo[101891]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:54 compute-0 sudo[101891]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:54 compute-0 sudo[101916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:47:54 compute-0 sudo[101916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:55 compute-0 sudo[101916]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Dec 13 03:47:55 compute-0 ceph-mon[75071]: osdmap e120: 3 total, 3 up, 3 in
Dec 13 03:47:55 compute-0 ceph-mon[75071]: 3.a scrub starts
Dec 13 03:47:55 compute-0 ceph-mon[75071]: pgmap v237: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Dec 13 03:47:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Dec 13 03:47:55 compute-0 ceph-mon[75071]: 3.a scrub ok
Dec 13 03:47:55 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 121 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=72/73 n=7 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=121 pruub=8.245008469s) [1] r=-1 lpr=121 pi=[72,121)/1 crt=43'1441 active pruub 193.481155396s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:55 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 121 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=72/73 n=7 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=121 pruub=8.244970322s) [1] r=-1 lpr=121 pi=[72,121)/1 crt=43'1441 unknown NOTIFY pruub 193.481155396s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:55 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 121 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=121) [1] r=0 lpr=121 pi=[72,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Dec 13 03:47:55 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 121 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=120/121 n=7 ec=48/36 lis/c=67/67 les/c/f=68/68/0 sis=120) [0]/[2] async=[0] r=0 lpr=120 pi=[67,120)/1 crt=74'1443 lcod 70'1442 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:47:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:47:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:47:55 compute-0 sudo[101972]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:47:55 compute-0 sudo[101972]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:55 compute-0 sudo[101972]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:55 compute-0 sudo[101997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:47:55 compute-0 sudo[101997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:55 compute-0 podman[102034]: 2025-12-13 03:47:55.842280617 +0000 UTC m=+0.036378584 container create 5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclaren, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:47:55 compute-0 systemd[1]: Started libpod-conmon-5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af.scope.
Dec 13 03:47:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:47:55 compute-0 podman[102034]: 2025-12-13 03:47:55.918684125 +0000 UTC m=+0.112782112 container init 5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclaren, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:47:55 compute-0 podman[102034]: 2025-12-13 03:47:55.824805996 +0000 UTC m=+0.018903993 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:55 compute-0 podman[102034]: 2025-12-13 03:47:55.925391969 +0000 UTC m=+0.119489936 container start 5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:47:55 compute-0 podman[102034]: 2025-12-13 03:47:55.929128882 +0000 UTC m=+0.123226849 container attach 5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclaren, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:47:55 compute-0 condescending_mclaren[102050]: 167 167
Dec 13 03:47:55 compute-0 systemd[1]: libpod-5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af.scope: Deactivated successfully.
Dec 13 03:47:55 compute-0 podman[102034]: 2025-12-13 03:47:55.932836135 +0000 UTC m=+0.126934112 container died 5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclaren, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:47:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-41c15895899a1067e65fddc522004646861f22c0444e8b7298d4639947dcf6cf-merged.mount: Deactivated successfully.
Dec 13 03:47:55 compute-0 podman[102034]: 2025-12-13 03:47:55.974618717 +0000 UTC m=+0.168716684 container remove 5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:47:55 compute-0 systemd[1]: libpod-conmon-5186cc19ff8cc70394785d15881d0475d9e9788e24543bfc89822a5472c417af.scope: Deactivated successfully.
Dec 13 03:47:56 compute-0 podman[102077]: 2025-12-13 03:47:56.107594893 +0000 UTC m=+0.040176478 container create b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:47:56 compute-0 systemd[1]: Started libpod-conmon-b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a.scope.
Dec 13 03:47:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8272b3e1585b8f819b99afc99d8498845a5932f1ab57eb2667158682a2d9827/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8272b3e1585b8f819b99afc99d8498845a5932f1ab57eb2667158682a2d9827/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8272b3e1585b8f819b99afc99d8498845a5932f1ab57eb2667158682a2d9827/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8272b3e1585b8f819b99afc99d8498845a5932f1ab57eb2667158682a2d9827/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8272b3e1585b8f819b99afc99d8498845a5932f1ab57eb2667158682a2d9827/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:56 compute-0 podman[102077]: 2025-12-13 03:47:56.087628923 +0000 UTC m=+0.020210528 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:56 compute-0 podman[102077]: 2025-12-13 03:47:56.203758856 +0000 UTC m=+0.136340471 container init b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:47:56 compute-0 podman[102077]: 2025-12-13 03:47:56.210333607 +0000 UTC m=+0.142915192 container start b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:47:56 compute-0 podman[102077]: 2025-12-13 03:47:56.21410396 +0000 UTC m=+0.146685575 container attach b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 03:47:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Dec 13 03:47:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Dec 13 03:47:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Dec 13 03:47:56 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 122 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[72,122)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:56 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 122 pg[9.1f( empty local-lis/les=0/0 n=0 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=122) [1]/[2] r=-1 lpr=122 pi=[72,122)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:56 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 122 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=72/73 n=7 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=122) [1]/[2] r=0 lpr=122 pi=[72,122)/1 crt=43'1441 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:56 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 122 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=72/73 n=7 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=122) [1]/[2] r=0 lpr=122 pi=[72,122)/1 crt=43'1441 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:56 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 122 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=120/121 n=7 ec=48/36 lis/c=120/67 les/c/f=121/68/0 sis=122 pruub=14.994735718s) [0] async=[0] r=-1 lpr=122 pi=[67,122)/1 crt=74'1443 lcod 70'1442 active pruub 201.241607666s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:56 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 122 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=120/121 n=7 ec=48/36 lis/c=120/67 les/c/f=121/68/0 sis=122 pruub=14.994578362s) [0] r=-1 lpr=122 pi=[67,122)/1 crt=74'1443 lcod 70'1442 unknown NOTIFY pruub 201.241607666s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Dec 13 03:47:56 compute-0 ceph-mon[75071]: osdmap e121: 3 total, 3 up, 3 in
Dec 13 03:47:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:47:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:47:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:47:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:47:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:47:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:47:56 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 122 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=0/0 n=7 ec=48/36 lis/c=120/67 les/c/f=121/68/0 sis=122) [0] r=0 lpr=122 pi=[67,122)/1 pct=0'0 crt=74'1443 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:56 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 122 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=0/0 n=7 ec=48/36 lis/c=120/67 les/c/f=121/68/0 sis=122) [0] r=0 lpr=122 pi=[67,122)/1 crt=74'1443 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Dec 13 03:47:56 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Dec 13 03:47:56 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Dec 13 03:47:56 compute-0 cranky_yonath[102093]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:47:56 compute-0 cranky_yonath[102093]: --> All data devices are unavailable
Dec 13 03:47:56 compute-0 systemd[1]: libpod-b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a.scope: Deactivated successfully.
Dec 13 03:47:56 compute-0 podman[102077]: 2025-12-13 03:47:56.665104516 +0000 UTC m=+0.597686101 container died b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:47:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8272b3e1585b8f819b99afc99d8498845a5932f1ab57eb2667158682a2d9827-merged.mount: Deactivated successfully.
Dec 13 03:47:56 compute-0 podman[102077]: 2025-12-13 03:47:56.701259783 +0000 UTC m=+0.633841368 container remove b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:47:56 compute-0 systemd[1]: libpod-conmon-b61c006b511851fd3d90cf5e2ffb45f6006b8aee0778ea46585cbfde9dd8b97a.scope: Deactivated successfully.
Dec 13 03:47:56 compute-0 sudo[101997]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:56 compute-0 sudo[102135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:47:56 compute-0 sudo[102135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:56 compute-0 sudo[102135]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:56 compute-0 sudo[102166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:47:56 compute-0 sudo[102166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:57 compute-0 podman[102203]: 2025-12-13 03:47:57.102403194 +0000 UTC m=+0.039351036 container create 1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:47:57 compute-0 systemd[1]: Started libpod-conmon-1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f.scope.
Dec 13 03:47:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:47:57 compute-0 podman[102203]: 2025-12-13 03:47:57.179272734 +0000 UTC m=+0.116220586 container init 1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:47:57 compute-0 podman[102203]: 2025-12-13 03:47:57.083119482 +0000 UTC m=+0.020067354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:57 compute-0 podman[102203]: 2025-12-13 03:47:57.186111243 +0000 UTC m=+0.123059085 container start 1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:47:57 compute-0 podman[102203]: 2025-12-13 03:47:57.189213528 +0000 UTC m=+0.126161710 container attach 1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:47:57 compute-0 amazing_ishizaka[102219]: 167 167
Dec 13 03:47:57 compute-0 podman[102203]: 2025-12-13 03:47:57.191702017 +0000 UTC m=+0.128649859 container died 1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 03:47:57 compute-0 systemd[1]: libpod-1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f.scope: Deactivated successfully.
Dec 13 03:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce951326b54e93acd7a5e55869c34a79d1da0bc69534fc46a626aa8245a52ba2-merged.mount: Deactivated successfully.
Dec 13 03:47:57 compute-0 podman[102203]: 2025-12-13 03:47:57.224316126 +0000 UTC m=+0.161263968 container remove 1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:57 compute-0 systemd[1]: libpod-conmon-1af93231a6ce9dc20b3ce6852e9b8ff7ece33c7bd3ea8c0582aeec1d61bc8d1f.scope: Deactivated successfully.
Dec 13 03:47:57 compute-0 podman[102243]: 2025-12-13 03:47:57.367173436 +0000 UTC m=+0.041433334 container create eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tesla, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:47:57 compute-0 systemd[1]: Started libpod-conmon-eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee.scope.
Dec 13 03:47:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/604ed3c763ca0de2b0b2441444ca3e21b28e20c0cffbec3e2fac0df5a39e12a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/604ed3c763ca0de2b0b2441444ca3e21b28e20c0cffbec3e2fac0df5a39e12a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/604ed3c763ca0de2b0b2441444ca3e21b28e20c0cffbec3e2fac0df5a39e12a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/604ed3c763ca0de2b0b2441444ca3e21b28e20c0cffbec3e2fac0df5a39e12a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:57 compute-0 podman[102243]: 2025-12-13 03:47:57.439889041 +0000 UTC m=+0.114148939 container init eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:47:57 compute-0 podman[102243]: 2025-12-13 03:47:57.347152854 +0000 UTC m=+0.021412762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:57 compute-0 podman[102243]: 2025-12-13 03:47:57.445058713 +0000 UTC m=+0.119318621 container start eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tesla, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 03:47:57 compute-0 podman[102243]: 2025-12-13 03:47:57.447633644 +0000 UTC m=+0.121893542 container attach eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tesla, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:47:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Dec 13 03:47:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Dec 13 03:47:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Dec 13 03:47:57 compute-0 ceph-osd[85653]: osd.0 pg_epoch: 123 pg[9.1e( v 74'1443 (0'0,74'1443] local-lis/les=122/123 n=7 ec=48/36 lis/c=120/67 les/c/f=121/68/0 sis=122) [0] r=0 lpr=122 pi=[67,122)/1 crt=74'1443 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:57 compute-0 ceph-mon[75071]: osdmap e122: 3 total, 3 up, 3 in
Dec 13 03:47:57 compute-0 ceph-mon[75071]: pgmap v240: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Dec 13 03:47:57 compute-0 ceph-mon[75071]: 2.11 scrub starts
Dec 13 03:47:57 compute-0 ceph-mon[75071]: 2.11 scrub ok
Dec 13 03:47:57 compute-0 ceph-mon[75071]: osdmap e123: 3 total, 3 up, 3 in
Dec 13 03:47:57 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Dec 13 03:47:57 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]: {
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:     "0": [
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:         {
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "devices": [
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "/dev/loop3"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             ],
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_name": "ceph_lv0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_size": "21470642176",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "name": "ceph_lv0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "tags": {
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cluster_name": "ceph",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.crush_device_class": "",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.encrypted": "0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.objectstore": "bluestore",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osd_id": "0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.type": "block",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.vdo": "0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.with_tpm": "0"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             },
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "type": "block",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "vg_name": "ceph_vg0"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:         }
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:     ],
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:     "1": [
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:         {
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "devices": [
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "/dev/loop4"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             ],
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_name": "ceph_lv1",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_size": "21470642176",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "name": "ceph_lv1",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "tags": {
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cluster_name": "ceph",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.crush_device_class": "",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.encrypted": "0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.objectstore": "bluestore",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osd_id": "1",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.type": "block",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.vdo": "0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.with_tpm": "0"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             },
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "type": "block",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "vg_name": "ceph_vg1"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:         }
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:     ],
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:     "2": [
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:         {
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "devices": [
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "/dev/loop5"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             ],
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_name": "ceph_lv2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_size": "21470642176",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "name": "ceph_lv2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "tags": {
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.cluster_name": "ceph",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.crush_device_class": "",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.encrypted": "0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.objectstore": "bluestore",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osd_id": "2",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.type": "block",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.vdo": "0",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:                 "ceph.with_tpm": "0"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             },
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "type": "block",
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:             "vg_name": "ceph_vg2"
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:         }
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]:     ]
Dec 13 03:47:57 compute-0 hardcore_tesla[102260]: }
Dec 13 03:47:57 compute-0 systemd[1]: libpod-eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee.scope: Deactivated successfully.
Dec 13 03:47:57 compute-0 podman[102243]: 2025-12-13 03:47:57.72638129 +0000 UTC m=+0.400641188 container died eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tesla, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-604ed3c763ca0de2b0b2441444ca3e21b28e20c0cffbec3e2fac0df5a39e12a2-merged.mount: Deactivated successfully.
Dec 13 03:47:57 compute-0 podman[102243]: 2025-12-13 03:47:57.773973833 +0000 UTC m=+0.448233731 container remove eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_tesla, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:47:57 compute-0 systemd[1]: libpod-conmon-eb22d1589e5f01424b16ecc74cb04649255ba90e78e5f80b1b6fd909c2654fee.scope: Deactivated successfully.
Dec 13 03:47:57 compute-0 sudo[102166]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:57 compute-0 sudo[102283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:47:57 compute-0 sudo[102283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:57 compute-0 sudo[102283]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:57 compute-0 sudo[102308]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:47:57 compute-0 sudo[102308]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:47:58 compute-0 podman[102346]: 2025-12-13 03:47:58.207083176 +0000 UTC m=+0.033165396 container create cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mestorf, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:47:58 compute-0 systemd[1]: Started libpod-conmon-cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350.scope.
Dec 13 03:47:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:47:58 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Dec 13 03:47:58 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Dec 13 03:47:58 compute-0 podman[102346]: 2025-12-13 03:47:58.269820535 +0000 UTC m=+0.095902775 container init cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:47:58 compute-0 podman[102346]: 2025-12-13 03:47:58.276309594 +0000 UTC m=+0.102391814 container start cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mestorf, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:47:58 compute-0 podman[102346]: 2025-12-13 03:47:58.278662169 +0000 UTC m=+0.104744389 container attach cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:47:58 compute-0 thirsty_mestorf[102362]: 167 167
Dec 13 03:47:58 compute-0 systemd[1]: libpod-cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350.scope: Deactivated successfully.
Dec 13 03:47:58 compute-0 podman[102346]: 2025-12-13 03:47:58.280403107 +0000 UTC m=+0.106485327 container died cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mestorf, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:47:58 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 123 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=122/123 n=7 ec=48/36 lis/c=72/72 les/c/f=73/73/0 sis=122) [1]/[2] async=[1] r=0 lpr=122 pi=[72,122)/1 crt=43'1441 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:58 compute-0 podman[102346]: 2025-12-13 03:47:58.193276745 +0000 UTC m=+0.019358985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-f6ef0df27f9fc96372e052c75e337202282f3825547ba9017796f851cdd758a5-merged.mount: Deactivated successfully.
Dec 13 03:47:58 compute-0 podman[102346]: 2025-12-13 03:47:58.319637309 +0000 UTC m=+0.145719529 container remove cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_mestorf, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:47:58 compute-0 systemd[1]: libpod-conmon-cd5a5aa1982165f50ff86b1f1c9984a38782b97f2c44626608bfcad0fc271350.scope: Deactivated successfully.
Dec 13 03:47:58 compute-0 podman[102387]: 2025-12-13 03:47:58.459911457 +0000 UTC m=+0.035177072 container create f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Dec 13 03:47:58 compute-0 ceph-mon[75071]: 3.9 scrub starts
Dec 13 03:47:58 compute-0 ceph-mon[75071]: 3.9 scrub ok
Dec 13 03:47:58 compute-0 ceph-mon[75071]: 7.15 scrub starts
Dec 13 03:47:58 compute-0 ceph-mon[75071]: 7.15 scrub ok
Dec 13 03:47:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Dec 13 03:47:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Dec 13 03:47:58 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 124 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=122/123 n=7 ec=48/36 lis/c=122/72 les/c/f=123/73/0 sis=124 pruub=15.792729378s) [1] async=[1] r=-1 lpr=124 pi=[72,124)/1 crt=43'1441 active pruub 204.072570801s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:58 compute-0 ceph-osd[87731]: osd.2 pg_epoch: 124 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=122/123 n=7 ec=48/36 lis/c=122/72 les/c/f=123/73/0 sis=124 pruub=15.792666435s) [1] r=-1 lpr=124 pi=[72,124)/1 crt=43'1441 unknown NOTIFY pruub 204.072570801s@ mbc={}] state<Start>: transitioning to Stray
Dec 13 03:47:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Dec 13 03:47:58 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 124 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=122/72 les/c/f=123/73/0 sis=124) [1] r=0 lpr=124 pi=[72,124)/1 pct=0'0 crt=43'1441 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Dec 13 03:47:58 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 124 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=0/0 n=7 ec=48/36 lis/c=122/72 les/c/f=123/73/0 sis=124) [1] r=0 lpr=124 pi=[72,124)/1 crt=43'1441 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Dec 13 03:47:58 compute-0 systemd[1]: Started libpod-conmon-f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d.scope.
Dec 13 03:47:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab79bd59d3756cf469fa1a3a99177af5c398ac96615a982e25ab9c4437fd4a71/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab79bd59d3756cf469fa1a3a99177af5c398ac96615a982e25ab9c4437fd4a71/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab79bd59d3756cf469fa1a3a99177af5c398ac96615a982e25ab9c4437fd4a71/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab79bd59d3756cf469fa1a3a99177af5c398ac96615a982e25ab9c4437fd4a71/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:47:58 compute-0 podman[102387]: 2025-12-13 03:47:58.445265153 +0000 UTC m=+0.020530798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:47:58 compute-0 podman[102387]: 2025-12-13 03:47:58.541800365 +0000 UTC m=+0.117065990 container init f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_curie, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:47:58 compute-0 podman[102387]: 2025-12-13 03:47:58.54707594 +0000 UTC m=+0.122341555 container start f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 03:47:58 compute-0 podman[102387]: 2025-12-13 03:47:58.550434363 +0000 UTC m=+0.125699978 container attach f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:47:59 compute-0 lvm[102487]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:47:59 compute-0 lvm[102488]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:47:59 compute-0 lvm[102488]: VG ceph_vg1 finished
Dec 13 03:47:59 compute-0 lvm[102487]: VG ceph_vg0 finished
Dec 13 03:47:59 compute-0 lvm[102490]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:47:59 compute-0 lvm[102490]: VG ceph_vg2 finished
Dec 13 03:47:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Dec 13 03:47:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Dec 13 03:47:59 compute-0 suspicious_curie[102404]: {}
Dec 13 03:47:59 compute-0 systemd[1]: libpod-f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d.scope: Deactivated successfully.
Dec 13 03:47:59 compute-0 systemd[1]: libpod-f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d.scope: Consumed 1.356s CPU time.
Dec 13 03:47:59 compute-0 podman[102387]: 2025-12-13 03:47:59.391074083 +0000 UTC m=+0.966339728 container died f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:47:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab79bd59d3756cf469fa1a3a99177af5c398ac96615a982e25ab9c4437fd4a71-merged.mount: Deactivated successfully.
Dec 13 03:47:59 compute-0 podman[102387]: 2025-12-13 03:47:59.445700299 +0000 UTC m=+1.020965924 container remove f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_curie, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:47:59 compute-0 systemd[1]: libpod-conmon-f49e6f5df49e18cc39cbab9b61eb6cbe5ab00f3084e96975d582465f968af95d.scope: Deactivated successfully.
Dec 13 03:47:59 compute-0 sudo[102308]: pam_unix(sudo:session): session closed for user root
Dec 13 03:47:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Dec 13 03:47:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:47:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Dec 13 03:47:59 compute-0 ceph-mon[75071]: osdmap e124: 3 total, 3 up, 3 in
Dec 13 03:47:59 compute-0 ceph-mon[75071]: pgmap v243: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Dec 13 03:47:59 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:47:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Dec 13 03:47:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:47:59 compute-0 ceph-osd[86683]: osd.1 pg_epoch: 125 pg[9.1f( v 43'1441 (0'0,43'1441] local-lis/les=124/125 n=7 ec=48/36 lis/c=122/72 les/c/f=123/73/0 sis=124) [1] r=0 lpr=124 pi=[72,124)/1 crt=43'1441 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Dec 13 03:47:59 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:47:59 compute-0 sudo[102505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:47:59 compute-0 sudo[102505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:47:59 compute-0 sudo[102505]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 64 B/s, 3 objects/s recovering
Dec 13 03:48:00 compute-0 ceph-mon[75071]: 10.10 scrub starts
Dec 13 03:48:00 compute-0 ceph-mon[75071]: 10.10 scrub ok
Dec 13 03:48:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:48:00 compute-0 ceph-mon[75071]: osdmap e125: 3 total, 3 up, 3 in
Dec 13 03:48:00 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:48:00 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Dec 13 03:48:00 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Dec 13 03:48:01 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Dec 13 03:48:01 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Dec 13 03:48:01 compute-0 ceph-mon[75071]: pgmap v245: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 64 B/s, 3 objects/s recovering
Dec 13 03:48:01 compute-0 ceph-mon[75071]: 2.16 scrub starts
Dec 13 03:48:01 compute-0 ceph-mon[75071]: 2.16 scrub ok
Dec 13 03:48:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 2 objects/s recovering
Dec 13 03:48:02 compute-0 ceph-mon[75071]: 2.7 scrub starts
Dec 13 03:48:02 compute-0 ceph-mon[75071]: 2.7 scrub ok
Dec 13 03:48:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:03 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Dec 13 03:48:03 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Dec 13 03:48:03 compute-0 ceph-mon[75071]: pgmap v246: 305 pgs: 1 peering, 304 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 43 B/s, 2 objects/s recovering
Dec 13 03:48:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Dec 13 03:48:04 compute-0 ceph-mon[75071]: 2.4 scrub starts
Dec 13 03:48:04 compute-0 ceph-mon[75071]: 2.4 scrub ok
Dec 13 03:48:04 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Dec 13 03:48:04 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Dec 13 03:48:05 compute-0 ceph-mon[75071]: pgmap v247: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Dec 13 03:48:05 compute-0 ceph-mon[75071]: 2.2 scrub starts
Dec 13 03:48:05 compute-0 ceph-mon[75071]: 2.2 scrub ok
Dec 13 03:48:06 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.f scrub starts
Dec 13 03:48:06 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.f scrub ok
Dec 13 03:48:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec 13 03:48:07 compute-0 ceph-mon[75071]: 10.f scrub starts
Dec 13 03:48:07 compute-0 ceph-mon[75071]: 10.f scrub ok
Dec 13 03:48:07 compute-0 ceph-mon[75071]: pgmap v248: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 0 objects/s recovering
Dec 13 03:48:07 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Dec 13 03:48:07 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Dec 13 03:48:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Dec 13 03:48:08 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Dec 13 03:48:08 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Dec 13 03:48:09 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Dec 13 03:48:09 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Dec 13 03:48:09 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.f scrub starts
Dec 13 03:48:09 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.f scrub ok
Dec 13 03:48:09 compute-0 ceph-mon[75071]: 7.13 scrub starts
Dec 13 03:48:09 compute-0 ceph-mon[75071]: 7.13 scrub ok
Dec 13 03:48:09 compute-0 ceph-mon[75071]: pgmap v249: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 B/s, 0 objects/s recovering
Dec 13 03:48:09 compute-0 ceph-mon[75071]: 7.11 scrub starts
Dec 13 03:48:09 compute-0 ceph-mon[75071]: 7.11 scrub ok
Dec 13 03:48:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:10 compute-0 ceph-mon[75071]: 10.1e scrub starts
Dec 13 03:48:10 compute-0 ceph-mon[75071]: 10.1e scrub ok
Dec 13 03:48:10 compute-0 ceph-mon[75071]: 4.f scrub starts
Dec 13 03:48:10 compute-0 ceph-mon[75071]: 4.f scrub ok
Dec 13 03:48:11 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Dec 13 03:48:11 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Dec 13 03:48:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Dec 13 03:48:11 compute-0 ceph-mon[75071]: pgmap v250: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:11 compute-0 ceph-mon[75071]: 8.1c scrub starts
Dec 13 03:48:11 compute-0 ceph-mon[75071]: 8.1c scrub ok
Dec 13 03:48:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Dec 13 03:48:12 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Dec 13 03:48:12 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Dec 13 03:48:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:12 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Dec 13 03:48:12 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Dec 13 03:48:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:12 compute-0 ceph-mon[75071]: 5.14 scrub starts
Dec 13 03:48:12 compute-0 ceph-mon[75071]: 5.14 scrub ok
Dec 13 03:48:12 compute-0 ceph-mon[75071]: 3.16 scrub starts
Dec 13 03:48:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:13 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Dec 13 03:48:13 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Dec 13 03:48:13 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.f scrub starts
Dec 13 03:48:13 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.f scrub ok
Dec 13 03:48:13 compute-0 ceph-mon[75071]: 3.16 scrub ok
Dec 13 03:48:13 compute-0 ceph-mon[75071]: 4.4 scrub starts
Dec 13 03:48:13 compute-0 ceph-mon[75071]: 4.4 scrub ok
Dec 13 03:48:13 compute-0 ceph-mon[75071]: pgmap v251: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:13 compute-0 ceph-mon[75071]: 4.11 scrub starts
Dec 13 03:48:13 compute-0 ceph-mon[75071]: 4.11 scrub ok
Dec 13 03:48:14 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Dec 13 03:48:14 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Dec 13 03:48:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Dec 13 03:48:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Dec 13 03:48:15 compute-0 ceph-mon[75071]: 7.f scrub starts
Dec 13 03:48:15 compute-0 ceph-mon[75071]: 7.f scrub ok
Dec 13 03:48:15 compute-0 ceph-mon[75071]: 4.13 scrub starts
Dec 13 03:48:15 compute-0 ceph-mon[75071]: 4.13 scrub ok
Dec 13 03:48:15 compute-0 ceph-mon[75071]: 3.17 scrub starts
Dec 13 03:48:15 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.c scrub starts
Dec 13 03:48:15 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.c scrub ok
Dec 13 03:48:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Dec 13 03:48:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Dec 13 03:48:15 compute-0 sudo[101762]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:16 compute-0 ceph-mon[75071]: pgmap v252: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:16 compute-0 ceph-mon[75071]: 3.17 scrub ok
Dec 13 03:48:16 compute-0 ceph-mon[75071]: 5.c scrub starts
Dec 13 03:48:16 compute-0 ceph-mon[75071]: 5.c scrub ok
Dec 13 03:48:16 compute-0 ceph-mon[75071]: 8.1d scrub starts
Dec 13 03:48:16 compute-0 sudo[102679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhjnvcuntqiogvvizcumbguxolmhjxgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597696.0484354-137-35269009575650/AnsiballZ_command.py'
Dec 13 03:48:16 compute-0 sudo[102679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:16 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.d scrub starts
Dec 13 03:48:16 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.d scrub ok
Dec 13 03:48:16 compute-0 python3.9[102681]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:48:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:16 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Dec 13 03:48:16 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Dec 13 03:48:17 compute-0 sudo[102679]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:17 compute-0 ceph-mon[75071]: 8.1d scrub ok
Dec 13 03:48:17 compute-0 ceph-mon[75071]: 4.d scrub starts
Dec 13 03:48:17 compute-0 ceph-mon[75071]: 4.d scrub ok
Dec 13 03:48:17 compute-0 ceph-mon[75071]: 5.15 scrub starts
Dec 13 03:48:17 compute-0 ceph-mon[75071]: 5.15 scrub ok
Dec 13 03:48:17 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Dec 13 03:48:17 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Dec 13 03:48:17 compute-0 sudo[102966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gumwkjilpojtyblfohslwurwuihvncmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597697.3398693-145-228676242656164/AnsiballZ_selinux.py'
Dec 13 03:48:17 compute-0 sudo[102966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:18 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Dec 13 03:48:18 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Dec 13 03:48:18 compute-0 python3.9[102968]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec 13 03:48:18 compute-0 sudo[102966]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:18 compute-0 ceph-mon[75071]: pgmap v253: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:18 compute-0 ceph-mon[75071]: 2.5 scrub starts
Dec 13 03:48:18 compute-0 ceph-mon[75071]: 2.5 scrub ok
Dec 13 03:48:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.b scrub starts
Dec 13 03:48:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.b scrub ok
Dec 13 03:48:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:18 compute-0 sudo[103118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udpzmdwzdhhcijkxeqrnzrmkvfkfqchq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597698.5482442-156-273594925993013/AnsiballZ_command.py'
Dec 13 03:48:18 compute-0 sudo[103118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:18 compute-0 python3.9[103120]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec 13 03:48:19 compute-0 sudo[103118]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:19 compute-0 ceph-mon[75071]: 8.12 scrub starts
Dec 13 03:48:19 compute-0 ceph-mon[75071]: 8.12 scrub ok
Dec 13 03:48:19 compute-0 ceph-mon[75071]: 10.b scrub starts
Dec 13 03:48:19 compute-0 ceph-mon[75071]: 10.b scrub ok
Dec 13 03:48:19 compute-0 sudo[103270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zswmcyxikmfazhvhwejpxqudebwdopil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597699.159304-164-181398860229784/AnsiballZ_file.py'
Dec 13 03:48:19 compute-0 sudo[103270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:19 compute-0 python3.9[103272]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:48:19 compute-0 sudo[103270]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:20 compute-0 sudo[103422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfqjklmuefvlhrqxvpafdnqrztgodnfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597699.8082752-172-62893075423362/AnsiballZ_mount.py'
Dec 13 03:48:20 compute-0 sudo[103422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:20 compute-0 ceph-mon[75071]: pgmap v254: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:20 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Dec 13 03:48:20 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Dec 13 03:48:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:20 compute-0 python3.9[103424]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Dec 13 03:48:20 compute-0 sudo[103422]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:21 compute-0 ceph-mon[75071]: 4.5 scrub starts
Dec 13 03:48:21 compute-0 ceph-mon[75071]: 4.5 scrub ok
Dec 13 03:48:21 compute-0 ceph-mon[75071]: pgmap v255: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:21 compute-0 sudo[103574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msgakzzdtlzhyisslddarhwhawkdjfhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597701.1958864-200-181563139259761/AnsiballZ_file.py'
Dec 13 03:48:21 compute-0 sudo[103574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:21 compute-0 python3.9[103576]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:48:21 compute-0 sudo[103574]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:22 compute-0 sudo[103726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csxjehykpubpvhfrmsguevbkpfsyhyki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597701.773209-208-135023764058655/AnsiballZ_stat.py'
Dec 13 03:48:22 compute-0 sudo[103726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:22 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Dec 13 03:48:22 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Dec 13 03:48:22 compute-0 python3.9[103728]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:48:22 compute-0 sudo[103726]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:22 compute-0 ceph-mon[75071]: 8.4 scrub starts
Dec 13 03:48:22 compute-0 ceph-mon[75071]: 8.4 scrub ok
Dec 13 03:48:22 compute-0 sudo[103804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vffizfsnhnqzpnposdwwhmiwqwtxeuum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597701.773209-208-135023764058655/AnsiballZ_file.py'
Dec 13 03:48:22 compute-0 sudo[103804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:22 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Dec 13 03:48:22 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Dec 13 03:48:22 compute-0 python3.9[103806]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:48:22 compute-0 sudo[103804]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:23 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Dec 13 03:48:23 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Dec 13 03:48:23 compute-0 ceph-mon[75071]: pgmap v256: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:23 compute-0 ceph-mon[75071]: 3.15 scrub starts
Dec 13 03:48:23 compute-0 ceph-mon[75071]: 3.15 scrub ok
Dec 13 03:48:23 compute-0 ceph-mon[75071]: 3.18 scrub starts
Dec 13 03:48:23 compute-0 ceph-mon[75071]: 3.18 scrub ok
Dec 13 03:48:23 compute-0 sudo[103956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlgmpmznslpbnyaqmbohqtglaslyolzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597703.1475525-229-250829872467677/AnsiballZ_stat.py'
Dec 13 03:48:23 compute-0 sudo[103956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:23 compute-0 python3.9[103958]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:48:23 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Dec 13 03:48:23 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Dec 13 03:48:23 compute-0 sudo[103956]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:23 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Dec 13 03:48:24 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Dec 13 03:48:24 compute-0 ceph-mon[75071]: 8.18 scrub starts
Dec 13 03:48:24 compute-0 ceph-mon[75071]: 8.18 scrub ok
Dec 13 03:48:24 compute-0 ceph-mon[75071]: 7.1c scrub starts
Dec 13 03:48:24 compute-0 ceph-mon[75071]: 7.1c scrub ok
Dec 13 03:48:24 compute-0 sudo[104110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozgmjrgwcgrdfaelkcjgagwmaizirxrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597704.0120878-242-108692663837043/AnsiballZ_getent.py'
Dec 13 03:48:24 compute-0 sudo[104110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:24 compute-0 python3.9[104112]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec 13 03:48:24 compute-0 sudo[104110]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:25 compute-0 sudo[104263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuetmfuydrsguvbltanttvxfnfqzfjjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597704.8309367-252-98306855235901/AnsiballZ_getent.py'
Dec 13 03:48:25 compute-0 sudo[104263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:25 compute-0 python3.9[104265]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec 13 03:48:25 compute-0 sudo[104263]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:25 compute-0 ceph-mon[75071]: pgmap v257: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:25 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.d scrub starts
Dec 13 03:48:25 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.d scrub ok
Dec 13 03:48:25 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Dec 13 03:48:25 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Dec 13 03:48:26 compute-0 sudo[104416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xridcheegctddmlaqzsokrpzdweqfolq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597705.4941154-260-44299037264251/AnsiballZ_group.py'
Dec 13 03:48:26 compute-0 sudo[104416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:26 compute-0 python3.9[104418]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 03:48:26 compute-0 sudo[104416]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:26 compute-0 ceph-mon[75071]: 2.d scrub starts
Dec 13 03:48:26 compute-0 ceph-mon[75071]: 2.d scrub ok
Dec 13 03:48:26 compute-0 ceph-mon[75071]: 8.1a scrub starts
Dec 13 03:48:26 compute-0 ceph-mon[75071]: 8.1a scrub ok
Dec 13 03:48:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:26 compute-0 sudo[104568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slqwnmjsyzmrrlswyrqioylromwqbcfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597706.4425359-269-98035538753134/AnsiballZ_file.py'
Dec 13 03:48:26 compute-0 sudo[104568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:26 compute-0 python3.9[104570]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec 13 03:48:26 compute-0 sudo[104568]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:27 compute-0 ceph-mon[75071]: pgmap v258: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:27 compute-0 sudo[104720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-diutegmrplhjygdsukkaxzrvuconhbhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597707.246003-280-27349950426735/AnsiballZ_dnf.py'
Dec 13 03:48:27 compute-0 sudo[104720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:27 compute-0 python3.9[104722]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:48:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:27 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Dec 13 03:48:27 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Dec 13 03:48:28 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Dec 13 03:48:28 compute-0 ceph-mon[75071]: 3.11 scrub starts
Dec 13 03:48:28 compute-0 ceph-mon[75071]: 3.11 scrub ok
Dec 13 03:48:28 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Dec 13 03:48:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:29 compute-0 sudo[104720]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:29 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Dec 13 03:48:29 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Dec 13 03:48:29 compute-0 ceph-mon[75071]: 3.12 scrub starts
Dec 13 03:48:29 compute-0 ceph-mon[75071]: 3.12 scrub ok
Dec 13 03:48:29 compute-0 ceph-mon[75071]: pgmap v259: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:29 compute-0 sudo[104873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrizkacqhbbiyyaqngnatqxxjcupkadz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597709.2177598-288-180450125220849/AnsiballZ_file.py'
Dec 13 03:48:29 compute-0 sudo[104873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:29 compute-0 python3.9[104875]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:48:29 compute-0 sudo[104873]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:29 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.a scrub starts
Dec 13 03:48:30 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 7.a scrub ok
Dec 13 03:48:30 compute-0 sudo[105025]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykiqmfqyjqidpxxkazmnoaiklecyyljh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597709.842786-296-28518059244774/AnsiballZ_stat.py'
Dec 13 03:48:30 compute-0 sudo[105025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:30 compute-0 python3.9[105027]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:48:30 compute-0 sudo[105025]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:30 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Dec 13 03:48:30 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Dec 13 03:48:30 compute-0 ceph-mon[75071]: 2.13 scrub starts
Dec 13 03:48:30 compute-0 ceph-mon[75071]: 2.13 scrub ok
Dec 13 03:48:30 compute-0 ceph-mon[75071]: 7.a scrub starts
Dec 13 03:48:30 compute-0 ceph-mon[75071]: 7.a scrub ok
Dec 13 03:48:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:30 compute-0 sudo[105103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgekuggtnvltkjbqckincdkbmzzrxpoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597709.842786-296-28518059244774/AnsiballZ_file.py'
Dec 13 03:48:30 compute-0 sudo[105103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:30 compute-0 python3.9[105105]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:48:30 compute-0 sudo[105103]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:31 compute-0 sudo[105255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhkvnhbpgcswznwqgxcenumdzkdozvjx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597710.945336-309-114263744063679/AnsiballZ_stat.py'
Dec 13 03:48:31 compute-0 sudo[105255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:31 compute-0 python3.9[105257]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:48:31 compute-0 sudo[105255]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:31 compute-0 ceph-mon[75071]: 8.14 scrub starts
Dec 13 03:48:31 compute-0 ceph-mon[75071]: 8.14 scrub ok
Dec 13 03:48:31 compute-0 ceph-mon[75071]: pgmap v260: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:31 compute-0 sudo[105333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqkkmllpgtezlpfikoidwauystsdjqwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597710.945336-309-114263744063679/AnsiballZ_file.py'
Dec 13 03:48:31 compute-0 sudo[105333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:31 compute-0 python3.9[105335]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:48:31 compute-0 sudo[105333]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:32 compute-0 sudo[105485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bidwukblloajzgyqezsquizalueecidv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597712.099902-324-226182439250564/AnsiballZ_dnf.py'
Dec 13 03:48:32 compute-0 sudo[105485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:32 compute-0 python3.9[105487]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:48:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:33 compute-0 ceph-mon[75071]: pgmap v261: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:34 compute-0 sudo[105485]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:34 compute-0 python3.9[105638]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:48:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Dec 13 03:48:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Dec 13 03:48:35 compute-0 python3.9[105790]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec 13 03:48:35 compute-0 ceph-mon[75071]: pgmap v262: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:36 compute-0 python3.9[105940]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:48:36 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.e scrub starts
Dec 13 03:48:36 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.e scrub ok
Dec 13 03:48:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:36 compute-0 ceph-mon[75071]: 7.1b scrub starts
Dec 13 03:48:36 compute-0 ceph-mon[75071]: 7.1b scrub ok
Dec 13 03:48:36 compute-0 ceph-mon[75071]: 10.e scrub starts
Dec 13 03:48:36 compute-0 ceph-mon[75071]: 10.e scrub ok
Dec 13 03:48:37 compute-0 sudo[106090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsnlocfsibctafgguciuzijtpqowxkps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597716.4895773-365-98235324494656/AnsiballZ_systemd.py'
Dec 13 03:48:37 compute-0 sudo[106090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:37 compute-0 python3.9[106092]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:48:37 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec 13 03:48:37 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Dec 13 03:48:37 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec 13 03:48:37 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec 13 03:48:37 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Dec 13 03:48:37 compute-0 sudo[106090]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:37 compute-0 ceph-mon[75071]: pgmap v263: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:38 compute-0 python3.9[106253]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec 13 03:48:38 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.f scrub starts
Dec 13 03:48:38 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.f scrub ok
Dec 13 03:48:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Dec 13 03:48:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Dec 13 03:48:40 compute-0 ceph-mon[75071]: 5.f scrub starts
Dec 13 03:48:40 compute-0 ceph-mon[75071]: 5.f scrub ok
Dec 13 03:48:40 compute-0 ceph-mon[75071]: pgmap v264: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:40 compute-0 ceph-mon[75071]: 10.15 scrub starts
Dec 13 03:48:40 compute-0 ceph-mon[75071]: 10.15 scrub ok
Dec 13 03:48:40 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Dec 13 03:48:40 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Dec 13 03:48:40 compute-0 sudo[106403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgzyrlfafpmklkxuowxsujofbebdcxha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597720.0010102-422-54804264498893/AnsiballZ_systemd.py'
Dec 13 03:48:40 compute-0 sudo[106403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:48:40
Dec 13 03:48:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:48:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:48:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', 'default.rgw.control', 'images']
Dec 13 03:48:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:48:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:40 compute-0 python3.9[106405]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:48:40 compute-0 sudo[106403]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:41 compute-0 ceph-mon[75071]: 8.1b scrub starts
Dec 13 03:48:41 compute-0 ceph-mon[75071]: 8.1b scrub ok
Dec 13 03:48:41 compute-0 sudo[106557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzsdwigtefbdoblcapwwxarootllbmed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597720.8243043-422-128136848431240/AnsiballZ_systemd.py'
Dec 13 03:48:41 compute-0 sudo[106557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:41 compute-0 python3.9[106559]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:48:41 compute-0 sudo[106557]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:41 compute-0 sshd-session[99850]: Connection closed by 192.168.122.30 port 39900
Dec 13 03:48:41 compute-0 sshd-session[99847]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:48:41 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Dec 13 03:48:41 compute-0 systemd[1]: session-35.scope: Consumed 1min 4.048s CPU time.
Dec 13 03:48:41 compute-0 systemd-logind[796]: Session 35 logged out. Waiting for processes to exit.
Dec 13 03:48:41 compute-0 systemd-logind[796]: Removed session 35.
Dec 13 03:48:42 compute-0 ceph-mon[75071]: pgmap v265: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:48:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:43 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Dec 13 03:48:43 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Dec 13 03:48:44 compute-0 ceph-mon[75071]: pgmap v266: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:44 compute-0 ceph-mon[75071]: 6.8 scrub starts
Dec 13 03:48:44 compute-0 ceph-mon[75071]: 6.8 scrub ok
Dec 13 03:48:44 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Dec 13 03:48:44 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Dec 13 03:48:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:46 compute-0 ceph-mon[75071]: 4.9 scrub starts
Dec 13 03:48:46 compute-0 ceph-mon[75071]: 4.9 scrub ok
Dec 13 03:48:46 compute-0 ceph-mon[75071]: pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:47 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.d scrub starts
Dec 13 03:48:47 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.d scrub ok
Dec 13 03:48:47 compute-0 sshd-session[106586]: Accepted publickey for zuul from 192.168.122.30 port 55484 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:48:47 compute-0 systemd-logind[796]: New session 36 of user zuul.
Dec 13 03:48:47 compute-0 systemd[1]: Started Session 36 of User zuul.
Dec 13 03:48:47 compute-0 sshd-session[106586]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:48:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:48 compute-0 ceph-mon[75071]: pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:48 compute-0 ceph-mon[75071]: 10.d scrub starts
Dec 13 03:48:48 compute-0 ceph-mon[75071]: 10.d scrub ok
Dec 13 03:48:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Dec 13 03:48:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Dec 13 03:48:48 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Dec 13 03:48:48 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Dec 13 03:48:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:48 compute-0 python3.9[106739]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:48:49 compute-0 ceph-mon[75071]: 10.9 scrub starts
Dec 13 03:48:49 compute-0 ceph-mon[75071]: 10.9 scrub ok
Dec 13 03:48:49 compute-0 sudo[106893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmoinccatddyswbnpjpjjzhanztifckg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597729.1435099-36-181745347360018/AnsiballZ_getent.py'
Dec 13 03:48:49 compute-0 sudo[106893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:49 compute-0 python3.9[106895]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec 13 03:48:49 compute-0 sudo[106893]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:50 compute-0 ceph-mon[75071]: 2.3 scrub starts
Dec 13 03:48:50 compute-0 ceph-mon[75071]: 2.3 scrub ok
Dec 13 03:48:50 compute-0 ceph-mon[75071]: pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:50 compute-0 sudo[107046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mspxowrbmmxczktkqunhbijcyrqwnveh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597730.0833771-48-68826736077469/AnsiballZ_setup.py'
Dec 13 03:48:50 compute-0 sudo[107046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:50 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Dec 13 03:48:50 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Dec 13 03:48:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:50 compute-0 python3.9[107048]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:48:50 compute-0 sudo[107046]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:51 compute-0 ceph-mon[75071]: 5.9 scrub starts
Dec 13 03:48:51 compute-0 ceph-mon[75071]: 5.9 scrub ok
Dec 13 03:48:51 compute-0 ceph-mon[75071]: pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:51 compute-0 sudo[107130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrxnqlfqaxjqupbqigpidwvinjnwydlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597730.0833771-48-68826736077469/AnsiballZ_dnf.py'
Dec 13 03:48:51 compute-0 sudo[107130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Dec 13 03:48:51 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Dec 13 03:48:51 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Dec 13 03:48:51 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Dec 13 03:48:51 compute-0 python3.9[107132]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:48:52 compute-0 ceph-mon[75071]: 5.16 scrub starts
Dec 13 03:48:52 compute-0 ceph-mon[75071]: 5.16 scrub ok
Dec 13 03:48:52 compute-0 ceph-mon[75071]: 8.6 scrub starts
Dec 13 03:48:52 compute-0 ceph-mon[75071]: 8.6 scrub ok
Dec 13 03:48:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:53 compute-0 sudo[107130]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:53 compute-0 ceph-mon[75071]: pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:53 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Dec 13 03:48:53 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Dec 13 03:48:53 compute-0 sudo[107283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-catcjlzvbywusifjnvzmpdfnzvpylnba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597733.4061673-62-218175667918575/AnsiballZ_dnf.py'
Dec 13 03:48:53 compute-0 sudo[107283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:53 compute-0 python3.9[107285]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:48:54 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Dec 13 03:48:54 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Dec 13 03:48:54 compute-0 ceph-mon[75071]: 10.19 scrub starts
Dec 13 03:48:54 compute-0 ceph-mon[75071]: 10.19 scrub ok
Dec 13 03:48:54 compute-0 ceph-mon[75071]: 11.15 scrub starts
Dec 13 03:48:54 compute-0 ceph-mon[75071]: 11.15 scrub ok
Dec 13 03:48:54 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Dec 13 03:48:54 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Dec 13 03:48:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:54 compute-0 systemd[76438]: Created slice User Background Tasks Slice.
Dec 13 03:48:54 compute-0 systemd[76438]: Starting Cleanup of User's Temporary Files and Directories...
Dec 13 03:48:54 compute-0 systemd[76438]: Finished Cleanup of User's Temporary Files and Directories.
Dec 13 03:48:55 compute-0 sudo[107283]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:55 compute-0 ceph-mon[75071]: 10.1a scrub starts
Dec 13 03:48:55 compute-0 ceph-mon[75071]: 10.1a scrub ok
Dec 13 03:48:55 compute-0 ceph-mon[75071]: pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Dec 13 03:48:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Dec 13 03:48:55 compute-0 sudo[107437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rprjcnwnkmxxxvgekofbswgoihytmbbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597735.299641-70-9612306697820/AnsiballZ_systemd.py'
Dec 13 03:48:55 compute-0 sudo[107437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:55 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Dec 13 03:48:55 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Dec 13 03:48:56 compute-0 python3.9[107439]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:48:56 compute-0 sudo[107437]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:56 compute-0 ceph-mon[75071]: 4.14 scrub starts
Dec 13 03:48:56 compute-0 ceph-mon[75071]: 4.14 scrub ok
Dec 13 03:48:56 compute-0 ceph-mon[75071]: 11.12 scrub starts
Dec 13 03:48:56 compute-0 ceph-mon[75071]: 11.12 scrub ok
Dec 13 03:48:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:56 compute-0 python3.9[107592]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:48:57 compute-0 ceph-mon[75071]: pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:57 compute-0 sudo[107742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsdimhyqeckyfwlvyeekhsoitrkgpvsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597737.1948025-88-22771789296791/AnsiballZ_sefcontext.py'
Dec 13 03:48:57 compute-0 sudo[107742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:57 compute-0 python3.9[107744]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec 13 03:48:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:48:57 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.d scrub starts
Dec 13 03:48:57 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.d scrub ok
Dec 13 03:48:58 compute-0 sudo[107742]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:58 compute-0 ceph-mon[75071]: 11.d scrub starts
Dec 13 03:48:58 compute-0 ceph-mon[75071]: 11.d scrub ok
Dec 13 03:48:58 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Dec 13 03:48:58 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Dec 13 03:48:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:59 compute-0 python3.9[107894]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:48:59 compute-0 ceph-mon[75071]: 5.12 scrub starts
Dec 13 03:48:59 compute-0 ceph-mon[75071]: 5.12 scrub ok
Dec 13 03:48:59 compute-0 ceph-mon[75071]: pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:48:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Dec 13 03:48:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Dec 13 03:48:59 compute-0 sudo[108050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daykjcxgsgfmilwmgmjrdyszgjskxypq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597739.4339657-106-206837003446962/AnsiballZ_dnf.py'
Dec 13 03:48:59 compute-0 sudo[108050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:48:59 compute-0 sudo[108053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:48:59 compute-0 sudo[108053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:48:59 compute-0 sudo[108053]: pam_unix(sudo:session): session closed for user root
Dec 13 03:48:59 compute-0 sudo[108078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:48:59 compute-0 sudo[108078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:48:59 compute-0 python3.9[108052]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:49:00 compute-0 sudo[108078]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:00 compute-0 ceph-mon[75071]: 2.15 scrub starts
Dec 13 03:49:00 compute-0 ceph-mon[75071]: 2.15 scrub ok
Dec 13 03:49:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:49:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:49:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:49:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:49:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:49:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:49:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:49:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:49:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:49:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:49:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:49:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:49:00 compute-0 sudo[108135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:49:00 compute-0 sudo[108135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:49:00 compute-0 sudo[108135]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:00 compute-0 sudo[108160]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:49:00 compute-0 sudo[108160]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:49:00 compute-0 podman[108197]: 2025-12-13 03:49:00.790879965 +0000 UTC m=+0.047051144 container create 484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goodall, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:49:00 compute-0 systemd[1]: Started libpod-conmon-484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f.scope.
Dec 13 03:49:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:49:00 compute-0 podman[108197]: 2025-12-13 03:49:00.765595552 +0000 UTC m=+0.021766731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:00 compute-0 podman[108197]: 2025-12-13 03:49:00.879070453 +0000 UTC m=+0.135241652 container init 484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goodall, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:49:00 compute-0 podman[108197]: 2025-12-13 03:49:00.88760086 +0000 UTC m=+0.143772029 container start 484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 03:49:00 compute-0 podman[108197]: 2025-12-13 03:49:00.891145306 +0000 UTC m=+0.147316485 container attach 484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:49:00 compute-0 beautiful_goodall[108213]: 167 167
Dec 13 03:49:00 compute-0 systemd[1]: libpod-484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f.scope: Deactivated successfully.
Dec 13 03:49:00 compute-0 podman[108197]: 2025-12-13 03:49:00.893800076 +0000 UTC m=+0.149971255 container died 484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goodall, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:49:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-62093035b53a8fda4e0f811994c8be23a53b83a683e4a13b033f65f5528ef386-merged.mount: Deactivated successfully.
Dec 13 03:49:00 compute-0 podman[108197]: 2025-12-13 03:49:00.946194331 +0000 UTC m=+0.202365510 container remove 484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_goodall, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:49:00 compute-0 systemd[1]: libpod-conmon-484eb52fc5d0ec24fcf0b8c55625e03cf403427d60056e72fb698c6ebe57e35f.scope: Deactivated successfully.
Dec 13 03:49:00 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.b scrub starts
Dec 13 03:49:00 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.b scrub ok
Dec 13 03:49:01 compute-0 podman[108240]: 2025-12-13 03:49:01.095390505 +0000 UTC m=+0.042100492 container create 8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:49:01 compute-0 systemd[1]: Started libpod-conmon-8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb.scope.
Dec 13 03:49:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:49:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b91b093cba453fa9e95c3e119e8e53d6ed0379ea3751c04fe85b596318d0496/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b91b093cba453fa9e95c3e119e8e53d6ed0379ea3751c04fe85b596318d0496/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b91b093cba453fa9e95c3e119e8e53d6ed0379ea3751c04fe85b596318d0496/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b91b093cba453fa9e95c3e119e8e53d6ed0379ea3751c04fe85b596318d0496/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b91b093cba453fa9e95c3e119e8e53d6ed0379ea3751c04fe85b596318d0496/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:01 compute-0 podman[108240]: 2025-12-13 03:49:01.0772073 +0000 UTC m=+0.023917317 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:01 compute-0 podman[108240]: 2025-12-13 03:49:01.181159988 +0000 UTC m=+0.127869995 container init 8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 03:49:01 compute-0 podman[108240]: 2025-12-13 03:49:01.188374971 +0000 UTC m=+0.135084958 container start 8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:49:01 compute-0 podman[108240]: 2025-12-13 03:49:01.192549482 +0000 UTC m=+0.139259469 container attach 8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:49:01 compute-0 sudo[108050]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:49:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:49:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:49:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:49:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:49:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:49:01 compute-0 ceph-mon[75071]: pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:01 compute-0 ceph-mon[75071]: 11.b scrub starts
Dec 13 03:49:01 compute-0 ceph-mon[75071]: 11.b scrub ok
Dec 13 03:49:01 compute-0 vigorous_bartik[108256]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:49:01 compute-0 vigorous_bartik[108256]: --> All data devices are unavailable
Dec 13 03:49:01 compute-0 systemd[1]: libpod-8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb.scope: Deactivated successfully.
Dec 13 03:49:01 compute-0 podman[108240]: 2025-12-13 03:49:01.673322746 +0000 UTC m=+0.620032743 container died 8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:49:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b91b093cba453fa9e95c3e119e8e53d6ed0379ea3751c04fe85b596318d0496-merged.mount: Deactivated successfully.
Dec 13 03:49:01 compute-0 podman[108240]: 2025-12-13 03:49:01.713149906 +0000 UTC m=+0.659859883 container remove 8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:49:01 compute-0 systemd[1]: libpod-conmon-8c658a5968e86034aa1b7b6e3f25cecd2d57b017fc1764b1e4da942b71034cfb.scope: Deactivated successfully.
Dec 13 03:49:01 compute-0 sudo[108160]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:01 compute-0 sudo[108443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwupvhkqbrtpnnakoyhzjjylscfoovaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597741.3985746-114-234563871992531/AnsiballZ_command.py'
Dec 13 03:49:01 compute-0 sudo[108443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:01 compute-0 sudo[108433]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:49:01 compute-0 sudo[108433]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:49:01 compute-0 sudo[108433]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:01 compute-0 sudo[108465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:49:01 compute-0 sudo[108465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:49:01 compute-0 python3.9[108462]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:49:02 compute-0 podman[108508]: 2025-12-13 03:49:02.116844557 +0000 UTC m=+0.039335038 container create be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_sinoussi, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:49:02 compute-0 systemd[1]: Started libpod-conmon-be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7.scope.
Dec 13 03:49:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:49:02 compute-0 podman[108508]: 2025-12-13 03:49:02.181659553 +0000 UTC m=+0.104150054 container init be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 03:49:02 compute-0 podman[108508]: 2025-12-13 03:49:02.187684975 +0000 UTC m=+0.110175456 container start be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:49:02 compute-0 podman[108508]: 2025-12-13 03:49:02.190941051 +0000 UTC m=+0.113431532 container attach be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_sinoussi, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:49:02 compute-0 stupefied_sinoussi[108524]: 167 167
Dec 13 03:49:02 compute-0 systemd[1]: libpod-be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7.scope: Deactivated successfully.
Dec 13 03:49:02 compute-0 podman[108508]: 2025-12-13 03:49:02.192243545 +0000 UTC m=+0.114734026 container died be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_sinoussi, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:49:02 compute-0 podman[108508]: 2025-12-13 03:49:02.098883619 +0000 UTC m=+0.021374120 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-990845c4a648736f197c97c880b893d20cc75e9cf2953e08dfc9da56ba7ecd92-merged.mount: Deactivated successfully.
Dec 13 03:49:02 compute-0 podman[108508]: 2025-12-13 03:49:02.232913119 +0000 UTC m=+0.155403600 container remove be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_sinoussi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:49:02 compute-0 systemd[1]: libpod-conmon-be01ec0d348f5e5faae4a56c682c1ed1dcb2da70ae3ed0dfeda7310b29274ab7.scope: Deactivated successfully.
Dec 13 03:49:02 compute-0 podman[108547]: 2025-12-13 03:49:02.376357219 +0000 UTC m=+0.040395307 container create 84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:49:02 compute-0 systemd[1]: Started libpod-conmon-84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068.scope.
Dec 13 03:49:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e13bab82250cf81147b838d1e009c74c385fdcc23f3df10d0fbc05310c25da9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e13bab82250cf81147b838d1e009c74c385fdcc23f3df10d0fbc05310c25da9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e13bab82250cf81147b838d1e009c74c385fdcc23f3df10d0fbc05310c25da9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e13bab82250cf81147b838d1e009c74c385fdcc23f3df10d0fbc05310c25da9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:02 compute-0 podman[108547]: 2025-12-13 03:49:02.357977909 +0000 UTC m=+0.022016017 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:02 compute-0 podman[108547]: 2025-12-13 03:49:02.456445701 +0000 UTC m=+0.120483829 container init 84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goodall, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:49:02 compute-0 podman[108547]: 2025-12-13 03:49:02.462221926 +0000 UTC m=+0.126260024 container start 84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goodall, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:49:02 compute-0 podman[108547]: 2025-12-13 03:49:02.466994183 +0000 UTC m=+0.131032301 container attach 84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 03:49:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:02 compute-0 sudo[108443]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:02 compute-0 nervous_goodall[108615]: {
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:     "0": [
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:         {
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "devices": [
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "/dev/loop3"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             ],
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_name": "ceph_lv0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_size": "21470642176",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "name": "ceph_lv0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "tags": {
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cluster_name": "ceph",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.crush_device_class": "",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.encrypted": "0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.objectstore": "bluestore",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osd_id": "0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.type": "block",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.vdo": "0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.with_tpm": "0"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             },
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "type": "block",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "vg_name": "ceph_vg0"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:         }
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:     ],
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:     "1": [
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:         {
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "devices": [
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "/dev/loop4"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             ],
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_name": "ceph_lv1",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_size": "21470642176",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "name": "ceph_lv1",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "tags": {
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cluster_name": "ceph",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.crush_device_class": "",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.encrypted": "0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.objectstore": "bluestore",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osd_id": "1",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.type": "block",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.vdo": "0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.with_tpm": "0"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             },
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "type": "block",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "vg_name": "ceph_vg1"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:         }
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:     ],
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:     "2": [
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:         {
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "devices": [
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "/dev/loop5"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             ],
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_name": "ceph_lv2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_size": "21470642176",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "name": "ceph_lv2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "tags": {
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.cluster_name": "ceph",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.crush_device_class": "",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.encrypted": "0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.objectstore": "bluestore",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osd_id": "2",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.type": "block",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.vdo": "0",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:                 "ceph.with_tpm": "0"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             },
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "type": "block",
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:             "vg_name": "ceph_vg2"
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:         }
Dec 13 03:49:02 compute-0 nervous_goodall[108615]:     ]
Dec 13 03:49:02 compute-0 nervous_goodall[108615]: }
Dec 13 03:49:02 compute-0 systemd[1]: libpod-84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068.scope: Deactivated successfully.
Dec 13 03:49:02 compute-0 podman[108547]: 2025-12-13 03:49:02.744227516 +0000 UTC m=+0.408265614 container died 84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:49:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e13bab82250cf81147b838d1e009c74c385fdcc23f3df10d0fbc05310c25da9-merged.mount: Deactivated successfully.
Dec 13 03:49:02 compute-0 podman[108547]: 2025-12-13 03:49:02.784093497 +0000 UTC m=+0.448131595 container remove 84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_goodall, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:49:02 compute-0 systemd[1]: libpod-conmon-84ad38c46da51f7a3989e2ccc48f343cdd1bae3effc66c71f6d12e4bbf6c0068.scope: Deactivated successfully.
Dec 13 03:49:02 compute-0 sudo[108465]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:02 compute-0 sudo[108747]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:49:02 compute-0 sudo[108747]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:49:02 compute-0 sudo[108747]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:02 compute-0 sudo[108807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:49:02 compute-0 sudo[108807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:49:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:03 compute-0 podman[108876]: 2025-12-13 03:49:03.189378471 +0000 UTC m=+0.036824591 container create 5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:49:03 compute-0 systemd[1]: Started libpod-conmon-5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85.scope.
Dec 13 03:49:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:49:03 compute-0 podman[108876]: 2025-12-13 03:49:03.264008149 +0000 UTC m=+0.111454269 container init 5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:49:03 compute-0 podman[108876]: 2025-12-13 03:49:03.270727718 +0000 UTC m=+0.118173828 container start 5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:49:03 compute-0 podman[108876]: 2025-12-13 03:49:03.174898096 +0000 UTC m=+0.022344236 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:03 compute-0 sudo[108945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juycqpgyqjtedltcnxsjbjqvabtkivml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597742.8560412-122-60713637774036/AnsiballZ_file.py'
Dec 13 03:49:03 compute-0 sudo[108945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:03 compute-0 gracious_darwin[108916]: 167 167
Dec 13 03:49:03 compute-0 systemd[1]: libpod-5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85.scope: Deactivated successfully.
Dec 13 03:49:03 compute-0 podman[108876]: 2025-12-13 03:49:03.275789973 +0000 UTC m=+0.123236093 container attach 5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:49:03 compute-0 podman[108876]: 2025-12-13 03:49:03.276246495 +0000 UTC m=+0.123692615 container died 5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_darwin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:49:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2613a4d609eaa7bc0bbdca6fc97afd4bdffce1ef91ca805aeed871a58c020562-merged.mount: Deactivated successfully.
Dec 13 03:49:03 compute-0 podman[108876]: 2025-12-13 03:49:03.310410044 +0000 UTC m=+0.157856164 container remove 5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:49:03 compute-0 systemd[1]: libpod-conmon-5fef4f212034b481f01435f449ef32dd069809724efeb9fedd0b879280aedb85.scope: Deactivated successfully.
Dec 13 03:49:03 compute-0 podman[108968]: 2025-12-13 03:49:03.447629439 +0000 UTC m=+0.035719482 container create 7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:49:03 compute-0 python3.9[108949]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 03:49:03 compute-0 sudo[108945]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:03 compute-0 systemd[1]: Started libpod-conmon-7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019.scope.
Dec 13 03:49:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:49:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9130d7c3c6365467db0e155b1703d70b5c158caf855e1507d492e6ec7190d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:03 compute-0 podman[108968]: 2025-12-13 03:49:03.431851389 +0000 UTC m=+0.019941432 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:49:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9130d7c3c6365467db0e155b1703d70b5c158caf855e1507d492e6ec7190d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9130d7c3c6365467db0e155b1703d70b5c158caf855e1507d492e6ec7190d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f4f9130d7c3c6365467db0e155b1703d70b5c158caf855e1507d492e6ec7190d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:49:03 compute-0 podman[108968]: 2025-12-13 03:49:03.542975798 +0000 UTC m=+0.131065871 container init 7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_satoshi, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:49:03 compute-0 podman[108968]: 2025-12-13 03:49:03.552587965 +0000 UTC m=+0.140678018 container start 7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_satoshi, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 03:49:03 compute-0 podman[108968]: 2025-12-13 03:49:03.555914863 +0000 UTC m=+0.144004926 container attach 7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_satoshi, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:49:03 compute-0 ceph-mon[75071]: pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:04 compute-0 lvm[109213]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:49:04 compute-0 lvm[109213]: VG ceph_vg1 finished
Dec 13 03:49:04 compute-0 lvm[109212]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:49:04 compute-0 lvm[109212]: VG ceph_vg0 finished
Dec 13 03:49:04 compute-0 lvm[109215]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:49:04 compute-0 lvm[109215]: VG ceph_vg2 finished
Dec 13 03:49:04 compute-0 python3.9[109191]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:49:04 compute-0 agitated_satoshi[108984]: {}
Dec 13 03:49:04 compute-0 systemd[1]: libpod-7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019.scope: Deactivated successfully.
Dec 13 03:49:04 compute-0 systemd[1]: libpod-7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019.scope: Consumed 1.265s CPU time.
Dec 13 03:49:04 compute-0 podman[108968]: 2025-12-13 03:49:04.355735103 +0000 UTC m=+0.943825146 container died 7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_satoshi, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:49:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4f9130d7c3c6365467db0e155b1703d70b5c158caf855e1507d492e6ec7190d-merged.mount: Deactivated successfully.
Dec 13 03:49:04 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Dec 13 03:49:04 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Dec 13 03:49:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:04 compute-0 podman[108968]: 2025-12-13 03:49:04.544670136 +0000 UTC m=+1.132760179 container remove 7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:49:04 compute-0 systemd[1]: libpod-conmon-7a28750591c6cceb13e89b546b3ec00e30c2094c05b4a5db24a62f653d942019.scope: Deactivated successfully.
Dec 13 03:49:04 compute-0 sudo[108807]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:49:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:49:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:49:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:49:04 compute-0 sudo[109399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfflkyaaxinyvsbotxjkpznszishdpma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597744.41077-138-260554524912316/AnsiballZ_dnf.py'
Dec 13 03:49:04 compute-0 sudo[109399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:04 compute-0 sudo[109362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:49:04 compute-0 sudo[109362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:49:04 compute-0 sudo[109362]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:04 compute-0 python3.9[109405]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:49:05 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Dec 13 03:49:05 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Dec 13 03:49:05 compute-0 ceph-mon[75071]: 10.6 scrub starts
Dec 13 03:49:05 compute-0 ceph-mon[75071]: 10.6 scrub ok
Dec 13 03:49:05 compute-0 ceph-mon[75071]: pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:49:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:49:05 compute-0 ceph-mon[75071]: 11.8 scrub starts
Dec 13 03:49:05 compute-0 ceph-mon[75071]: 11.8 scrub ok
Dec 13 03:49:06 compute-0 sudo[109399]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:06 compute-0 sudo[109558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkwwfknuvwoesrwuqlatcscnexcanexq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597746.3290157-147-242064095614425/AnsiballZ_dnf.py'
Dec 13 03:49:06 compute-0 sudo[109558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:06 compute-0 python3.9[109560]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:49:07 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Dec 13 03:49:07 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Dec 13 03:49:07 compute-0 ceph-mon[75071]: pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:08 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Dec 13 03:49:08 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Dec 13 03:49:08 compute-0 sudo[109558]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:08 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Dec 13 03:49:08 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Dec 13 03:49:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:08 compute-0 sudo[109711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bklvwxshokrtkmfjywyprffagzdsdlbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597748.4346278-159-48091062742793/AnsiballZ_stat.py'
Dec 13 03:49:08 compute-0 sudo[109711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:08 compute-0 ceph-mon[75071]: 4.2 scrub starts
Dec 13 03:49:08 compute-0 ceph-mon[75071]: 4.2 scrub ok
Dec 13 03:49:08 compute-0 ceph-mon[75071]: 11.3 scrub starts
Dec 13 03:49:08 compute-0 ceph-mon[75071]: 11.3 scrub ok
Dec 13 03:49:08 compute-0 python3.9[109713]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:49:08 compute-0 sudo[109711]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:09 compute-0 sudo[109865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xifiiezjuzgvisikzsgovuqhtukainuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597749.0323768-167-89385650196370/AnsiballZ_slurp.py'
Dec 13 03:49:09 compute-0 sudo[109865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:09 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.f scrub starts
Dec 13 03:49:09 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.f scrub ok
Dec 13 03:49:09 compute-0 python3.9[109867]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Dec 13 03:49:09 compute-0 sudo[109865]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:09 compute-0 ceph-mon[75071]: 4.12 scrub starts
Dec 13 03:49:09 compute-0 ceph-mon[75071]: 4.12 scrub ok
Dec 13 03:49:09 compute-0 ceph-mon[75071]: pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:09 compute-0 ceph-mon[75071]: 8.f scrub starts
Dec 13 03:49:09 compute-0 ceph-mon[75071]: 8.f scrub ok
Dec 13 03:49:10 compute-0 sshd-session[106589]: Connection closed by 192.168.122.30 port 55484
Dec 13 03:49:10 compute-0 sshd-session[106586]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:49:10 compute-0 systemd-logind[796]: Session 36 logged out. Waiting for processes to exit.
Dec 13 03:49:10 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Dec 13 03:49:10 compute-0 systemd[1]: session-36.scope: Consumed 17.108s CPU time.
Dec 13 03:49:10 compute-0 systemd-logind[796]: Removed session 36.
Dec 13 03:49:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:11 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Dec 13 03:49:11 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Dec 13 03:49:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Dec 13 03:49:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Dec 13 03:49:11 compute-0 ceph-mon[75071]: pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:11 compute-0 ceph-mon[75071]: 11.18 scrub starts
Dec 13 03:49:11 compute-0 ceph-mon[75071]: 11.18 scrub ok
Dec 13 03:49:11 compute-0 ceph-mon[75071]: 8.1f scrub starts
Dec 13 03:49:11 compute-0 ceph-mon[75071]: 8.1f scrub ok
Dec 13 03:49:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:13 compute-0 ceph-mon[75071]: pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:14 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Dec 13 03:49:14 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Dec 13 03:49:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.a scrub starts
Dec 13 03:49:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.a scrub ok
Dec 13 03:49:15 compute-0 sshd-session[109892]: Accepted publickey for zuul from 192.168.122.30 port 40316 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:49:15 compute-0 systemd-logind[796]: New session 37 of user zuul.
Dec 13 03:49:15 compute-0 systemd[1]: Started Session 37 of User zuul.
Dec 13 03:49:15 compute-0 sshd-session[109892]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:49:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Dec 13 03:49:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Dec 13 03:49:15 compute-0 ceph-mon[75071]: 4.10 scrub starts
Dec 13 03:49:15 compute-0 ceph-mon[75071]: 4.10 scrub ok
Dec 13 03:49:15 compute-0 ceph-mon[75071]: pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:15 compute-0 ceph-mon[75071]: 6.a scrub starts
Dec 13 03:49:15 compute-0 ceph-mon[75071]: 6.a scrub ok
Dec 13 03:49:16 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Dec 13 03:49:16 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Dec 13 03:49:16 compute-0 python3.9[110045]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:49:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:16 compute-0 ceph-mon[75071]: 6.5 scrub starts
Dec 13 03:49:16 compute-0 ceph-mon[75071]: 6.5 scrub ok
Dec 13 03:49:17 compute-0 python3.9[110199]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:49:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:17 compute-0 ceph-mon[75071]: 5.11 scrub starts
Dec 13 03:49:17 compute-0 ceph-mon[75071]: 5.11 scrub ok
Dec 13 03:49:17 compute-0 ceph-mon[75071]: pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:18 compute-0 python3.9[110392]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:49:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:18 compute-0 sshd-session[109895]: Connection closed by 192.168.122.30 port 40316
Dec 13 03:49:18 compute-0 sshd-session[109892]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:49:18 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Dec 13 03:49:18 compute-0 systemd[1]: session-37.scope: Consumed 2.181s CPU time.
Dec 13 03:49:18 compute-0 systemd-logind[796]: Session 37 logged out. Waiting for processes to exit.
Dec 13 03:49:18 compute-0 systemd-logind[796]: Removed session 37.
Dec 13 03:49:19 compute-0 ceph-mon[75071]: pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:20 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Dec 13 03:49:20 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Dec 13 03:49:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:20 compute-0 ceph-mon[75071]: 11.1a scrub starts
Dec 13 03:49:20 compute-0 ceph-mon[75071]: 11.1a scrub ok
Dec 13 03:49:21 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Dec 13 03:49:21 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Dec 13 03:49:21 compute-0 ceph-mon[75071]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:21 compute-0 ceph-mon[75071]: 11.1b scrub starts
Dec 13 03:49:21 compute-0 ceph-mon[75071]: 11.1b scrub ok
Dec 13 03:49:22 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Dec 13 03:49:22 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Dec 13 03:49:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:23 compute-0 sshd-session[110418]: Accepted publickey for zuul from 192.168.122.30 port 47300 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:49:23 compute-0 systemd-logind[796]: New session 38 of user zuul.
Dec 13 03:49:23 compute-0 systemd[1]: Started Session 38 of User zuul.
Dec 13 03:49:23 compute-0 sshd-session[110418]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:49:24 compute-0 ceph-mon[75071]: 5.13 scrub starts
Dec 13 03:49:24 compute-0 ceph-mon[75071]: 5.13 scrub ok
Dec 13 03:49:24 compute-0 ceph-mon[75071]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:24 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Dec 13 03:49:24 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Dec 13 03:49:24 compute-0 python3.9[110571]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:49:25 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Dec 13 03:49:25 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Dec 13 03:49:25 compute-0 python3.9[110725]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:49:26 compute-0 ceph-mon[75071]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:26 compute-0 ceph-mon[75071]: 6.9 scrub starts
Dec 13 03:49:26 compute-0 ceph-mon[75071]: 6.9 scrub ok
Dec 13 03:49:26 compute-0 ceph-mon[75071]: 11.1c scrub starts
Dec 13 03:49:26 compute-0 ceph-mon[75071]: 11.1c scrub ok
Dec 13 03:49:26 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Dec 13 03:49:26 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Dec 13 03:49:26 compute-0 sudo[110879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfazuhxaqaonfopmncrnbkbbynqgzndr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597766.1292021-40-191379492177774/AnsiballZ_setup.py'
Dec 13 03:49:26 compute-0 sudo[110879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:26 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Dec 13 03:49:26 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Dec 13 03:49:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:26 compute-0 python3.9[110881]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:49:26 compute-0 sudo[110879]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:27 compute-0 ceph-mon[75071]: 11.1e scrub starts
Dec 13 03:49:27 compute-0 ceph-mon[75071]: 11.1e scrub ok
Dec 13 03:49:27 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Dec 13 03:49:27 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Dec 13 03:49:27 compute-0 sudo[110963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkcaacsuneunehojhqxtzkeoadikkxyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597766.1292021-40-191379492177774/AnsiballZ_dnf.py'
Dec 13 03:49:27 compute-0 sudo[110963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:27 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Dec 13 03:49:27 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Dec 13 03:49:27 compute-0 python3.9[110965]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:49:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:28 compute-0 ceph-mon[75071]: 2.17 scrub starts
Dec 13 03:49:28 compute-0 ceph-mon[75071]: 2.17 scrub ok
Dec 13 03:49:28 compute-0 ceph-mon[75071]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:28 compute-0 ceph-mon[75071]: 11.1f scrub starts
Dec 13 03:49:28 compute-0 ceph-mon[75071]: 11.1f scrub ok
Dec 13 03:49:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:28 compute-0 sudo[110963]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:29 compute-0 ceph-mon[75071]: 10.12 scrub starts
Dec 13 03:49:29 compute-0 ceph-mon[75071]: 10.12 scrub ok
Dec 13 03:49:29 compute-0 sudo[111116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wevmsnynnpghsmugzqxbqwugthxbbzbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597768.9537737-52-88892257911608/AnsiballZ_setup.py'
Dec 13 03:49:29 compute-0 sudo[111116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:29 compute-0 python3.9[111118]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:49:29 compute-0 sudo[111116]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:30 compute-0 ceph-mon[75071]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:30 compute-0 sudo[111311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbaeyllqsyzykuovxizaodiwsynuolbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597770.0050344-63-16130242816510/AnsiballZ_file.py'
Dec 13 03:49:30 compute-0 sudo[111311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:30 compute-0 python3.9[111313]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:49:30 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Dec 13 03:49:30 compute-0 sudo[111311]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:30 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Dec 13 03:49:31 compute-0 ceph-mon[75071]: 6.7 scrub starts
Dec 13 03:49:31 compute-0 ceph-mon[75071]: 6.7 scrub ok
Dec 13 03:49:31 compute-0 sudo[111463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxpohukrulclmvinwdlvuiuvgywuhirx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597770.7801118-71-127119302953407/AnsiballZ_command.py'
Dec 13 03:49:31 compute-0 sudo[111463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:31 compute-0 python3.9[111465]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:49:31 compute-0 sudo[111463]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:32 compute-0 sudo[111628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqgsciktwhkuumfnfglccfqzpzmsvxcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597771.563992-79-115016576147612/AnsiballZ_stat.py'
Dec 13 03:49:32 compute-0 sudo[111628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:32 compute-0 ceph-mon[75071]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:32 compute-0 python3.9[111630]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:49:32 compute-0 sudo[111628]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:32 compute-0 sudo[111706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmemqihhjlwftemuanatysulhbxpnpol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597771.563992-79-115016576147612/AnsiballZ_file.py'
Dec 13 03:49:32 compute-0 sudo[111706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:32 compute-0 python3.9[111708]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:49:32 compute-0 sudo[111706]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:33 compute-0 sudo[111858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-notkbjrucyodorrwkdweqlwgdqpjpptv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597772.873358-91-144504727012418/AnsiballZ_stat.py'
Dec 13 03:49:33 compute-0 sudo[111858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:33 compute-0 python3.9[111860]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:49:33 compute-0 sudo[111858]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:33 compute-0 sudo[111936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qppnvpeqxioxhlkrczhnmcoxgobhzhmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597772.873358-91-144504727012418/AnsiballZ_file.py'
Dec 13 03:49:33 compute-0 sudo[111936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:33 compute-0 python3.9[111938]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:49:33 compute-0 sudo[111936]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:34 compute-0 ceph-mon[75071]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:34 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Dec 13 03:49:34 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Dec 13 03:49:34 compute-0 sudo[112088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktkxpfeatzciwykgvritnwafwesndpcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597773.9223347-104-161049497647141/AnsiballZ_ini_file.py'
Dec 13 03:49:34 compute-0 sudo[112088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:34 compute-0 python3.9[112090]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:49:34 compute-0 sudo[112088]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:34 compute-0 sudo[112240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kveushtvirwfsnhxvqkamgbsibdfnjhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597774.6297283-104-159555581130217/AnsiballZ_ini_file.py'
Dec 13 03:49:34 compute-0 sudo[112240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:35 compute-0 python3.9[112242]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:49:35 compute-0 sudo[112240]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:35 compute-0 ceph-mon[75071]: 11.11 scrub starts
Dec 13 03:49:35 compute-0 ceph-mon[75071]: 11.11 scrub ok
Dec 13 03:49:35 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Dec 13 03:49:35 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Dec 13 03:49:35 compute-0 sudo[112392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzrxsrgqaudlhcldydfjdnfamddsywwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597775.1970873-104-121675325022745/AnsiballZ_ini_file.py'
Dec 13 03:49:35 compute-0 sudo[112392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:35 compute-0 python3.9[112394]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:49:35 compute-0 sudo[112392]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Dec 13 03:49:35 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Dec 13 03:49:36 compute-0 sudo[112544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcbkwcnjysudndhrnwultjaqxlpytawp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597775.7769134-104-62129084314702/AnsiballZ_ini_file.py'
Dec 13 03:49:36 compute-0 sudo[112544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:36 compute-0 ceph-mon[75071]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:36 compute-0 ceph-mon[75071]: 6.3 scrub starts
Dec 13 03:49:36 compute-0 python3.9[112546]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:49:36 compute-0 sudo[112544]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:36 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Dec 13 03:49:36 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Dec 13 03:49:36 compute-0 sudo[112696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abhzygbbtrqevfveunmgygklkhrpuflw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597776.45903-135-136575188211648/AnsiballZ_dnf.py'
Dec 13 03:49:36 compute-0 sudo[112696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:36 compute-0 python3.9[112698]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:49:37 compute-0 ceph-mon[75071]: 11.2 scrub starts
Dec 13 03:49:37 compute-0 ceph-mon[75071]: 11.2 scrub ok
Dec 13 03:49:37 compute-0 ceph-mon[75071]: 6.3 scrub ok
Dec 13 03:49:37 compute-0 ceph-mon[75071]: 6.0 scrub starts
Dec 13 03:49:37 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Dec 13 03:49:37 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Dec 13 03:49:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:38 compute-0 ceph-mon[75071]: pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:38 compute-0 ceph-mon[75071]: 6.0 scrub ok
Dec 13 03:49:38 compute-0 sudo[112696]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:38 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Dec 13 03:49:38 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Dec 13 03:49:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:38 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Dec 13 03:49:38 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Dec 13 03:49:39 compute-0 sudo[112849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gguwpnohowusjcqkbkquleprsokgohfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597778.7652078-146-121110073124464/AnsiballZ_setup.py'
Dec 13 03:49:39 compute-0 sudo[112849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:39 compute-0 ceph-mon[75071]: 11.9 scrub starts
Dec 13 03:49:39 compute-0 ceph-mon[75071]: 11.9 scrub ok
Dec 13 03:49:39 compute-0 ceph-mon[75071]: 11.14 scrub starts
Dec 13 03:49:39 compute-0 ceph-mon[75071]: 11.14 scrub ok
Dec 13 03:49:39 compute-0 python3.9[112851]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:49:39 compute-0 sudo[112849]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:39 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 6.f scrub starts
Dec 13 03:49:39 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 6.f scrub ok
Dec 13 03:49:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Dec 13 03:49:39 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Dec 13 03:49:39 compute-0 sudo[113003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usdjxaydebzacllqtzxuvizoloycwloc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597779.4913626-154-126544538477962/AnsiballZ_stat.py'
Dec 13 03:49:39 compute-0 sudo[113003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:39 compute-0 python3.9[113005]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:49:39 compute-0 sudo[113003]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:40 compute-0 ceph-mon[75071]: 10.14 scrub starts
Dec 13 03:49:40 compute-0 ceph-mon[75071]: 10.14 scrub ok
Dec 13 03:49:40 compute-0 ceph-mon[75071]: pgmap v294: 305 pgs: 305 active+clean; 460 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:40 compute-0 sudo[113155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwxzrumjdhhcuhmzuzttvmsejbouauze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597780.1348407-163-182833552065681/AnsiballZ_stat.py'
Dec 13 03:49:40 compute-0 sudo[113155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:49:40
Dec 13 03:49:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:49:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:49:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', '.rgw.root', 'vms', 'default.rgw.control', 'images', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data']
Dec 13 03:49:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:49:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:40 compute-0 python3.9[113157]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:49:40 compute-0 sudo[113155]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:40 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.f scrub starts
Dec 13 03:49:40 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.f scrub ok
Dec 13 03:49:41 compute-0 sudo[113307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmrmfpckjqrsjbewdvsxjcqkduglhema ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597780.7969341-173-30171075541775/AnsiballZ_command.py'
Dec 13 03:49:41 compute-0 sudo[113307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:41 compute-0 ceph-mon[75071]: 6.f scrub starts
Dec 13 03:49:41 compute-0 ceph-mon[75071]: 6.f scrub ok
Dec 13 03:49:41 compute-0 ceph-mon[75071]: 11.10 scrub starts
Dec 13 03:49:41 compute-0 ceph-mon[75071]: 11.10 scrub ok
Dec 13 03:49:41 compute-0 python3.9[113309]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:49:41 compute-0 sudo[113307]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:41 compute-0 sudo[113460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpbhmrgzktsmgsaiefupuvsdguwaeemv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597781.5213027-183-43987104239357/AnsiballZ_service_facts.py'
Dec 13 03:49:41 compute-0 sudo[113460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:42 compute-0 python3.9[113462]: ansible-service_facts Invoked
Dec 13 03:49:42 compute-0 network[113479]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:49:42 compute-0 network[113480]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:49:42 compute-0 network[113481]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:49:42 compute-0 ceph-mon[75071]: pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:42 compute-0 ceph-mon[75071]: 11.f scrub starts
Dec 13 03:49:42 compute-0 ceph-mon[75071]: 11.f scrub ok
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:49:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:44 compute-0 ceph-mon[75071]: pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:44 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Dec 13 03:49:44 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Dec 13 03:49:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:45 compute-0 ceph-mon[75071]: 11.13 scrub starts
Dec 13 03:49:45 compute-0 ceph-mon[75071]: 11.13 scrub ok
Dec 13 03:49:45 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Dec 13 03:49:45 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Dec 13 03:49:45 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Dec 13 03:49:45 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Dec 13 03:49:45 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Dec 13 03:49:45 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Dec 13 03:49:45 compute-0 sudo[113460]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:46 compute-0 ceph-mon[75071]: pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:46 compute-0 ceph-mon[75071]: 11.16 scrub starts
Dec 13 03:49:46 compute-0 ceph-mon[75071]: 11.16 scrub ok
Dec 13 03:49:46 compute-0 ceph-mon[75071]: 11.4 scrub starts
Dec 13 03:49:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:46 compute-0 sudo[113764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okesreeljeejywlsosubfjfggalerjdw ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1765597786.3425658-198-266153069502899/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1765597786.3425658-198-266153069502899/args'
Dec 13 03:49:46 compute-0 sudo[113764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:46 compute-0 sudo[113764]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:47 compute-0 sudo[113931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtmusjyplzzzenywfplccjltuwjmnwgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597786.95632-209-77606083906346/AnsiballZ_dnf.py'
Dec 13 03:49:47 compute-0 sudo[113931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:47 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Dec 13 03:49:47 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Dec 13 03:49:47 compute-0 ceph-mon[75071]: 9.8 scrub starts
Dec 13 03:49:47 compute-0 ceph-mon[75071]: 9.8 scrub ok
Dec 13 03:49:47 compute-0 ceph-mon[75071]: 11.4 scrub ok
Dec 13 03:49:47 compute-0 python3.9[113933]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:49:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.c scrub starts
Dec 13 03:49:48 compute-0 ceph-mon[75071]: pgmap v298: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:48 compute-0 ceph-mon[75071]: 11.0 scrub starts
Dec 13 03:49:48 compute-0 ceph-mon[75071]: 11.0 scrub ok
Dec 13 03:49:48 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.c scrub ok
Dec 13 03:49:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:48 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.e scrub starts
Dec 13 03:49:48 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.e scrub ok
Dec 13 03:49:48 compute-0 sudo[113931]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:49 compute-0 ceph-mon[75071]: 11.c scrub starts
Dec 13 03:49:49 compute-0 ceph-mon[75071]: 11.c scrub ok
Dec 13 03:49:49 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Dec 13 03:49:49 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Dec 13 03:49:49 compute-0 sudo[114084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfdjceclrgukghbcwzfmupeljznhaycu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597789.1249564-222-210299460128594/AnsiballZ_package_facts.py'
Dec 13 03:49:49 compute-0 sudo[114084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:49 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Dec 13 03:49:49 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Dec 13 03:49:50 compute-0 python3.9[114086]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec 13 03:49:50 compute-0 sudo[114084]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:50 compute-0 ceph-mon[75071]: pgmap v299: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:50 compute-0 ceph-mon[75071]: 9.e scrub starts
Dec 13 03:49:50 compute-0 ceph-mon[75071]: 9.e scrub ok
Dec 13 03:49:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:50 compute-0 sudo[114236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkkmfrjnanvemybgvbjlczxmnupdvkmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597790.5928042-232-107014703056405/AnsiballZ_stat.py'
Dec 13 03:49:50 compute-0 sudo[114236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:51 compute-0 python3.9[114238]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:49:51 compute-0 sudo[114236]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:51 compute-0 sudo[114314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eynodfevhtdlxzmxjtezfbhpuzhkgggt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597790.5928042-232-107014703056405/AnsiballZ_file.py'
Dec 13 03:49:51 compute-0 sudo[114314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:51 compute-0 ceph-mon[75071]: 9.17 scrub starts
Dec 13 03:49:51 compute-0 ceph-mon[75071]: 9.17 scrub ok
Dec 13 03:49:51 compute-0 ceph-mon[75071]: 11.19 scrub starts
Dec 13 03:49:51 compute-0 ceph-mon[75071]: 11.19 scrub ok
Dec 13 03:49:51 compute-0 python3.9[114316]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:49:51 compute-0 sudo[114314]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:52 compute-0 sudo[114466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnvfdwganmpyejgowdxemljehhlnaony ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597791.743504-244-52633045683046/AnsiballZ_stat.py'
Dec 13 03:49:52 compute-0 sudo[114466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:49:52 compute-0 python3.9[114468]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:49:52 compute-0 sudo[114466]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:52 compute-0 ceph-mon[75071]: pgmap v300: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:52 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.a scrub starts
Dec 13 03:49:52 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.a scrub ok
Dec 13 03:49:52 compute-0 sudo[114544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuptgthvixdjsyfjnporpzenlhsjwgkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597791.743504-244-52633045683046/AnsiballZ_file.py'
Dec 13 03:49:52 compute-0 sudo[114544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:52 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.f scrub starts
Dec 13 03:49:52 compute-0 python3.9[114546]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:49:52 compute-0 sudo[114544]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:52 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.f scrub ok
Dec 13 03:49:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:53 compute-0 ceph-mon[75071]: 11.a scrub starts
Dec 13 03:49:53 compute-0 ceph-mon[75071]: 11.a scrub ok
Dec 13 03:49:53 compute-0 ceph-mon[75071]: pgmap v301: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:53 compute-0 ceph-mon[75071]: 9.f scrub starts
Dec 13 03:49:53 compute-0 ceph-mon[75071]: 9.f scrub ok
Dec 13 03:49:53 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Dec 13 03:49:53 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Dec 13 03:49:53 compute-0 sudo[114696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkupelqyptmosoygsmxzdzqitwwlykwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597793.0150936-262-217181528808211/AnsiballZ_lineinfile.py'
Dec 13 03:49:53 compute-0 sudo[114696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:53 compute-0 python3.9[114698]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:49:53 compute-0 sudo[114696]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:54 compute-0 ceph-mon[75071]: 11.5 scrub starts
Dec 13 03:49:54 compute-0 ceph-mon[75071]: 11.5 scrub ok
Dec 13 03:49:54 compute-0 sudo[114848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzsgaclkxgvpciusdfuqauuqgcrsdgvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597794.0912755-277-192351460589368/AnsiballZ_setup.py'
Dec 13 03:49:54 compute-0 sudo[114848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:54 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.c scrub starts
Dec 13 03:49:54 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.c scrub ok
Dec 13 03:49:54 compute-0 python3.9[114850]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:49:54 compute-0 sudo[114848]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Dec 13 03:49:55 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Dec 13 03:49:55 compute-0 ceph-mon[75071]: pgmap v302: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:55 compute-0 ceph-mon[75071]: 9.c scrub starts
Dec 13 03:49:55 compute-0 ceph-mon[75071]: 9.c scrub ok
Dec 13 03:49:55 compute-0 sudo[114932]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqoekavuzgzgghbagufbjvcffcmxbmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597794.0912755-277-192351460589368/AnsiballZ_systemd.py'
Dec 13 03:49:55 compute-0 sudo[114932]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:49:55 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Dec 13 03:49:55 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Dec 13 03:49:55 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.e scrub starts
Dec 13 03:49:55 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.e scrub ok
Dec 13 03:49:55 compute-0 python3.9[114934]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:49:55 compute-0 sudo[114932]: pam_unix(sudo:session): session closed for user root
Dec 13 03:49:56 compute-0 ceph-mon[75071]: 11.7 scrub starts
Dec 13 03:49:56 compute-0 ceph-mon[75071]: 11.7 scrub ok
Dec 13 03:49:56 compute-0 ceph-mon[75071]: 9.7 scrub starts
Dec 13 03:49:56 compute-0 ceph-mon[75071]: 9.7 scrub ok
Dec 13 03:49:56 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Dec 13 03:49:56 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Dec 13 03:49:56 compute-0 sshd-session[110421]: Connection closed by 192.168.122.30 port 47300
Dec 13 03:49:56 compute-0 sshd-session[110418]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:49:56 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Dec 13 03:49:56 compute-0 systemd[1]: session-38.scope: Consumed 22.686s CPU time.
Dec 13 03:49:56 compute-0 systemd-logind[796]: Session 38 logged out. Waiting for processes to exit.
Dec 13 03:49:56 compute-0 systemd-logind[796]: Removed session 38.
Dec 13 03:49:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:56 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Dec 13 03:49:56 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Dec 13 03:49:57 compute-0 ceph-mon[75071]: 11.e scrub starts
Dec 13 03:49:57 compute-0 ceph-mon[75071]: 11.e scrub ok
Dec 13 03:49:57 compute-0 ceph-mon[75071]: 11.1d scrub starts
Dec 13 03:49:57 compute-0 ceph-mon[75071]: 11.1d scrub ok
Dec 13 03:49:57 compute-0 ceph-mon[75071]: pgmap v303: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:57 compute-0 ceph-mon[75071]: 9.6 scrub starts
Dec 13 03:49:57 compute-0 ceph-mon[75071]: 9.6 scrub ok
Dec 13 03:49:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:49:58 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Dec 13 03:49:58 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Dec 13 03:49:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Dec 13 03:49:59 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Dec 13 03:49:59 compute-0 ceph-mon[75071]: 6.2 scrub starts
Dec 13 03:49:59 compute-0 ceph-mon[75071]: 6.2 scrub ok
Dec 13 03:49:59 compute-0 ceph-mon[75071]: pgmap v304: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:49:59 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Dec 13 03:49:59 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Dec 13 03:50:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:00 compute-0 ceph-mon[75071]: 6.6 scrub starts
Dec 13 03:50:00 compute-0 ceph-mon[75071]: 6.6 scrub ok
Dec 13 03:50:00 compute-0 ceph-mon[75071]: 9.19 scrub starts
Dec 13 03:50:00 compute-0 ceph-mon[75071]: 9.19 scrub ok
Dec 13 03:50:01 compute-0 ceph-mon[75071]: pgmap v305: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:01 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Dec 13 03:50:01 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Dec 13 03:50:02 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Dec 13 03:50:02 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Dec 13 03:50:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:02 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Dec 13 03:50:02 compute-0 sshd-session[114961]: Accepted publickey for zuul from 192.168.122.30 port 41314 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:50:02 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Dec 13 03:50:02 compute-0 systemd-logind[796]: New session 39 of user zuul.
Dec 13 03:50:02 compute-0 systemd[1]: Started Session 39 of User zuul.
Dec 13 03:50:02 compute-0 ceph-mon[75071]: 11.17 scrub starts
Dec 13 03:50:02 compute-0 sshd-session[114961]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:50:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:03 compute-0 sudo[115114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmldbldzoblgflvwdzbexonxedicfenj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597802.7131732-22-165346868096248/AnsiballZ_file.py'
Dec 13 03:50:03 compute-0 sudo[115114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:03 compute-0 python3.9[115116]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:03 compute-0 sudo[115114]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:03 compute-0 ceph-mon[75071]: 11.17 scrub ok
Dec 13 03:50:03 compute-0 ceph-mon[75071]: 6.4 scrub starts
Dec 13 03:50:03 compute-0 ceph-mon[75071]: 6.4 scrub ok
Dec 13 03:50:03 compute-0 ceph-mon[75071]: pgmap v306: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:03 compute-0 ceph-mon[75071]: 9.18 scrub starts
Dec 13 03:50:03 compute-0 ceph-mon[75071]: 9.18 scrub ok
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.642409) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597803642529, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7295, "num_deletes": 251, "total_data_size": 9833692, "memory_usage": 10040144, "flush_reason": "Manual Compaction"}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597803691534, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7773050, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7438, "table_properties": {"data_size": 7745854, "index_size": 17801, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 77114, "raw_average_key_size": 23, "raw_value_size": 7682025, "raw_average_value_size": 2318, "num_data_blocks": 781, "num_entries": 3313, "num_filter_entries": 3313, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597395, "oldest_key_time": 1765597395, "file_creation_time": 1765597803, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 49428 microseconds, and 15048 cpu microseconds.
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.691854) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7773050 bytes OK
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.691971) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.693298) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.693314) EVENT_LOG_v1 {"time_micros": 1765597803693309, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.693361) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9801792, prev total WAL file size 9801792, number of live WAL files 2.
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.696136) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7590KB) 13(58KB) 8(1944B)]
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597803696255, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7834952, "oldest_snapshot_seqno": -1}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3139 keys, 7787777 bytes, temperature: kUnknown
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597803751157, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7787777, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7761013, "index_size": 17822, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7877, "raw_key_size": 75554, "raw_average_key_size": 24, "raw_value_size": 7698545, "raw_average_value_size": 2452, "num_data_blocks": 783, "num_entries": 3139, "num_filter_entries": 3139, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765597803, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.751545) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7787777 bytes
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.753112) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.2 rd, 141.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3428, records dropped: 289 output_compression: NoCompression
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.753134) EVENT_LOG_v1 {"time_micros": 1765597803753123, "job": 4, "event": "compaction_finished", "compaction_time_micros": 55115, "compaction_time_cpu_micros": 18569, "output_level": 6, "num_output_files": 1, "total_output_size": 7787777, "num_input_records": 3428, "num_output_records": 3139, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597803754698, "job": 4, "event": "table_file_deletion", "file_number": 19}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597803754767, "job": 4, "event": "table_file_deletion", "file_number": 13}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597803754811, "job": 4, "event": "table_file_deletion", "file_number": 8}
Dec 13 03:50:03 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:50:03.695885) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:50:03 compute-0 sudo[115267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxyqdaanxuqjvkyxpiulkzjmudojdklp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597803.5174878-34-36198155386056/AnsiballZ_stat.py'
Dec 13 03:50:03 compute-0 sudo[115267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:04 compute-0 python3.9[115269]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:04 compute-0 sudo[115267]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:04 compute-0 sudo[115345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyqluyeythwhavygcfnwpxjflsncsxcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597803.5174878-34-36198155386056/AnsiballZ_file.py'
Dec 13 03:50:04 compute-0 sudo[115345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:04 compute-0 python3.9[115347]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:04 compute-0 sudo[115345]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:04 compute-0 sudo[115372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:50:04 compute-0 sudo[115372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:04 compute-0 sudo[115372]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:04 compute-0 sudo[115397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:50:04 compute-0 sudo[115397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:04 compute-0 sshd-session[114964]: Connection closed by 192.168.122.30 port 41314
Dec 13 03:50:04 compute-0 sshd-session[114961]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:50:04 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Dec 13 03:50:04 compute-0 systemd[1]: session-39.scope: Consumed 1.414s CPU time.
Dec 13 03:50:04 compute-0 systemd-logind[796]: Session 39 logged out. Waiting for processes to exit.
Dec 13 03:50:04 compute-0 systemd-logind[796]: Removed session 39.
Dec 13 03:50:05 compute-0 sudo[115397]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:50:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:50:05 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:50:05 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:50:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:50:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:50:05 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:50:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:50:05 compute-0 sudo[115453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:50:05 compute-0 sudo[115453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:05 compute-0 sudo[115453]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:05 compute-0 sudo[115478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:50:05 compute-0 sudo[115478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:05 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Dec 13 03:50:05 compute-0 ceph-osd[87731]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Dec 13 03:50:05 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Dec 13 03:50:05 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Dec 13 03:50:05 compute-0 ceph-mon[75071]: pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:50:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:50:05 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:50:05 compute-0 podman[115515]: 2025-12-13 03:50:05.693007228 +0000 UTC m=+0.044330964 container create 3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_beaver, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:50:05 compute-0 systemd[1]: Started libpod-conmon-3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69.scope.
Dec 13 03:50:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:50:05 compute-0 podman[115515]: 2025-12-13 03:50:05.67199426 +0000 UTC m=+0.023318026 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:05 compute-0 podman[115515]: 2025-12-13 03:50:05.778998765 +0000 UTC m=+0.130322511 container init 3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_beaver, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:50:05 compute-0 podman[115515]: 2025-12-13 03:50:05.788423316 +0000 UTC m=+0.139747052 container start 3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_beaver, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Dec 13 03:50:05 compute-0 podman[115515]: 2025-12-13 03:50:05.791619037 +0000 UTC m=+0.142942793 container attach 3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_beaver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:50:05 compute-0 vigilant_beaver[115531]: 167 167
Dec 13 03:50:05 compute-0 systemd[1]: libpod-3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69.scope: Deactivated successfully.
Dec 13 03:50:05 compute-0 podman[115515]: 2025-12-13 03:50:05.807132434 +0000 UTC m=+0.158456170 container died 3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_beaver, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:50:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-f77e85d9db979d4a0ea8d4cfff15122874b7142b01011877969d2391165c142f-merged.mount: Deactivated successfully.
Dec 13 03:50:05 compute-0 podman[115515]: 2025-12-13 03:50:05.843292087 +0000 UTC m=+0.194615823 container remove 3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_beaver, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:50:05 compute-0 systemd[1]: libpod-conmon-3cecc6021305d13d8b535fa9128e602f31a76b61b42a0119bd9519186024ff69.scope: Deactivated successfully.
Dec 13 03:50:06 compute-0 podman[115556]: 2025-12-13 03:50:06.013556628 +0000 UTC m=+0.039764717 container create 66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:50:06 compute-0 systemd[1]: Started libpod-conmon-66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10.scope.
Dec 13 03:50:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad68fe05cce29bf294fc1b9499ab42834e3235f17d5b656e74e57d8967cc0aad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad68fe05cce29bf294fc1b9499ab42834e3235f17d5b656e74e57d8967cc0aad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad68fe05cce29bf294fc1b9499ab42834e3235f17d5b656e74e57d8967cc0aad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad68fe05cce29bf294fc1b9499ab42834e3235f17d5b656e74e57d8967cc0aad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad68fe05cce29bf294fc1b9499ab42834e3235f17d5b656e74e57d8967cc0aad/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:06 compute-0 podman[115556]: 2025-12-13 03:50:05.99682456 +0000 UTC m=+0.023032569 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:06 compute-0 podman[115556]: 2025-12-13 03:50:06.10482622 +0000 UTC m=+0.131034209 container init 66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_nightingale, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:50:06 compute-0 podman[115556]: 2025-12-13 03:50:06.114065576 +0000 UTC m=+0.140273555 container start 66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 03:50:06 compute-0 podman[115556]: 2025-12-13 03:50:06.117321389 +0000 UTC m=+0.143529378 container attach 66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_nightingale, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:50:06 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.d scrub starts
Dec 13 03:50:06 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.d scrub ok
Dec 13 03:50:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:06 compute-0 great_nightingale[115573]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:50:06 compute-0 great_nightingale[115573]: --> All data devices are unavailable
Dec 13 03:50:06 compute-0 ceph-mon[75071]: 9.13 scrub starts
Dec 13 03:50:06 compute-0 ceph-mon[75071]: 9.13 scrub ok
Dec 13 03:50:06 compute-0 ceph-mon[75071]: 11.6 scrub starts
Dec 13 03:50:06 compute-0 ceph-mon[75071]: 11.6 scrub ok
Dec 13 03:50:06 compute-0 systemd[1]: libpod-66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10.scope: Deactivated successfully.
Dec 13 03:50:06 compute-0 podman[115556]: 2025-12-13 03:50:06.657210104 +0000 UTC m=+0.683418093 container died 66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:50:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad68fe05cce29bf294fc1b9499ab42834e3235f17d5b656e74e57d8967cc0aad-merged.mount: Deactivated successfully.
Dec 13 03:50:06 compute-0 podman[115556]: 2025-12-13 03:50:06.698584941 +0000 UTC m=+0.724792930 container remove 66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:50:06 compute-0 systemd[1]: libpod-conmon-66838feed02efaa8da995ec511a943c531fc288a81f44986594abb6397febe10.scope: Deactivated successfully.
Dec 13 03:50:06 compute-0 sudo[115478]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:06 compute-0 sudo[115604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:50:06 compute-0 sudo[115604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:06 compute-0 sudo[115604]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:06 compute-0 sudo[115629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:50:06 compute-0 sudo[115629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:07 compute-0 podman[115667]: 2025-12-13 03:50:07.128216288 +0000 UTC m=+0.036864783 container create fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:50:07 compute-0 systemd[1]: Started libpod-conmon-fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871.scope.
Dec 13 03:50:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:50:07 compute-0 podman[115667]: 2025-12-13 03:50:07.192400428 +0000 UTC m=+0.101048933 container init fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:50:07 compute-0 podman[115667]: 2025-12-13 03:50:07.199322535 +0000 UTC m=+0.107971010 container start fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wescoff, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:50:07 compute-0 eloquent_wescoff[115683]: 167 167
Dec 13 03:50:07 compute-0 podman[115667]: 2025-12-13 03:50:07.203150673 +0000 UTC m=+0.111799168 container attach fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:50:07 compute-0 systemd[1]: libpod-fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871.scope: Deactivated successfully.
Dec 13 03:50:07 compute-0 podman[115667]: 2025-12-13 03:50:07.204695502 +0000 UTC m=+0.113343987 container died fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wescoff, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:50:07 compute-0 podman[115667]: 2025-12-13 03:50:07.111594253 +0000 UTC m=+0.020242758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-64d3d0b006892869182023a8d22746172026de319e3ccecf79d023c3450c93f5-merged.mount: Deactivated successfully.
Dec 13 03:50:07 compute-0 podman[115667]: 2025-12-13 03:50:07.252023861 +0000 UTC m=+0.160672346 container remove fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_wescoff, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:50:07 compute-0 systemd[1]: libpod-conmon-fbddf18dc0f3c7926737135518350835ed07a307676524e4c199e7db15cd8871.scope: Deactivated successfully.
Dec 13 03:50:07 compute-0 podman[115707]: 2025-12-13 03:50:07.396030691 +0000 UTC m=+0.040661660 container create a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:50:07 compute-0 systemd[1]: Started libpod-conmon-a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107.scope.
Dec 13 03:50:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f781d042fc8753c41a90e0b9c665160488e817bc6b80de2dc873994992023/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f781d042fc8753c41a90e0b9c665160488e817bc6b80de2dc873994992023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f781d042fc8753c41a90e0b9c665160488e817bc6b80de2dc873994992023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f2f781d042fc8753c41a90e0b9c665160488e817bc6b80de2dc873994992023/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:07 compute-0 podman[115707]: 2025-12-13 03:50:07.378262637 +0000 UTC m=+0.022893656 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:07 compute-0 podman[115707]: 2025-12-13 03:50:07.475106071 +0000 UTC m=+0.119737050 container init a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_edison, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:50:07 compute-0 podman[115707]: 2025-12-13 03:50:07.488010512 +0000 UTC m=+0.132641491 container start a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_edison, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:50:07 compute-0 podman[115707]: 2025-12-13 03:50:07.49226459 +0000 UTC m=+0.136895579 container attach a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_edison, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 03:50:07 compute-0 ceph-mon[75071]: 6.d scrub starts
Dec 13 03:50:07 compute-0 ceph-mon[75071]: 6.d scrub ok
Dec 13 03:50:07 compute-0 ceph-mon[75071]: pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:07 compute-0 lucid_edison[115723]: {
Dec 13 03:50:07 compute-0 lucid_edison[115723]:     "0": [
Dec 13 03:50:07 compute-0 lucid_edison[115723]:         {
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "devices": [
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "/dev/loop3"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             ],
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_name": "ceph_lv0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_size": "21470642176",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "name": "ceph_lv0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "tags": {
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cluster_name": "ceph",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.crush_device_class": "",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.encrypted": "0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.objectstore": "bluestore",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osd_id": "0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.type": "block",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.vdo": "0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.with_tpm": "0"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             },
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "type": "block",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "vg_name": "ceph_vg0"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:         }
Dec 13 03:50:07 compute-0 lucid_edison[115723]:     ],
Dec 13 03:50:07 compute-0 lucid_edison[115723]:     "1": [
Dec 13 03:50:07 compute-0 lucid_edison[115723]:         {
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "devices": [
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "/dev/loop4"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             ],
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_name": "ceph_lv1",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_size": "21470642176",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "name": "ceph_lv1",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "tags": {
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cluster_name": "ceph",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.crush_device_class": "",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.encrypted": "0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.objectstore": "bluestore",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osd_id": "1",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.type": "block",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.vdo": "0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.with_tpm": "0"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             },
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "type": "block",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "vg_name": "ceph_vg1"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:         }
Dec 13 03:50:07 compute-0 lucid_edison[115723]:     ],
Dec 13 03:50:07 compute-0 lucid_edison[115723]:     "2": [
Dec 13 03:50:07 compute-0 lucid_edison[115723]:         {
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "devices": [
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "/dev/loop5"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             ],
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_name": "ceph_lv2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_size": "21470642176",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "name": "ceph_lv2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "tags": {
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.cluster_name": "ceph",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.crush_device_class": "",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.encrypted": "0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.objectstore": "bluestore",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osd_id": "2",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.type": "block",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.vdo": "0",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:                 "ceph.with_tpm": "0"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             },
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "type": "block",
Dec 13 03:50:07 compute-0 lucid_edison[115723]:             "vg_name": "ceph_vg2"
Dec 13 03:50:07 compute-0 lucid_edison[115723]:         }
Dec 13 03:50:07 compute-0 lucid_edison[115723]:     ]
Dec 13 03:50:07 compute-0 lucid_edison[115723]: }
Dec 13 03:50:07 compute-0 systemd[1]: libpod-a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107.scope: Deactivated successfully.
Dec 13 03:50:07 compute-0 podman[115707]: 2025-12-13 03:50:07.774880761 +0000 UTC m=+0.419511730 container died a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_edison, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:50:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-3f2f781d042fc8753c41a90e0b9c665160488e817bc6b80de2dc873994992023-merged.mount: Deactivated successfully.
Dec 13 03:50:07 compute-0 podman[115707]: 2025-12-13 03:50:07.814452192 +0000 UTC m=+0.459083171 container remove a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_edison, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:50:07 compute-0 systemd[1]: libpod-conmon-a92f46abe481e10949b68e0af3b17a6cf179e6859fcd8185c8407fe4cc772107.scope: Deactivated successfully.
Dec 13 03:50:07 compute-0 sudo[115629]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:07 compute-0 sudo[115742]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:50:07 compute-0 sudo[115742]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:07 compute-0 sudo[115742]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:07 compute-0 sudo[115767]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:50:07 compute-0 sudo[115767]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:08 compute-0 podman[115804]: 2025-12-13 03:50:08.260739474 +0000 UTC m=+0.046982960 container create b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:50:08 compute-0 systemd[1]: Started libpod-conmon-b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308.scope.
Dec 13 03:50:08 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.e scrub starts
Dec 13 03:50:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:50:08 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.e scrub ok
Dec 13 03:50:08 compute-0 podman[115804]: 2025-12-13 03:50:08.234779401 +0000 UTC m=+0.021022857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:08 compute-0 podman[115804]: 2025-12-13 03:50:08.340719888 +0000 UTC m=+0.126963364 container init b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:50:08 compute-0 podman[115804]: 2025-12-13 03:50:08.352033327 +0000 UTC m=+0.138276783 container start b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kowalevski, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 03:50:08 compute-0 podman[115804]: 2025-12-13 03:50:08.355575328 +0000 UTC m=+0.141818794 container attach b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:50:08 compute-0 admiring_kowalevski[115820]: 167 167
Dec 13 03:50:08 compute-0 systemd[1]: libpod-b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308.scope: Deactivated successfully.
Dec 13 03:50:08 compute-0 conmon[115820]: conmon b8c220a2c9f3530c851b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308.scope/container/memory.events
Dec 13 03:50:08 compute-0 podman[115804]: 2025-12-13 03:50:08.358435841 +0000 UTC m=+0.144679307 container died b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:50:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ea3a2e00af62561d118f4f1c58cf8af7099eb0400f10a87d4465687969cdfca-merged.mount: Deactivated successfully.
Dec 13 03:50:08 compute-0 podman[115804]: 2025-12-13 03:50:08.401905932 +0000 UTC m=+0.188149368 container remove b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_kowalevski, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:50:08 compute-0 systemd[1]: libpod-conmon-b8c220a2c9f3530c851b49eb5503fd872ad27c71f10a9d7d9eabf433f3c72308.scope: Deactivated successfully.
Dec 13 03:50:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:08 compute-0 podman[115842]: 2025-12-13 03:50:08.546492825 +0000 UTC m=+0.039529940 container create ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:50:08 compute-0 systemd[1]: Started libpod-conmon-ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b.scope.
Dec 13 03:50:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422610733f2465ee751269bd0f2e08a30b03e44cb3ed035939020f66cb60387c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422610733f2465ee751269bd0f2e08a30b03e44cb3ed035939020f66cb60387c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422610733f2465ee751269bd0f2e08a30b03e44cb3ed035939020f66cb60387c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/422610733f2465ee751269bd0f2e08a30b03e44cb3ed035939020f66cb60387c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:50:08 compute-0 podman[115842]: 2025-12-13 03:50:08.612533293 +0000 UTC m=+0.105570438 container init ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_robinson, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:50:08 compute-0 podman[115842]: 2025-12-13 03:50:08.619115441 +0000 UTC m=+0.112152556 container start ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_robinson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:50:08 compute-0 podman[115842]: 2025-12-13 03:50:08.622911679 +0000 UTC m=+0.115948814 container attach ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:50:08 compute-0 podman[115842]: 2025-12-13 03:50:08.529619364 +0000 UTC m=+0.022656509 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:50:09 compute-0 lvm[115938]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:50:09 compute-0 lvm[115937]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:50:09 compute-0 lvm[115937]: VG ceph_vg0 finished
Dec 13 03:50:09 compute-0 lvm[115938]: VG ceph_vg1 finished
Dec 13 03:50:09 compute-0 lvm[115940]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:50:09 compute-0 lvm[115940]: VG ceph_vg2 finished
Dec 13 03:50:09 compute-0 mystifying_robinson[115858]: {}
Dec 13 03:50:09 compute-0 systemd[1]: libpod-ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b.scope: Deactivated successfully.
Dec 13 03:50:09 compute-0 systemd[1]: libpod-ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b.scope: Consumed 1.313s CPU time.
Dec 13 03:50:09 compute-0 podman[115842]: 2025-12-13 03:50:09.439105843 +0000 UTC m=+0.932142978 container died ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_robinson, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:50:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-422610733f2465ee751269bd0f2e08a30b03e44cb3ed035939020f66cb60387c-merged.mount: Deactivated successfully.
Dec 13 03:50:09 compute-0 podman[115842]: 2025-12-13 03:50:09.48442624 +0000 UTC m=+0.977463355 container remove ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_robinson, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:50:09 compute-0 systemd[1]: libpod-conmon-ce72a6fae7f1aec371e7332014576b4908e6b26e10040cab43d00198bbe7277b.scope: Deactivated successfully.
Dec 13 03:50:09 compute-0 sudo[115767]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:50:09 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:50:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:50:09 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:50:09 compute-0 sudo[115955]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:50:09 compute-0 sudo[115955]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:50:09 compute-0 sudo[115955]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:09 compute-0 ceph-mon[75071]: 6.e scrub starts
Dec 13 03:50:09 compute-0 ceph-mon[75071]: 6.e scrub ok
Dec 13 03:50:09 compute-0 ceph-mon[75071]: pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:50:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:50:10 compute-0 sshd-session[115980]: Accepted publickey for zuul from 192.168.122.30 port 39912 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:50:10 compute-0 systemd-logind[796]: New session 40 of user zuul.
Dec 13 03:50:10 compute-0 systemd[1]: Started Session 40 of User zuul.
Dec 13 03:50:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:10 compute-0 sshd-session[115980]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:50:11 compute-0 python3.9[116133]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:50:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Dec 13 03:50:11 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Dec 13 03:50:11 compute-0 ceph-mon[75071]: pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:12 compute-0 sudo[116287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ronlhmfhxknyebqzallzexdspruomovg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597811.9562118-33-77608960698396/AnsiballZ_file.py'
Dec 13 03:50:12 compute-0 sudo[116287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:12 compute-0 python3.9[116289]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:12 compute-0 sudo[116287]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:12 compute-0 ceph-mon[75071]: 11.1 scrub starts
Dec 13 03:50:12 compute-0 ceph-mon[75071]: 11.1 scrub ok
Dec 13 03:50:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:13 compute-0 sudo[116462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yswyyhyfotnrnfzfnethplscdzqbdtis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597812.7177107-41-176325436452528/AnsiballZ_stat.py'
Dec 13 03:50:13 compute-0 sudo[116462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:13 compute-0 python3.9[116464]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:13 compute-0 sudo[116462]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:13 compute-0 sudo[116540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvcktphyrqhpbvkfkuflvmkhnonqeno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597812.7177107-41-176325436452528/AnsiballZ_file.py'
Dec 13 03:50:13 compute-0 sudo[116540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:13 compute-0 python3.9[116542]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.bp9yozzd recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:13 compute-0 sudo[116540]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:13 compute-0 ceph-mon[75071]: pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:14 compute-0 sudo[116692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvejguqakzgfkjswwnesepgobrztzsxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597814.0696688-61-86995128124285/AnsiballZ_stat.py'
Dec 13 03:50:14 compute-0 sudo[116692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:14 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Dec 13 03:50:14 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Dec 13 03:50:14 compute-0 python3.9[116694]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:14 compute-0 sudo[116692]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Dec 13 03:50:14 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Dec 13 03:50:14 compute-0 sudo[116770]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agkdkdhfjdxdylidlqcleivmiivgqbgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597814.0696688-61-86995128124285/AnsiballZ_file.py'
Dec 13 03:50:14 compute-0 sudo[116770]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:14 compute-0 python3.9[116772]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dv5e0r7i recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:14 compute-0 sudo[116770]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:14 compute-0 ceph-mon[75071]: 9.11 scrub starts
Dec 13 03:50:14 compute-0 ceph-mon[75071]: 9.11 scrub ok
Dec 13 03:50:15 compute-0 sudo[116922]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crbdhagexakgizqufncnomfugialjqlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597815.0894718-74-130734071833287/AnsiballZ_file.py'
Dec 13 03:50:15 compute-0 sudo[116922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:15 compute-0 python3.9[116924]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:50:15 compute-0 sudo[116922]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Dec 13 03:50:15 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Dec 13 03:50:15 compute-0 sudo[117074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-watdybzhxysvvobppsfarjlnbnecjdzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597815.7089617-82-144319576845407/AnsiballZ_stat.py'
Dec 13 03:50:15 compute-0 sudo[117074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:15 compute-0 ceph-mon[75071]: 6.1 scrub starts
Dec 13 03:50:15 compute-0 ceph-mon[75071]: 6.1 scrub ok
Dec 13 03:50:15 compute-0 ceph-mon[75071]: pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:15 compute-0 ceph-mon[75071]: 9.5 scrub starts
Dec 13 03:50:16 compute-0 python3.9[117076]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:16 compute-0 sudo[117074]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:16 compute-0 sudo[117152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaatcdqwkhoxqzrvstjtazdlaxcjatbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597815.7089617-82-144319576845407/AnsiballZ_file.py'
Dec 13 03:50:16 compute-0 sudo[117152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:16 compute-0 python3.9[117154]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:50:16 compute-0 sudo[117152]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:16 compute-0 sudo[117304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thkzzqfnjogbapeubcxtiumzjaxvdaif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597816.7062018-82-267296603686770/AnsiballZ_stat.py'
Dec 13 03:50:16 compute-0 sudo[117304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:16 compute-0 ceph-mon[75071]: 9.5 scrub ok
Dec 13 03:50:17 compute-0 python3.9[117306]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:17 compute-0 sudo[117304]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:17 compute-0 sudo[117382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hijhuvdvyhdmxkqlzdiccjbortpufnli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597816.7062018-82-267296603686770/AnsiballZ_file.py'
Dec 13 03:50:17 compute-0 sudo[117382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:17 compute-0 python3.9[117384]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:50:17 compute-0 sudo[117382]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:17 compute-0 sudo[117534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvvddukxqteuowvzlbhlkqlwpfhzumal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597817.687775-105-17787081154171/AnsiballZ_file.py'
Dec 13 03:50:17 compute-0 sudo[117534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:17 compute-0 ceph-mon[75071]: pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:18 compute-0 python3.9[117536]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:18 compute-0 sudo[117534]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.c scrub starts
Dec 13 03:50:18 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.c scrub ok
Dec 13 03:50:18 compute-0 sudo[117686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jczuktlkibcvgagxfitpdkjhjjjydxsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597818.3167975-113-275877063683200/AnsiballZ_stat.py'
Dec 13 03:50:18 compute-0 sudo[117686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:18 compute-0 python3.9[117688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:18 compute-0 sudo[117686]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:18 compute-0 sudo[117764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bydajpgbtnzaiwzbfdxbuahrhvnnxvrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597818.3167975-113-275877063683200/AnsiballZ_file.py'
Dec 13 03:50:18 compute-0 sudo[117764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:19 compute-0 python3.9[117766]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:19 compute-0 sudo[117764]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:19 compute-0 sudo[117916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvhtbkkpskxymdbmrhpfdhowmgebdppe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597819.2833989-125-126013073101242/AnsiballZ_stat.py'
Dec 13 03:50:19 compute-0 sudo[117916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:19 compute-0 python3.9[117918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:19 compute-0 sudo[117916]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:19 compute-0 sudo[117994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdktcgmzvclstkkhdvesntcgakkdndbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597819.2833989-125-126013073101242/AnsiballZ_file.py'
Dec 13 03:50:19 compute-0 sudo[117994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:19 compute-0 ceph-mon[75071]: 6.c scrub starts
Dec 13 03:50:19 compute-0 ceph-mon[75071]: 6.c scrub ok
Dec 13 03:50:19 compute-0 ceph-mon[75071]: pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:20 compute-0 python3.9[117996]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:20 compute-0 sudo[117994]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:20 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.b scrub starts
Dec 13 03:50:20 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 6.b scrub ok
Dec 13 03:50:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:20 compute-0 sudo[118146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xijupnxcoqqyanwbltnqxptrtrqhedru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597820.274289-137-74484747642111/AnsiballZ_systemd.py'
Dec 13 03:50:20 compute-0 sudo[118146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:21 compute-0 python3.9[118148]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:50:21 compute-0 systemd[1]: Reloading.
Dec 13 03:50:21 compute-0 systemd-sysv-generator[118179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:50:21 compute-0 systemd-rc-local-generator[118176]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:50:21 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Dec 13 03:50:21 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Dec 13 03:50:21 compute-0 sudo[118146]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:21 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.b scrub starts
Dec 13 03:50:21 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.b scrub ok
Dec 13 03:50:21 compute-0 ceph-mon[75071]: 6.b scrub starts
Dec 13 03:50:21 compute-0 ceph-mon[75071]: 6.b scrub ok
Dec 13 03:50:21 compute-0 ceph-mon[75071]: pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:21 compute-0 ceph-mon[75071]: 9.b scrub starts
Dec 13 03:50:22 compute-0 sudo[118335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qbqiilgvlgtnkscpykkwmejrmghfhcrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597821.6140645-145-241937280786959/AnsiballZ_stat.py'
Dec 13 03:50:22 compute-0 sudo[118335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:22 compute-0 python3.9[118337]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:22 compute-0 sudo[118335]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:22 compute-0 sudo[118413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-triqwmgccxgvwayuogzjvzacqxfabelb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597821.6140645-145-241937280786959/AnsiballZ_file.py'
Dec 13 03:50:22 compute-0 sudo[118413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:22 compute-0 python3.9[118415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:22 compute-0 sudo[118413]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:22 compute-0 ceph-mon[75071]: 9.15 scrub starts
Dec 13 03:50:22 compute-0 ceph-mon[75071]: 9.15 scrub ok
Dec 13 03:50:22 compute-0 ceph-mon[75071]: 9.b scrub ok
Dec 13 03:50:23 compute-0 sudo[118565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrotfacbeopwrzzlacphrinypeskpwkg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597822.758473-157-74051304881630/AnsiballZ_stat.py'
Dec 13 03:50:23 compute-0 sudo[118565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:23 compute-0 python3.9[118567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:23 compute-0 sudo[118565]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:23 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Dec 13 03:50:23 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Dec 13 03:50:23 compute-0 sudo[118643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxxozkgymzvflkueiaqfjwqpfpbgmnug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597822.758473-157-74051304881630/AnsiballZ_file.py'
Dec 13 03:50:23 compute-0 sudo[118643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:23 compute-0 python3.9[118645]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:23 compute-0 sudo[118643]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:24 compute-0 ceph-mon[75071]: pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:24 compute-0 sudo[118795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xatnllpuxhibwmnjukqahzbwwrqialfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597823.7841017-169-67906273924754/AnsiballZ_systemd.py'
Dec 13 03:50:24 compute-0 sudo[118795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:24 compute-0 python3.9[118797]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:50:24 compute-0 systemd[1]: Reloading.
Dec 13 03:50:24 compute-0 systemd-rc-local-generator[118823]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:50:24 compute-0 systemd-sysv-generator[118828]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:50:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:24 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Dec 13 03:50:24 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 03:50:24 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 03:50:24 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 03:50:24 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 03:50:24 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Dec 13 03:50:24 compute-0 sudo[118795]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:25 compute-0 ceph-mon[75071]: 9.14 scrub starts
Dec 13 03:50:25 compute-0 ceph-mon[75071]: 9.14 scrub ok
Dec 13 03:50:25 compute-0 ceph-mon[75071]: 9.16 scrub starts
Dec 13 03:50:25 compute-0 ceph-mon[75071]: 9.16 scrub ok
Dec 13 03:50:25 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Dec 13 03:50:25 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Dec 13 03:50:25 compute-0 python3.9[118989]: ansible-ansible.builtin.service_facts Invoked
Dec 13 03:50:25 compute-0 network[119006]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:50:25 compute-0 network[119007]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:50:25 compute-0 network[119008]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:50:26 compute-0 ceph-mon[75071]: pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:27 compute-0 ceph-mon[75071]: 9.10 scrub starts
Dec 13 03:50:27 compute-0 ceph-mon[75071]: 9.10 scrub ok
Dec 13 03:50:27 compute-0 ceph-mon[75071]: pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:27 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Dec 13 03:50:27 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Dec 13 03:50:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:28 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Dec 13 03:50:28 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Dec 13 03:50:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:29 compute-0 ceph-mon[75071]: 9.9 scrub starts
Dec 13 03:50:29 compute-0 ceph-mon[75071]: 9.9 scrub ok
Dec 13 03:50:29 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Dec 13 03:50:29 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Dec 13 03:50:29 compute-0 sudo[119268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkkwifjzipztcckfbaajsdbvlwipwpby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597829.4301183-195-48778673224203/AnsiballZ_stat.py'
Dec 13 03:50:29 compute-0 sudo[119268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:29 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.d scrub starts
Dec 13 03:50:29 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.d scrub ok
Dec 13 03:50:29 compute-0 python3.9[119270]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:29 compute-0 sudo[119268]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:30 compute-0 ceph-mon[75071]: 9.12 scrub starts
Dec 13 03:50:30 compute-0 ceph-mon[75071]: 9.12 scrub ok
Dec 13 03:50:30 compute-0 ceph-mon[75071]: pgmap v319: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:30 compute-0 sudo[119346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxzyabzczszyjsjmtqpgkyeczzebxatb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597829.4301183-195-48778673224203/AnsiballZ_file.py'
Dec 13 03:50:30 compute-0 sudo[119346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:30 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Dec 13 03:50:30 compute-0 python3.9[119348]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:30 compute-0 sudo[119346]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:30 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Dec 13 03:50:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:30 compute-0 sudo[119498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjdszxtfvufttfukrfmahvhgecmnlobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597830.5641177-208-22171098657890/AnsiballZ_file.py'
Dec 13 03:50:30 compute-0 sudo[119498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:31 compute-0 python3.9[119500]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:31 compute-0 ceph-mon[75071]: 9.2 scrub starts
Dec 13 03:50:31 compute-0 ceph-mon[75071]: 9.2 scrub ok
Dec 13 03:50:31 compute-0 ceph-mon[75071]: 9.d scrub starts
Dec 13 03:50:31 compute-0 ceph-mon[75071]: 9.d scrub ok
Dec 13 03:50:31 compute-0 sudo[119498]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:31 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.a scrub starts
Dec 13 03:50:31 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.a scrub ok
Dec 13 03:50:31 compute-0 sudo[119650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gavlemfdodiztingebblrkqsgrqructp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597831.1730762-216-217332887624543/AnsiballZ_stat.py'
Dec 13 03:50:31 compute-0 sudo[119650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:31 compute-0 python3.9[119652]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:31 compute-0 sudo[119650]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:31 compute-0 sudo[119728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-volhyozcushewuwnvziqewpwhhhkbagn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597831.1730762-216-217332887624543/AnsiballZ_file.py'
Dec 13 03:50:31 compute-0 sudo[119728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:32 compute-0 python3.9[119730]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:32 compute-0 ceph-mon[75071]: 9.0 scrub starts
Dec 13 03:50:32 compute-0 ceph-mon[75071]: 9.0 scrub ok
Dec 13 03:50:32 compute-0 ceph-mon[75071]: pgmap v320: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:32 compute-0 sudo[119728]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:32 compute-0 sudo[119880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipchtnxoreyjhzvxuuwwhntmsnoxuwwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597832.428007-231-106136813183255/AnsiballZ_timezone.py'
Dec 13 03:50:32 compute-0 sudo[119880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:32 compute-0 python3.9[119882]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec 13 03:50:33 compute-0 systemd[1]: Starting Time & Date Service...
Dec 13 03:50:33 compute-0 systemd[1]: Started Time & Date Service.
Dec 13 03:50:33 compute-0 sudo[119880]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:33 compute-0 ceph-mon[75071]: 9.a scrub starts
Dec 13 03:50:33 compute-0 ceph-mon[75071]: 9.a scrub ok
Dec 13 03:50:33 compute-0 sudo[120036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaslctghywvcczmilogqdfslegwjoscj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597833.3512945-240-75290453614913/AnsiballZ_file.py'
Dec 13 03:50:33 compute-0 sudo[120036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:33 compute-0 python3.9[120038]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:33 compute-0 sudo[120036]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:34 compute-0 ceph-mon[75071]: pgmap v321: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:34 compute-0 sudo[120188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdsiuvxvfzthosfqbrmwzankkmcgbeev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597833.9910827-248-242397404333426/AnsiballZ_stat.py'
Dec 13 03:50:34 compute-0 sudo[120188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:34 compute-0 python3.9[120190]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:34 compute-0 sudo[120188]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:34 compute-0 sudo[120266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slfanihilhguchlbrmsbskouvrikxftu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597833.9910827-248-242397404333426/AnsiballZ_file.py'
Dec 13 03:50:34 compute-0 sudo[120266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:34 compute-0 python3.9[120268]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:34 compute-0 sudo[120266]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:34 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Dec 13 03:50:34 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Dec 13 03:50:35 compute-0 sudo[120418]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puekdyhyjkdcbfxjtnwfeiywcpmrgctp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597835.0146527-260-246651652462342/AnsiballZ_stat.py'
Dec 13 03:50:35 compute-0 sudo[120418]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:35 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Dec 13 03:50:35 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Dec 13 03:50:35 compute-0 python3.9[120420]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:35 compute-0 sudo[120418]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:35 compute-0 sudo[120496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdelrygatokoibkuvsbxdukbemsoaiux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597835.0146527-260-246651652462342/AnsiballZ_file.py'
Dec 13 03:50:35 compute-0 sudo[120496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:36 compute-0 python3.9[120498]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.s_5jubtk recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:36 compute-0 sudo[120496]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:36 compute-0 ceph-mon[75071]: pgmap v322: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:36 compute-0 ceph-mon[75071]: 9.1 scrub starts
Dec 13 03:50:36 compute-0 ceph-mon[75071]: 9.1 scrub ok
Dec 13 03:50:36 compute-0 sudo[120648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkluryykiybpjcpxcmhgwlymzizxslxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597836.2130759-272-185908733394021/AnsiballZ_stat.py'
Dec 13 03:50:36 compute-0 sudo[120648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:36 compute-0 python3.9[120650]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:36 compute-0 sudo[120648]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:36 compute-0 sudo[120726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfeuednnddatrkkgxmnwfywtoxvwelxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597836.2130759-272-185908733394021/AnsiballZ_file.py'
Dec 13 03:50:36 compute-0 sudo[120726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:37 compute-0 python3.9[120728]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:37 compute-0 sudo[120726]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:37 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Dec 13 03:50:37 compute-0 ceph-mon[75071]: 9.4 scrub starts
Dec 13 03:50:37 compute-0 ceph-mon[75071]: 9.4 scrub ok
Dec 13 03:50:37 compute-0 ceph-mon[75071]: pgmap v323: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:37 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Dec 13 03:50:37 compute-0 sudo[120878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wowsdhmwbzjtcucdpnydzmbcavassidf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597837.2690573-285-278432087788209/AnsiballZ_command.py'
Dec 13 03:50:37 compute-0 sudo[120878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:37 compute-0 python3.9[120880]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:50:37 compute-0 sudo[120878]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:37 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Dec 13 03:50:38 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Dec 13 03:50:38 compute-0 ceph-mon[75071]: 9.1a scrub starts
Dec 13 03:50:38 compute-0 ceph-mon[75071]: 9.1a scrub ok
Dec 13 03:50:38 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Dec 13 03:50:38 compute-0 ceph-osd[86683]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Dec 13 03:50:38 compute-0 sudo[121031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlodljivafdzdyghddxnjfgluhvhwukd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765597838.0445795-293-74251717786275/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 03:50:38 compute-0 sudo[121031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:38 compute-0 python3[121033]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 03:50:38 compute-0 sudo[121031]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:39 compute-0 sudo[121183]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxjckczosomjirvpfuhznpxzmntdseja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597838.8386874-301-137764452660603/AnsiballZ_stat.py'
Dec 13 03:50:39 compute-0 sudo[121183]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:39 compute-0 python3.9[121185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:39 compute-0 sudo[121183]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:39 compute-0 ceph-mon[75071]: 9.3 scrub starts
Dec 13 03:50:39 compute-0 ceph-mon[75071]: 9.3 scrub ok
Dec 13 03:50:39 compute-0 ceph-mon[75071]: 9.1f scrub starts
Dec 13 03:50:39 compute-0 ceph-mon[75071]: 9.1f scrub ok
Dec 13 03:50:39 compute-0 ceph-mon[75071]: pgmap v324: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:39 compute-0 sudo[121261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osyfcvnuqswvfcooshfqrwbhzqsnvhpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597838.8386874-301-137764452660603/AnsiballZ_file.py'
Dec 13 03:50:39 compute-0 sudo[121261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:39 compute-0 python3.9[121263]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:39 compute-0 sudo[121261]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:40 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Dec 13 03:50:40 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Dec 13 03:50:40 compute-0 sudo[121413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhgnsqlwnfaomubikqtsebjtlmnsorbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597839.9675605-313-64635311416253/AnsiballZ_stat.py'
Dec 13 03:50:40 compute-0 sudo[121413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:50:40
Dec 13 03:50:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:50:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:50:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'images', 'default.rgw.meta', 'backups', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root']
Dec 13 03:50:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:50:40 compute-0 python3.9[121415]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:40 compute-0 sudo[121413]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:40 compute-0 sudo[121491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xazyzclzatablikxslhegxabnymopmcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597839.9675605-313-64635311416253/AnsiballZ_file.py'
Dec 13 03:50:40 compute-0 sudo[121491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:41 compute-0 python3.9[121493]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:41 compute-0 sudo[121491]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:41 compute-0 sudo[121643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtzkretotjtcjsdvkcnnvkhirzoyfwcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597841.1997252-325-155055122143757/AnsiballZ_stat.py'
Dec 13 03:50:41 compute-0 sudo[121643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:41 compute-0 python3.9[121645]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:41 compute-0 sudo[121643]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:41 compute-0 sudo[121721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkcuzxekdwftglhlludboolungccteyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597841.1997252-325-155055122143757/AnsiballZ_file.py'
Dec 13 03:50:41 compute-0 sudo[121721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:42 compute-0 python3.9[121723]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:42 compute-0 sudo[121721]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:50:42 compute-0 ceph-mon[75071]: 9.1d scrub starts
Dec 13 03:50:42 compute-0 ceph-mon[75071]: 9.1d scrub ok
Dec 13 03:50:42 compute-0 ceph-mon[75071]: pgmap v325: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:50:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:42 compute-0 sudo[121873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gipbxguuelcnfgmiwdduxksctdmwieyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597842.3452406-337-255040786439423/AnsiballZ_stat.py'
Dec 13 03:50:42 compute-0 sudo[121873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:42 compute-0 python3.9[121875]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:42 compute-0 sudo[121873]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:43 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Dec 13 03:50:43 compute-0 sudo[121951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-joygiqrgagtisgvuacrfguiwuxqrxujw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597842.3452406-337-255040786439423/AnsiballZ_file.py'
Dec 13 03:50:43 compute-0 sudo[121951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:43 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Dec 13 03:50:43 compute-0 python3.9[121953]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:43 compute-0 sudo[121951]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:43 compute-0 ceph-mon[75071]: pgmap v326: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:43 compute-0 sudo[122103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuzupqipidmvmbsmlwsjpehgzoatevyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597843.4759524-349-273423795296090/AnsiballZ_stat.py'
Dec 13 03:50:43 compute-0 sudo[122103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:44 compute-0 python3.9[122105]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:44 compute-0 sudo[122103]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:44 compute-0 sudo[122181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iurjbmbwkkdxtojjqbbcllomovoawmwq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597843.4759524-349-273423795296090/AnsiballZ_file.py'
Dec 13 03:50:44 compute-0 sudo[122181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:44 compute-0 ceph-mon[75071]: 9.1c scrub starts
Dec 13 03:50:44 compute-0 ceph-mon[75071]: 9.1c scrub ok
Dec 13 03:50:44 compute-0 python3.9[122183]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:44 compute-0 sudo[122181]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:45 compute-0 sudo[122333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytijaqinlhcwjfyuridchnvdfcxeiazm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597844.7696989-362-20654222508593/AnsiballZ_command.py'
Dec 13 03:50:45 compute-0 sudo[122333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:45 compute-0 python3.9[122335]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:50:45 compute-0 sudo[122333]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:45 compute-0 ceph-mon[75071]: pgmap v327: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:45 compute-0 sudo[122488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhyhrkjjuhggrjffwdhjzhnhmnlxfxfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597845.453923-370-163850018650095/AnsiballZ_blockinfile.py'
Dec 13 03:50:45 compute-0 sudo[122488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:46 compute-0 python3.9[122490]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:46 compute-0 sudo[122488]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:46 compute-0 sudo[122640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shiveyjjdrkkwsbsowrzjrmwigfdckhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597846.3783362-379-24692738510618/AnsiballZ_file.py'
Dec 13 03:50:46 compute-0 sudo[122640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:46 compute-0 python3.9[122642]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:46 compute-0 sudo[122640]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:47 compute-0 sudo[122792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbfvgmryapruymqakyfufkvfimgtbosn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597847.0379243-379-262632825208802/AnsiballZ_file.py'
Dec 13 03:50:47 compute-0 sudo[122792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:47 compute-0 ceph-mon[75071]: pgmap v328: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:47 compute-0 python3.9[122794]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:47 compute-0 sudo[122792]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:48 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Dec 13 03:50:48 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Dec 13 03:50:48 compute-0 sudo[122944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctxhkfqaulzhaketpwniwoacffhippbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597847.831099-394-265219475406866/AnsiballZ_mount.py'
Dec 13 03:50:48 compute-0 sudo[122944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:48 compute-0 python3.9[122946]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 03:50:48 compute-0 sudo[122944]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:49 compute-0 sudo[123096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzvclkgexztswpanbotqmaavuksdbhom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597848.7088726-394-168319583044764/AnsiballZ_mount.py'
Dec 13 03:50:49 compute-0 sudo[123096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:49 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Dec 13 03:50:49 compute-0 ceph-osd[85653]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Dec 13 03:50:49 compute-0 python3.9[123098]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec 13 03:50:49 compute-0 sudo[123096]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:49 compute-0 ceph-mon[75071]: 9.1b scrub starts
Dec 13 03:50:49 compute-0 ceph-mon[75071]: 9.1b scrub ok
Dec 13 03:50:49 compute-0 sshd-session[115983]: Connection closed by 192.168.122.30 port 39912
Dec 13 03:50:49 compute-0 sshd-session[115980]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:50:49 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Dec 13 03:50:49 compute-0 systemd[1]: session-40.scope: Consumed 28.592s CPU time.
Dec 13 03:50:49 compute-0 systemd-logind[796]: Session 40 logged out. Waiting for processes to exit.
Dec 13 03:50:49 compute-0 systemd-logind[796]: Removed session 40.
Dec 13 03:50:50 compute-0 ceph-mon[75071]: pgmap v329: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:50 compute-0 ceph-mon[75071]: 9.1e scrub starts
Dec 13 03:50:50 compute-0 ceph-mon[75071]: 9.1e scrub ok
Dec 13 03:50:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:51 compute-0 ceph-mon[75071]: pgmap v330: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:50:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:53 compute-0 ceph-mon[75071]: pgmap v331: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:54 compute-0 sshd-session[123123]: Accepted publickey for zuul from 192.168.122.30 port 32800 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:50:54 compute-0 systemd-logind[796]: New session 41 of user zuul.
Dec 13 03:50:54 compute-0 systemd[1]: Started Session 41 of User zuul.
Dec 13 03:50:54 compute-0 sshd-session[123123]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:50:55 compute-0 sudo[123276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fosssfjvhcrcqiqvcclllbufhuycojor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597854.7765806-16-182708832729501/AnsiballZ_tempfile.py'
Dec 13 03:50:55 compute-0 sudo[123276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:55 compute-0 python3.9[123278]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec 13 03:50:55 compute-0 sudo[123276]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:55 compute-0 ceph-mon[75071]: pgmap v332: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:55 compute-0 sudo[123428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbuqkbowwqrxhdqpvwfidiylxqftlkbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597855.5637238-28-273007434974296/AnsiballZ_stat.py'
Dec 13 03:50:55 compute-0 sudo[123428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:56 compute-0 python3.9[123430]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:50:56 compute-0 sudo[123428]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:56 compute-0 sudo[123582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdzjordmrrgvppevtotzgevnysypkfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597856.359827-36-210951831857848/AnsiballZ_slurp.py'
Dec 13 03:50:56 compute-0 sudo[123582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:56 compute-0 python3.9[123584]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Dec 13 03:50:56 compute-0 sudo[123582]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:57 compute-0 sudo[123734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umcofpbjxnphgzyqcbxfklzzvwidszlj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597857.0972714-44-151352556261948/AnsiballZ_stat.py'
Dec 13 03:50:57 compute-0 sudo[123734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:57 compute-0 python3.9[123736]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.vb4q36wm follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:50:57 compute-0 sudo[123734]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:57 compute-0 ceph-mon[75071]: pgmap v333: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:50:58 compute-0 sudo[123859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtrviyuellysemvhwvzhpegxgpdjsswk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597857.0972714-44-151352556261948/AnsiballZ_copy.py'
Dec 13 03:50:58 compute-0 sudo[123859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:58 compute-0 python3.9[123861]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.vb4q36wm mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597857.0972714-44-151352556261948/.source.vb4q36wm _original_basename=.8iqax_vh follow=False checksum=d57b5d11f7aa6eb0512dcf06a4454db2ef400f6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:50:58 compute-0 sudo[123859]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:58 compute-0 sudo[124011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmoqrjsyzrsggpcxskpymaajwrbuizps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597858.394247-59-227445163274984/AnsiballZ_setup.py'
Dec 13 03:50:58 compute-0 sudo[124011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:50:59 compute-0 python3.9[124013]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:50:59 compute-0 sudo[124011]: pam_unix(sudo:session): session closed for user root
Dec 13 03:50:59 compute-0 ceph-mon[75071]: pgmap v334: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:50:59 compute-0 sudo[124163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzcvsdygfejtlcflignhvodjnwedszes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597859.464132-68-276669935573779/AnsiballZ_blockinfile.py'
Dec 13 03:50:59 compute-0 sudo[124163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:00 compute-0 python3.9[124165]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDM/+zIL2NxrneOl4Ijgq1+mjiuRtRxLtOkkDku84YD4MtT5b3Hv5A+8RLBrh4F6PSOlFLMaMJbg010cXPCia2cNL3B+w2lb9EpFUPmqo6cO/RdnYjC0YotqgtrcZGAIyZObG86oA2gAEQ+edZkVVp/nvGm3bPzxWvDHGwFNwtWysCsVfc2u/Ao1VjOGyOXGP450w5o4x9hvpuD6vd1RGLXZsEAB9iaxHFgK4lCHChwRWO6VEE55cKPWu5YjR/N/dJvYWLfVbSoC5PtGtR1wjnY+aO6DlyGCH8jqTzF3h40fxxrAu+sfgylKKH0sOvkugKQzuldVD/Q3mL4XQyLi/EhlvyMHqUT2xr0aiVQRWb6McFHFWo1ruUymYJPXl48xm7xCXHtjWajMoO0g0gRIFHeHRjmYQs/itOmfNlBOvZiYo9XTT40rvjzmvQvVUCRJ8Fq+YSWjD7kq9XHwwloPIltStmIYYpicOD3OVaQpChBpQGX6aBm4CkxT9r0ayHsQFU=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGd4yz03E7KSx67rSJ86GvOZAiazoRraK5NP1md10Q9d
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO/gGRW3gLjCVSzpBzXp92wVVIBeqLmRu0H1xxYCUcL6WRbi/C7ipdRUo9/dUYAhMEzG1NJxKRcw2OgECOr1/mc=
                                              create=True mode=0644 path=/tmp/ansible.vb4q36wm state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:00 compute-0 sudo[124163]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:00 compute-0 sudo[124315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zamgntxwkqenlyrthvvlikngofxarkwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597860.2053173-76-166005506030162/AnsiballZ_command.py'
Dec 13 03:51:00 compute-0 sudo[124315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:00 compute-0 python3.9[124317]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.vb4q36wm' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:51:00 compute-0 sudo[124315]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:01 compute-0 sudo[124469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgknicxikogjwmcwtjjnkpawjoncyjxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597860.9984794-84-140151465118562/AnsiballZ_file.py'
Dec 13 03:51:01 compute-0 sudo[124469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:01 compute-0 python3.9[124471]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.vb4q36wm state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:01 compute-0 sudo[124469]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:01 compute-0 ceph-mon[75071]: pgmap v335: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:01 compute-0 sshd-session[123126]: Connection closed by 192.168.122.30 port 32800
Dec 13 03:51:01 compute-0 sshd-session[123123]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:51:01 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Dec 13 03:51:01 compute-0 systemd[1]: session-41.scope: Consumed 4.787s CPU time.
Dec 13 03:51:01 compute-0 systemd-logind[796]: Session 41 logged out. Waiting for processes to exit.
Dec 13 03:51:01 compute-0 systemd-logind[796]: Removed session 41.
Dec 13 03:51:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:03 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec 13 03:51:03 compute-0 ceph-mon[75071]: pgmap v336: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:05 compute-0 ceph-mon[75071]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:06 compute-0 sshd-session[124498]: Accepted publickey for zuul from 192.168.122.30 port 34332 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:51:06 compute-0 systemd-logind[796]: New session 42 of user zuul.
Dec 13 03:51:06 compute-0 systemd[1]: Started Session 42 of User zuul.
Dec 13 03:51:06 compute-0 sshd-session[124498]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:51:07 compute-0 python3.9[124651]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:51:07 compute-0 ceph-mon[75071]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:08 compute-0 sudo[124805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqpqxmwlnfornpsrsdxusjwhifoumnrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597868.003001-32-139536078901778/AnsiballZ_systemd.py'
Dec 13 03:51:08 compute-0 sudo[124805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:08 compute-0 python3.9[124807]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 03:51:08 compute-0 sudo[124805]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:09 compute-0 sudo[124959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bscbskdvztibihvwwxrirkpkjhpjvqog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597869.0954847-40-160451775903362/AnsiballZ_systemd.py'
Dec 13 03:51:09 compute-0 sudo[124959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:09 compute-0 python3.9[124961]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:51:09 compute-0 sudo[124962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:51:09 compute-0 sudo[124962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:09 compute-0 sudo[124962]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:09 compute-0 sudo[124959]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:09 compute-0 sudo[124988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:51:09 compute-0 sudo[124988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:10 compute-0 sudo[124988]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:51:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:51:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:51:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:51:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:51:10 compute-0 sudo[125193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnsraisuoipccmaztfszxzupwaxpsljj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597869.8454943-49-99022310108150/AnsiballZ_command.py'
Dec 13 03:51:10 compute-0 sudo[125193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:10 compute-0 ceph-mon[75071]: pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:10 compute-0 python3.9[125195]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:51:10 compute-0 sudo[125193]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:11 compute-0 sudo[125346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dergzomoihupqebmuaagkbbcqzwedmkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597870.6376522-57-175558923204971/AnsiballZ_stat.py'
Dec 13 03:51:11 compute-0 sudo[125346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:11 compute-0 python3.9[125348]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:51:11 compute-0 sudo[125346]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:11 compute-0 sudo[125498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvfgkqmwbefskrtkbqlyehjnsxobgjoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597871.4173727-66-137572191426233/AnsiballZ_file.py'
Dec 13 03:51:11 compute-0 sudo[125498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:11 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:51:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:51:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:51:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:51:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:51:11 compute-0 ceph-mon[75071]: pgmap v340: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:51:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:51:11 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:51:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:51:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:51:11 compute-0 sudo[125501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:51:11 compute-0 sudo[125501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:11 compute-0 sudo[125501]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:12 compute-0 sudo[125526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:51:12 compute-0 sudo[125526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:12 compute-0 python3.9[125500]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:12 compute-0 sudo[125498]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:12 compute-0 podman[125587]: 2025-12-13 03:51:12.382080788 +0000 UTC m=+0.046330664 container create d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:51:12 compute-0 systemd[1]: Started libpod-conmon-d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227.scope.
Dec 13 03:51:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:51:12 compute-0 podman[125587]: 2025-12-13 03:51:12.359410969 +0000 UTC m=+0.023660865 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:12 compute-0 sshd-session[124501]: Connection closed by 192.168.122.30 port 34332
Dec 13 03:51:12 compute-0 sshd-session[124498]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:51:12 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Dec 13 03:51:12 compute-0 systemd[1]: session-42.scope: Consumed 3.751s CPU time.
Dec 13 03:51:12 compute-0 systemd-logind[796]: Session 42 logged out. Waiting for processes to exit.
Dec 13 03:51:12 compute-0 podman[125587]: 2025-12-13 03:51:12.469206091 +0000 UTC m=+0.133455987 container init d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:51:12 compute-0 systemd-logind[796]: Removed session 42.
Dec 13 03:51:12 compute-0 podman[125587]: 2025-12-13 03:51:12.477567274 +0000 UTC m=+0.141817150 container start d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 03:51:12 compute-0 podman[125587]: 2025-12-13 03:51:12.482086199 +0000 UTC m=+0.146336075 container attach d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:12 compute-0 priceless_lamport[125603]: 167 167
Dec 13 03:51:12 compute-0 systemd[1]: libpod-d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227.scope: Deactivated successfully.
Dec 13 03:51:12 compute-0 podman[125587]: 2025-12-13 03:51:12.485251859 +0000 UTC m=+0.149501735 container died d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:51:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-29c41a24525fc614b6aec7f5441b91b85f31b8ab9bfe8ab484b45540629dbf4c-merged.mount: Deactivated successfully.
Dec 13 03:51:12 compute-0 podman[125587]: 2025-12-13 03:51:12.532826063 +0000 UTC m=+0.197075939 container remove d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lamport, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:51:12 compute-0 systemd[1]: libpod-conmon-d3855e5d7cfd6d44891e58b64c65074184a12e531abf723b9533e176b7e0f227.scope: Deactivated successfully.
Dec 13 03:51:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:12 compute-0 podman[125627]: 2025-12-13 03:51:12.690246341 +0000 UTC m=+0.025071731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:12 compute-0 podman[125627]: 2025-12-13 03:51:12.83951511 +0000 UTC m=+0.174340470 container create f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:51:12 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:51:12 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:51:12 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:51:12 compute-0 systemd[1]: Started libpod-conmon-f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1.scope.
Dec 13 03:51:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7955c227f28ebf114cbabe72aac1aaa0883d2996c43994c6e4ef62d194787e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7955c227f28ebf114cbabe72aac1aaa0883d2996c43994c6e4ef62d194787e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7955c227f28ebf114cbabe72aac1aaa0883d2996c43994c6e4ef62d194787e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7955c227f28ebf114cbabe72aac1aaa0883d2996c43994c6e4ef62d194787e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff7955c227f28ebf114cbabe72aac1aaa0883d2996c43994c6e4ef62d194787e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:13 compute-0 podman[125627]: 2025-12-13 03:51:13.005218678 +0000 UTC m=+0.340044048 container init f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:13 compute-0 podman[125627]: 2025-12-13 03:51:13.013862338 +0000 UTC m=+0.348687698 container start f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:13 compute-0 podman[125627]: 2025-12-13 03:51:13.017160782 +0000 UTC m=+0.351986162 container attach f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:51:13 compute-0 loving_curie[125644]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:51:13 compute-0 loving_curie[125644]: --> All data devices are unavailable
Dec 13 03:51:13 compute-0 systemd[1]: libpod-f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1.scope: Deactivated successfully.
Dec 13 03:51:13 compute-0 podman[125627]: 2025-12-13 03:51:13.526694174 +0000 UTC m=+0.861519544 container died f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:51:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff7955c227f28ebf114cbabe72aac1aaa0883d2996c43994c6e4ef62d194787e-merged.mount: Deactivated successfully.
Dec 13 03:51:13 compute-0 podman[125627]: 2025-12-13 03:51:13.602074108 +0000 UTC m=+0.936899468 container remove f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_curie, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:51:13 compute-0 systemd[1]: libpod-conmon-f7242f9b9932a5991305a5d84c1fdfd11d6760a65b262f2864b5fc5ba17274a1.scope: Deactivated successfully.
Dec 13 03:51:13 compute-0 sudo[125526]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:13 compute-0 sudo[125676]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:51:13 compute-0 sudo[125676]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:13 compute-0 sudo[125676]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:13 compute-0 sudo[125701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:51:13 compute-0 sudo[125701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:13 compute-0 ceph-mon[75071]: pgmap v341: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:14 compute-0 podman[125737]: 2025-12-13 03:51:14.093831776 +0000 UTC m=+0.063725087 container create 313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_rubin, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:51:14 compute-0 podman[125737]: 2025-12-13 03:51:14.057660403 +0000 UTC m=+0.027553744 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:14 compute-0 systemd[1]: Started libpod-conmon-313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae.scope.
Dec 13 03:51:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:51:14 compute-0 podman[125737]: 2025-12-13 03:51:14.272780793 +0000 UTC m=+0.242674144 container init 313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:51:14 compute-0 podman[125737]: 2025-12-13 03:51:14.279599826 +0000 UTC m=+0.249493137 container start 313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_rubin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 03:51:14 compute-0 kind_rubin[125753]: 167 167
Dec 13 03:51:14 compute-0 systemd[1]: libpod-313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae.scope: Deactivated successfully.
Dec 13 03:51:14 compute-0 podman[125737]: 2025-12-13 03:51:14.354487237 +0000 UTC m=+0.324380578 container attach 313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_rubin, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:51:14 compute-0 podman[125737]: 2025-12-13 03:51:14.35499502 +0000 UTC m=+0.324888341 container died 313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_rubin, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 03:51:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-b01c9b71bb695bb118644983649db5ff39ef7a770dd2edb4719d4fea28b5d658-merged.mount: Deactivated successfully.
Dec 13 03:51:14 compute-0 podman[125737]: 2025-12-13 03:51:14.419529067 +0000 UTC m=+0.389422388 container remove 313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_rubin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:51:14 compute-0 systemd[1]: libpod-conmon-313e1fa5048c1414c17bff32e1f6374ce2a5c4b7c859b8b3cb84c7efefe6b9ae.scope: Deactivated successfully.
Dec 13 03:51:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:14 compute-0 podman[125779]: 2025-12-13 03:51:14.574596673 +0000 UTC m=+0.044808344 container create 52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hamilton, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:14 compute-0 podman[125779]: 2025-12-13 03:51:14.555398603 +0000 UTC m=+0.025610304 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:14 compute-0 systemd[1]: Started libpod-conmon-52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e.scope.
Dec 13 03:51:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b6ee6cfe37ee7585ca424a2efc4dc0239118df6b7a055502b7c1077b85a70c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b6ee6cfe37ee7585ca424a2efc4dc0239118df6b7a055502b7c1077b85a70c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b6ee6cfe37ee7585ca424a2efc4dc0239118df6b7a055502b7c1077b85a70c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08b6ee6cfe37ee7585ca424a2efc4dc0239118df6b7a055502b7c1077b85a70c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:14 compute-0 podman[125779]: 2025-12-13 03:51:14.727813473 +0000 UTC m=+0.198025174 container init 52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:14 compute-0 podman[125779]: 2025-12-13 03:51:14.736173076 +0000 UTC m=+0.206384747 container start 52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:51:14 compute-0 podman[125779]: 2025-12-13 03:51:14.84840735 +0000 UTC m=+0.318619031 container attach 52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]: {
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:     "0": [
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:         {
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "devices": [
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "/dev/loop3"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             ],
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_name": "ceph_lv0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_size": "21470642176",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "name": "ceph_lv0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "tags": {
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cluster_name": "ceph",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.crush_device_class": "",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.encrypted": "0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.objectstore": "bluestore",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osd_id": "0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.type": "block",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.vdo": "0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.with_tpm": "0"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             },
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "type": "block",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "vg_name": "ceph_vg0"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:         }
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:     ],
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:     "1": [
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:         {
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "devices": [
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "/dev/loop4"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             ],
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_name": "ceph_lv1",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_size": "21470642176",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "name": "ceph_lv1",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "tags": {
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cluster_name": "ceph",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.crush_device_class": "",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.encrypted": "0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.objectstore": "bluestore",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osd_id": "1",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.type": "block",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.vdo": "0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.with_tpm": "0"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             },
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "type": "block",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "vg_name": "ceph_vg1"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:         }
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:     ],
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:     "2": [
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:         {
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "devices": [
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "/dev/loop5"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             ],
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_name": "ceph_lv2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_size": "21470642176",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "name": "ceph_lv2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "tags": {
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.cluster_name": "ceph",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.crush_device_class": "",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.encrypted": "0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.objectstore": "bluestore",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osd_id": "2",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.type": "block",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.vdo": "0",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:                 "ceph.with_tpm": "0"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             },
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "type": "block",
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:             "vg_name": "ceph_vg2"
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:         }
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]:     ]
Dec 13 03:51:15 compute-0 goofy_hamilton[125795]: }
Dec 13 03:51:15 compute-0 systemd[1]: libpod-52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e.scope: Deactivated successfully.
Dec 13 03:51:15 compute-0 podman[125779]: 2025-12-13 03:51:15.045365216 +0000 UTC m=+0.515576877 container died 52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hamilton, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-08b6ee6cfe37ee7585ca424a2efc4dc0239118df6b7a055502b7c1077b85a70c-merged.mount: Deactivated successfully.
Dec 13 03:51:15 compute-0 podman[125779]: 2025-12-13 03:51:15.088161098 +0000 UTC m=+0.558372769 container remove 52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:51:15 compute-0 systemd[1]: libpod-conmon-52dff0ff1b26896b43d45a1da0f029f0ac8d45ea0572c442f83f1d87ecef300e.scope: Deactivated successfully.
Dec 13 03:51:15 compute-0 sudo[125701]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:15 compute-0 sudo[125815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:51:15 compute-0 sudo[125815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:15 compute-0 sudo[125815]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:15 compute-0 sudo[125840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:51:15 compute-0 sudo[125840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:15 compute-0 podman[125876]: 2025-12-13 03:51:15.543327432 +0000 UTC m=+0.038675108 container create c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:51:15 compute-0 systemd[1]: Started libpod-conmon-c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2.scope.
Dec 13 03:51:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:51:15 compute-0 podman[125876]: 2025-12-13 03:51:15.525880137 +0000 UTC m=+0.021227813 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:15 compute-0 podman[125876]: 2025-12-13 03:51:15.633753779 +0000 UTC m=+0.129101455 container init c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hellman, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:15 compute-0 podman[125876]: 2025-12-13 03:51:15.640672917 +0000 UTC m=+0.136020593 container start c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hellman, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:51:15 compute-0 systemd[1]: libpod-c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2.scope: Deactivated successfully.
Dec 13 03:51:15 compute-0 distracted_hellman[125892]: 167 167
Dec 13 03:51:15 compute-0 podman[125876]: 2025-12-13 03:51:15.647196883 +0000 UTC m=+0.142544569 container attach c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:51:15 compute-0 podman[125876]: 2025-12-13 03:51:15.647742306 +0000 UTC m=+0.143089982 container died c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:51:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-73fd9eecd420f2971d319b771a661f301b27a06710b6a92fb49403e72f436025-merged.mount: Deactivated successfully.
Dec 13 03:51:15 compute-0 podman[125876]: 2025-12-13 03:51:15.715802924 +0000 UTC m=+0.211150610 container remove c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_hellman, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:51:15 compute-0 systemd[1]: libpod-conmon-c8b95838156eb1e619de8b702f6cfbb5b003b937e648eab10ce6977e4adec6e2.scope: Deactivated successfully.
Dec 13 03:51:15 compute-0 podman[125919]: 2025-12-13 03:51:15.870978963 +0000 UTC m=+0.037987931 container create 9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:51:15 compute-0 systemd[1]: Started libpod-conmon-9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78.scope.
Dec 13 03:51:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d3cf616a14e9c5ca495fbdafc728e3447a84f14d2ea836fc16a448749b16d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d3cf616a14e9c5ca495fbdafc728e3447a84f14d2ea836fc16a448749b16d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:15 compute-0 podman[125919]: 2025-12-13 03:51:15.852863541 +0000 UTC m=+0.019872539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d3cf616a14e9c5ca495fbdafc728e3447a84f14d2ea836fc16a448749b16d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d3cf616a14e9c5ca495fbdafc728e3447a84f14d2ea836fc16a448749b16d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:51:15 compute-0 podman[125919]: 2025-12-13 03:51:15.967971178 +0000 UTC m=+0.134980166 container init 9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_franklin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:51:15 compute-0 ceph-mon[75071]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:15 compute-0 podman[125919]: 2025-12-13 03:51:15.975973223 +0000 UTC m=+0.142982211 container start 9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:51:15 compute-0 podman[125919]: 2025-12-13 03:51:15.979905173 +0000 UTC m=+0.146914171 container attach 9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:51:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:16 compute-0 lvm[126015]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:51:16 compute-0 lvm[126016]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:51:16 compute-0 lvm[126016]: VG ceph_vg1 finished
Dec 13 03:51:16 compute-0 lvm[126015]: VG ceph_vg0 finished
Dec 13 03:51:16 compute-0 lvm[126018]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:51:16 compute-0 lvm[126018]: VG ceph_vg2 finished
Dec 13 03:51:16 compute-0 youthful_franklin[125936]: {}
Dec 13 03:51:16 compute-0 systemd[1]: libpod-9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78.scope: Deactivated successfully.
Dec 13 03:51:16 compute-0 systemd[1]: libpod-9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78.scope: Consumed 1.492s CPU time.
Dec 13 03:51:16 compute-0 podman[125919]: 2025-12-13 03:51:16.926761323 +0000 UTC m=+1.093770301 container died 9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_franklin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:51:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-97d3cf616a14e9c5ca495fbdafc728e3447a84f14d2ea836fc16a448749b16d7-merged.mount: Deactivated successfully.
Dec 13 03:51:16 compute-0 podman[125919]: 2025-12-13 03:51:16.970409387 +0000 UTC m=+1.137418355 container remove 9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_franklin, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:51:16 compute-0 systemd[1]: libpod-conmon-9ae4ac10d0bdafdfe50988b4b5ccb45a40a9e77180ed2341a956b582dd2ffa78.scope: Deactivated successfully.
Dec 13 03:51:17 compute-0 sudo[125840]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:51:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:51:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:51:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:51:17 compute-0 sudo[126033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:51:17 compute-0 sudo[126033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:51:17 compute-0 sudo[126033]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:17 compute-0 sshd-session[126058]: Accepted publickey for zuul from 192.168.122.30 port 48336 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:51:17 compute-0 systemd-logind[796]: New session 43 of user zuul.
Dec 13 03:51:17 compute-0 systemd[1]: Started Session 43 of User zuul.
Dec 13 03:51:17 compute-0 sshd-session[126058]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:51:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:18 compute-0 ceph-mon[75071]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:51:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:51:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:18 compute-0 python3.9[126211]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:51:19 compute-0 sudo[126365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdtrxaalwymborseyepyvckjnhbwdxtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597879.1622198-34-198958736451155/AnsiballZ_setup.py'
Dec 13 03:51:19 compute-0 sudo[126365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:19 compute-0 python3.9[126367]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:51:20 compute-0 sudo[126365]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:20 compute-0 ceph-mon[75071]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:20 compute-0 sudo[126449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfvekstnivhnaxcbqtszebtrmdegysnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597879.1622198-34-198958736451155/AnsiballZ_dnf.py'
Dec 13 03:51:20 compute-0 sudo[126449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:20 compute-0 python3.9[126451]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec 13 03:51:22 compute-0 ceph-mon[75071]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:22 compute-0 sudo[126449]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:22 compute-0 python3.9[126602]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:51:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:24 compute-0 ceph-mon[75071]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:24 compute-0 python3.9[126753]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 03:51:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:25 compute-0 python3.9[126903]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:51:25 compute-0 python3.9[127053]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:51:26 compute-0 ceph-mon[75071]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:26 compute-0 sshd-session[126061]: Connection closed by 192.168.122.30 port 48336
Dec 13 03:51:26 compute-0 sshd-session[126058]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:51:26 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Dec 13 03:51:26 compute-0 systemd[1]: session-43.scope: Consumed 6.194s CPU time.
Dec 13 03:51:26 compute-0 systemd-logind[796]: Session 43 logged out. Waiting for processes to exit.
Dec 13 03:51:26 compute-0 systemd-logind[796]: Removed session 43.
Dec 13 03:51:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:28 compute-0 ceph-mon[75071]: pgmap v348: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:30 compute-0 ceph-mon[75071]: pgmap v349: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:30 compute-0 sshd-session[127078]: Accepted publickey for zuul from 192.168.122.30 port 59468 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:51:30 compute-0 systemd-logind[796]: New session 44 of user zuul.
Dec 13 03:51:30 compute-0 systemd[1]: Started Session 44 of User zuul.
Dec 13 03:51:30 compute-0 sshd-session[127078]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:51:31 compute-0 python3.9[127231]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:51:32 compute-0 ceph-mon[75071]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:33 compute-0 sudo[127385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxliquztihowkthbeotgaxmuxuqmzahd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597893.0866802-50-220074403531946/AnsiballZ_file.py'
Dec 13 03:51:33 compute-0 sudo[127385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:33 compute-0 python3.9[127387]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:33 compute-0 sudo[127385]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:34 compute-0 sudo[127537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msrwtfxpouycaupxrzztblqlcwcpvflp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597893.8448007-50-193045863428282/AnsiballZ_file.py'
Dec 13 03:51:34 compute-0 sudo[127537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:34 compute-0 ceph-mon[75071]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:34 compute-0 python3.9[127539]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:34 compute-0 sudo[127537]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:34 compute-0 sudo[127689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lanjdjuqcpmoplirqqungkjvnzsebowv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597894.5479076-65-60705725688324/AnsiballZ_stat.py'
Dec 13 03:51:34 compute-0 sudo[127689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:35 compute-0 python3.9[127691]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:35 compute-0 sudo[127689]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:35 compute-0 sudo[127812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dynqrmzymeiyruvlkraouykmblnpegbu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597894.5479076-65-60705725688324/AnsiballZ_copy.py'
Dec 13 03:51:35 compute-0 sudo[127812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:35 compute-0 python3.9[127814]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597894.5479076-65-60705725688324/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b6e2939425e7e1eb6fba824bf1eec762ed0ec21f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:36 compute-0 sudo[127812]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:36 compute-0 ceph-mon[75071]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:36 compute-0 sudo[127964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqzqvqznmahrgatcjpclgnbjppcafyqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597896.1549113-65-203790564257888/AnsiballZ_stat.py'
Dec 13 03:51:36 compute-0 sudo[127964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:36 compute-0 python3.9[127966]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:36 compute-0 sudo[127964]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:36 compute-0 sudo[128087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdyllgftzuxmgblyjgcdnoxhrqgeewyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597896.1549113-65-203790564257888/AnsiballZ_copy.py'
Dec 13 03:51:36 compute-0 sudo[128087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:37 compute-0 python3.9[128089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597896.1549113-65-203790564257888/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=0b4c8707c05e9d4cb518d7adfe2a7929409bd4d9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:37 compute-0 sudo[128087]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:37 compute-0 sshd-session[71249]: Received disconnect from 38.102.83.147 port 42276:11: disconnected by user
Dec 13 03:51:37 compute-0 sshd-session[71249]: Disconnected from user zuul 38.102.83.147 port 42276
Dec 13 03:51:37 compute-0 sshd-session[71246]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:51:37 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 03:51:37 compute-0 systemd[1]: session-18.scope: Consumed 1min 58.467s CPU time.
Dec 13 03:51:37 compute-0 systemd-logind[796]: Session 18 logged out. Waiting for processes to exit.
Dec 13 03:51:37 compute-0 systemd-logind[796]: Removed session 18.
Dec 13 03:51:37 compute-0 sudo[128239]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvrhxfqwwbmqgasogjfpodtsebxkexi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597897.3350315-65-258108601320508/AnsiballZ_stat.py'
Dec 13 03:51:37 compute-0 sudo[128239]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:37 compute-0 python3.9[128241]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:37 compute-0 sudo[128239]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:38 compute-0 sudo[128362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uchbsjsgmtblnmwiifrpauoetxqvjpvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597897.3350315-65-258108601320508/AnsiballZ_copy.py'
Dec 13 03:51:38 compute-0 sudo[128362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:38 compute-0 ceph-mon[75071]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:38 compute-0 python3.9[128364]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597897.3350315-65-258108601320508/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=72a6bde34d598c83dc3ec02a20b96e935070385c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:38 compute-0 sudo[128362]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:38 compute-0 sudo[128514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boegmzrlovyaangjhqanotophqnjepfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597898.5458724-109-86215408600575/AnsiballZ_file.py'
Dec 13 03:51:38 compute-0 sudo[128514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:39 compute-0 python3.9[128516]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:39 compute-0 sudo[128514]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:39 compute-0 sudo[128666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcrcftcqigambitknghitqrtfsfzolee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597899.1754863-109-141346138718061/AnsiballZ_file.py'
Dec 13 03:51:39 compute-0 sudo[128666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:39 compute-0 python3.9[128668]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:39 compute-0 sudo[128666]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:40 compute-0 sudo[128818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sajaputgpqffhlkqkkxgrvtkisbxxmnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597899.8221674-124-1911905867404/AnsiballZ_stat.py'
Dec 13 03:51:40 compute-0 sudo[128818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:40 compute-0 ceph-mon[75071]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:40 compute-0 python3.9[128820]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:40 compute-0 sudo[128818]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:51:40
Dec 13 03:51:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:51:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:51:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', '.mgr', 'volumes', 'backups', 'default.rgw.log', 'images']
Dec 13 03:51:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:51:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:40 compute-0 sudo[128941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odspdrcansbwgamzhmeqpveqqcxuvemd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597899.8221674-124-1911905867404/AnsiballZ_copy.py'
Dec 13 03:51:40 compute-0 sudo[128941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:40 compute-0 python3.9[128943]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597899.8221674-124-1911905867404/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d12802a388e197a3e42602f5093ea988cb626fe6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:40 compute-0 sudo[128941]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:41 compute-0 sudo[129093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjunaczrptuovcymoefbnpldhcmfluvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597901.012874-124-173951943132846/AnsiballZ_stat.py'
Dec 13 03:51:41 compute-0 sudo[129093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:41 compute-0 python3.9[129095]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:41 compute-0 sudo[129093]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:41 compute-0 sudo[129216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvqaygsqccrbkaesoyrjduaspegzeefi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597901.012874-124-173951943132846/AnsiballZ_copy.py'
Dec 13 03:51:41 compute-0 sudo[129216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:42 compute-0 python3.9[129218]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597901.012874-124-173951943132846/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1538c47735ce588b22fb7344a2d191e50fdf03db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:42 compute-0 sudo[129216]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:42 compute-0 ceph-mon[75071]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:51:42 compute-0 sudo[129368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxkcqjqihipxxqzdojtxvgrzticftdui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597902.2214847-124-100125174060258/AnsiballZ_stat.py'
Dec 13 03:51:42 compute-0 sudo[129368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:42 compute-0 python3.9[129370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:42 compute-0 sudo[129368]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:43 compute-0 sudo[129491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqybxguloozgjgnxsgeukmwqwltnsuwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597902.2214847-124-100125174060258/AnsiballZ_copy.py'
Dec 13 03:51:43 compute-0 sudo[129491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:43 compute-0 python3.9[129493]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597902.2214847-124-100125174060258/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=dd84d9280df73897633dfd97f86b71f44c79bfaf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:43 compute-0 sudo[129491]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:43 compute-0 sudo[129643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvlyigopuzrldxxjaiiwteghwgodbkmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597903.4677892-168-2553770910282/AnsiballZ_file.py'
Dec 13 03:51:43 compute-0 sudo[129643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:43 compute-0 python3.9[129645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:43 compute-0 sudo[129643]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:44 compute-0 ceph-mon[75071]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:44 compute-0 sudo[129795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgksqgycjkhwspiwzkehcyoeqeyffypo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597904.0828927-168-191463065036993/AnsiballZ_file.py'
Dec 13 03:51:44 compute-0 sudo[129795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:44 compute-0 python3.9[129797]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:44 compute-0 sudo[129795]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:45 compute-0 sudo[129947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhvcawlfsykbyudcwkdetjiparzehihg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597904.720983-183-136632886314835/AnsiballZ_stat.py'
Dec 13 03:51:45 compute-0 sudo[129947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:45 compute-0 python3.9[129949]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:45 compute-0 sudo[129947]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:45 compute-0 sudo[130070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifypqwtdjltggtawnyhdbvfarkkgpkmg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597904.720983-183-136632886314835/AnsiballZ_copy.py'
Dec 13 03:51:45 compute-0 sudo[130070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:45 compute-0 python3.9[130072]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597904.720983-183-136632886314835/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=07bcdb04c91fa5f029d3cd9817c3e1904a60a183 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:45 compute-0 sudo[130070]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:46 compute-0 sudo[130222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlbvymmgzgwcdenuwthgnoisjlaubbrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597905.9671059-183-11373183629533/AnsiballZ_stat.py'
Dec 13 03:51:46 compute-0 sudo[130222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:46 compute-0 ceph-mon[75071]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:46 compute-0 python3.9[130224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:46 compute-0 sudo[130222]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:46 compute-0 sudo[130345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaqvhjplbcejkhogtknijojcqpneqtzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597905.9671059-183-11373183629533/AnsiballZ_copy.py'
Dec 13 03:51:46 compute-0 sudo[130345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:46 compute-0 python3.9[130347]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597905.9671059-183-11373183629533/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1538c47735ce588b22fb7344a2d191e50fdf03db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:46 compute-0 sudo[130345]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:47 compute-0 sudo[130497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnmaprwhgtdhmvrwjqkjegalrfmdodin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597907.0830944-183-95181232149964/AnsiballZ_stat.py'
Dec 13 03:51:47 compute-0 sudo[130497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:47 compute-0 ceph-mon[75071]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:47 compute-0 python3.9[130499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:47 compute-0 sudo[130497]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:47 compute-0 sudo[130620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihpepggefzuuafrioqilnpbohwekxsop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597907.0830944-183-95181232149964/AnsiballZ_copy.py'
Dec 13 03:51:47 compute-0 sudo[130620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:48 compute-0 python3.9[130622]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597907.0830944-183-95181232149964/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=733db397071ee8ae50633eb9b19294904fc8657c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:48 compute-0 sudo[130620]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:49 compute-0 sudo[130772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccnsojhqkfzfxjljbciydxbnukwixmvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597908.8100846-243-79283430143542/AnsiballZ_file.py'
Dec 13 03:51:49 compute-0 sudo[130772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:49 compute-0 python3.9[130774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:49 compute-0 sudo[130772]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:49 compute-0 sudo[130924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jycumqzkahzsgnxgnbbqxvlqguquiufb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597909.4474928-251-259991369051461/AnsiballZ_stat.py'
Dec 13 03:51:49 compute-0 sudo[130924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:49 compute-0 python3.9[130926]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:49 compute-0 sudo[130924]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:49 compute-0 ceph-mon[75071]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:50 compute-0 sudo[131047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnchgmhgwbnlfxqccrvdeqydbkwjfzfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597909.4474928-251-259991369051461/AnsiballZ_copy.py'
Dec 13 03:51:50 compute-0 sudo[131047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:50 compute-0 python3.9[131049]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597909.4474928-251-259991369051461/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d21f9684b552829aaa1944df7a5cfc182bb12c99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:50 compute-0 sudo[131047]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:50 compute-0 sudo[131199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lulyhbdhiiimvjvvmsdvpbhghehgazwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597910.687214-267-217481850945383/AnsiballZ_file.py'
Dec 13 03:51:50 compute-0 sudo[131199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:51 compute-0 python3.9[131201]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:51 compute-0 sudo[131199]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:51 compute-0 ceph-mon[75071]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:51 compute-0 sudo[131351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efhatygypcgtxpdsfljvduwjikgmmudk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597911.3659277-275-128643462190943/AnsiballZ_stat.py'
Dec 13 03:51:51 compute-0 sudo[131351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:51 compute-0 python3.9[131353]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:51 compute-0 sudo[131351]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:51:52 compute-0 sudo[131474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-roodvzbvvcmkddmuqkdmhmfzzcqiikmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597911.3659277-275-128643462190943/AnsiballZ_copy.py'
Dec 13 03:51:52 compute-0 sudo[131474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:52 compute-0 python3.9[131476]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597911.3659277-275-128643462190943/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d21f9684b552829aaa1944df7a5cfc182bb12c99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:52 compute-0 sudo[131474]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:52 compute-0 sudo[131626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsivfxdgouoqlcanoumfdrwfeqeodznv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597912.6259131-291-205828805174632/AnsiballZ_file.py'
Dec 13 03:51:52 compute-0 sudo[131626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:53 compute-0 python3.9[131628]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:53 compute-0 sudo[131626]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:53 compute-0 sudo[131778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmuyqjgmcgtfengrlxenhxznpmgzbycp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597913.2928817-299-225348317500864/AnsiballZ_stat.py'
Dec 13 03:51:53 compute-0 sudo[131778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:53 compute-0 ceph-mon[75071]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:53 compute-0 python3.9[131780]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:53 compute-0 sudo[131778]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:54 compute-0 sudo[131901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsqhcizufztbwpgmwboahjkogfzvptct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597913.2928817-299-225348317500864/AnsiballZ_copy.py'
Dec 13 03:51:54 compute-0 sudo[131901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:54 compute-0 python3.9[131903]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597913.2928817-299-225348317500864/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d21f9684b552829aaa1944df7a5cfc182bb12c99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:54 compute-0 sudo[131901]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:54 compute-0 sudo[132053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sneffhrydhibeqtpadgylblmwlnlhxpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597914.5862072-315-273871091409953/AnsiballZ_file.py'
Dec 13 03:51:54 compute-0 sudo[132053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:55 compute-0 python3.9[132055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:55 compute-0 sudo[132053]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:55 compute-0 sudo[132205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uthqubzrqjqlvrmcvxqofnpekqnanzff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597915.3055925-323-233945802191762/AnsiballZ_stat.py'
Dec 13 03:51:55 compute-0 sudo[132205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:55 compute-0 ceph-mon[75071]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:55 compute-0 python3.9[132207]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:55 compute-0 sudo[132205]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:56 compute-0 sudo[132328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apsfytlnwpfbmullpvigyggrfflsvcok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597915.3055925-323-233945802191762/AnsiballZ_copy.py'
Dec 13 03:51:56 compute-0 sudo[132328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:56 compute-0 python3.9[132330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597915.3055925-323-233945802191762/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d21f9684b552829aaa1944df7a5cfc182bb12c99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:56 compute-0 sudo[132328]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:56 compute-0 sudo[132480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsxllbszcbvatgybevcuwxgqupfnqsbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597916.6458921-339-106095308467265/AnsiballZ_file.py'
Dec 13 03:51:56 compute-0 sudo[132480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:57 compute-0 python3.9[132482]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:57 compute-0 sudo[132480]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:57 compute-0 sudo[132632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tgzlissopdfaalntuynecbbdqzibmqkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597917.3320396-347-167895147309757/AnsiballZ_stat.py'
Dec 13 03:51:57 compute-0 sudo[132632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:57 compute-0 ceph-mon[75071]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:57 compute-0 python3.9[132634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:51:57 compute-0 sudo[132632]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:51:58 compute-0 sudo[132755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltlnxysgfytcmbbyfjjqllealojkxcen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597917.3320396-347-167895147309757/AnsiballZ_copy.py'
Dec 13 03:51:58 compute-0 sudo[132755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:58 compute-0 python3.9[132757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597917.3320396-347-167895147309757/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d21f9684b552829aaa1944df7a5cfc182bb12c99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:51:58 compute-0 sudo[132755]: pam_unix(sudo:session): session closed for user root
Dec 13 03:51:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:51:59 compute-0 sudo[132907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpgedaostxdxuaqxhehfxxuvcxllqejf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597919.0741856-363-132132498564289/AnsiballZ_file.py'
Dec 13 03:51:59 compute-0 sudo[132907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:51:59 compute-0 python3.9[132909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:51:59 compute-0 sudo[132907]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:00 compute-0 sudo[133059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhvovbeibuyyrcdinvaisswfhyuzpmpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597919.729065-371-29198744722065/AnsiballZ_stat.py'
Dec 13 03:52:00 compute-0 sudo[133059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:00 compute-0 ceph-mon[75071]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:00 compute-0 python3.9[133061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:00 compute-0 sudo[133059]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:00 compute-0 sudo[133182]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmxayjhxlzzqyqirjeylzuaqzgewfidt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597919.729065-371-29198744722065/AnsiballZ_copy.py'
Dec 13 03:52:00 compute-0 sudo[133182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:00 compute-0 python3.9[133184]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597919.729065-371-29198744722065/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=d21f9684b552829aaa1944df7a5cfc182bb12c99 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:00 compute-0 sudo[133182]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:01 compute-0 sshd-session[127081]: Connection closed by 192.168.122.30 port 59468
Dec 13 03:52:01 compute-0 sshd-session[127078]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:52:01 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Dec 13 03:52:01 compute-0 systemd-logind[796]: Session 44 logged out. Waiting for processes to exit.
Dec 13 03:52:01 compute-0 systemd[1]: session-44.scope: Consumed 23.464s CPU time.
Dec 13 03:52:01 compute-0 systemd-logind[796]: Removed session 44.
Dec 13 03:52:02 compute-0 ceph-mon[75071]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:04 compute-0 ceph-mon[75071]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:05 compute-0 ceph-mon[75071]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:06 compute-0 sshd-session[133209]: Accepted publickey for zuul from 192.168.122.30 port 34108 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:52:06 compute-0 systemd-logind[796]: New session 45 of user zuul.
Dec 13 03:52:06 compute-0 systemd[1]: Started Session 45 of User zuul.
Dec 13 03:52:06 compute-0 sshd-session[133209]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:52:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:07 compute-0 sudo[133362]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgqurqjjadizcwnxunziuzvkaomrjtwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597926.5098584-22-242099912147122/AnsiballZ_file.py'
Dec 13 03:52:07 compute-0 sudo[133362]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:07 compute-0 python3.9[133364]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:07 compute-0 sudo[133362]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:07 compute-0 ceph-mon[75071]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:07 compute-0 sudo[133514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anfkyjylrrrjeixetcnfnwpibiuzcuoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597927.390259-34-134920827249202/AnsiballZ_stat.py'
Dec 13 03:52:07 compute-0 sudo[133514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:07 compute-0 python3.9[133516]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:07 compute-0 sudo[133514]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:08 compute-0 sudo[133637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocawhfpvyxkuqtcuemrwgjsznikhfigd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597927.390259-34-134920827249202/AnsiballZ_copy.py'
Dec 13 03:52:08 compute-0 sudo[133637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:08 compute-0 python3.9[133639]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597927.390259-34-134920827249202/.source.conf _original_basename=ceph.conf follow=False checksum=d464a6a655a3718ab98f1a6543ca0c5d21a48f49 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:08 compute-0 sudo[133637]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:09 compute-0 sudo[133789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkhmeibocffupscmxnjeezzddrhygnkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597928.8517394-34-275378983194291/AnsiballZ_stat.py'
Dec 13 03:52:09 compute-0 sudo[133789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:09 compute-0 python3.9[133791]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:09 compute-0 sudo[133789]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:09 compute-0 sudo[133912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drbsvtooaeabgolfdoskjeqwefoixmug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597928.8517394-34-275378983194291/AnsiballZ_copy.py'
Dec 13 03:52:09 compute-0 sudo[133912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:09 compute-0 ceph-mon[75071]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:09 compute-0 python3.9[133914]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597928.8517394-34-275378983194291/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=8e64fb0469c4b53ef15183c0deae983e54273e57 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:09 compute-0 sudo[133912]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:10 compute-0 sshd-session[133212]: Connection closed by 192.168.122.30 port 34108
Dec 13 03:52:10 compute-0 sshd-session[133209]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:52:10 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Dec 13 03:52:10 compute-0 systemd[1]: session-45.scope: Consumed 2.717s CPU time.
Dec 13 03:52:10 compute-0 systemd-logind[796]: Session 45 logged out. Waiting for processes to exit.
Dec 13 03:52:10 compute-0 systemd-logind[796]: Removed session 45.
Dec 13 03:52:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:11 compute-0 ceph-mon[75071]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:13 compute-0 ceph-mon[75071]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:15 compute-0 sshd-session[133939]: Accepted publickey for zuul from 192.168.122.30 port 48434 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:52:15 compute-0 systemd-logind[796]: New session 46 of user zuul.
Dec 13 03:52:15 compute-0 systemd[1]: Started Session 46 of User zuul.
Dec 13 03:52:15 compute-0 sshd-session[133939]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:52:15 compute-0 ceph-mon[75071]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:16 compute-0 python3.9[134092]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:52:17 compute-0 sudo[134146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:52:17 compute-0 sudo[134146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:17 compute-0 sudo[134146]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:17 compute-0 sudo[134198]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:52:17 compute-0 sudo[134198]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:17 compute-0 sudo[134308]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uspvhsedfwthzmgjclyiozumuehduuhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597937.1084785-34-139409891501644/AnsiballZ_file.py'
Dec 13 03:52:17 compute-0 sudo[134308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:17 compute-0 python3.9[134312]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:52:17 compute-0 sudo[134308]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:17 compute-0 sudo[134198]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:52:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:52:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:52:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:52:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:52:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:52:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:52:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:52:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:52:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:52:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:52:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:52:17 compute-0 sudo[134383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:52:17 compute-0 sudo[134383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:17 compute-0 sudo[134383]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:18 compute-0 ceph-mon[75071]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:52:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:52:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:52:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:52:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:52:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:52:18 compute-0 sudo[134436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:52:18 compute-0 sudo[134436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:18 compute-0 sudo[134529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvtsncecvepwlrdfcjfkiepopxqnqsng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597937.8742177-34-215806490654489/AnsiballZ_file.py'
Dec 13 03:52:18 compute-0 sudo[134529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:18 compute-0 python3.9[134531]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:52:18 compute-0 podman[134544]: 2025-12-13 03:52:18.355458117 +0000 UTC m=+0.057444108 container create fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:52:18 compute-0 sudo[134529]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:18 compute-0 systemd[1]: Started libpod-conmon-fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02.scope.
Dec 13 03:52:18 compute-0 podman[134544]: 2025-12-13 03:52:18.324967505 +0000 UTC m=+0.026953526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:52:18 compute-0 podman[134544]: 2025-12-13 03:52:18.456156327 +0000 UTC m=+0.158142348 container init fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:52:18 compute-0 podman[134544]: 2025-12-13 03:52:18.463274323 +0000 UTC m=+0.165260314 container start fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_thompson, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:52:18 compute-0 podman[134544]: 2025-12-13 03:52:18.466699958 +0000 UTC m=+0.168685949 container attach fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_thompson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:52:18 compute-0 optimistic_thompson[134561]: 167 167
Dec 13 03:52:18 compute-0 systemd[1]: libpod-fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02.scope: Deactivated successfully.
Dec 13 03:52:18 compute-0 conmon[134561]: conmon fde9293330580acab6a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02.scope/container/memory.events
Dec 13 03:52:18 compute-0 podman[134544]: 2025-12-13 03:52:18.472168099 +0000 UTC m=+0.174154120 container died fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:52:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b0e09a08dd77b3937ea6499fcad71e4355041038dcedf3c3bfe473f2ccb8c49-merged.mount: Deactivated successfully.
Dec 13 03:52:18 compute-0 podman[134544]: 2025-12-13 03:52:18.511022491 +0000 UTC m=+0.213008482 container remove fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_thompson, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:52:18 compute-0 systemd[1]: libpod-conmon-fde9293330580acab6a836b91b014b909fb9005cab7c360212abf1340d3fbd02.scope: Deactivated successfully.
Dec 13 03:52:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:18 compute-0 podman[134658]: 2025-12-13 03:52:18.66490244 +0000 UTC m=+0.021438372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:19 compute-0 python3.9[134745]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:52:19 compute-0 podman[134658]: 2025-12-13 03:52:19.325111499 +0000 UTC m=+0.681647401 container create 96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_williamson, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:52:19 compute-0 systemd[1]: Started libpod-conmon-96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470.scope.
Dec 13 03:52:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32746e6d24199db2127471a65f8daa4de2bea7f69f977cf13b1f5dffcf114a0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32746e6d24199db2127471a65f8daa4de2bea7f69f977cf13b1f5dffcf114a0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32746e6d24199db2127471a65f8daa4de2bea7f69f977cf13b1f5dffcf114a0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32746e6d24199db2127471a65f8daa4de2bea7f69f977cf13b1f5dffcf114a0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32746e6d24199db2127471a65f8daa4de2bea7f69f977cf13b1f5dffcf114a0c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:19 compute-0 ceph-mon[75071]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:19 compute-0 podman[134658]: 2025-12-13 03:52:19.658205626 +0000 UTC m=+1.014741558 container init 96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_williamson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 03:52:19 compute-0 podman[134658]: 2025-12-13 03:52:19.669013824 +0000 UTC m=+1.025549726 container start 96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:52:19 compute-0 podman[134658]: 2025-12-13 03:52:19.679180054 +0000 UTC m=+1.035715956 container attach 96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_williamson, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:52:19 compute-0 sudo[134902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdekgynjmbgrwgbxxsbyguqpnjxvwelp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597939.3993695-57-15030095261789/AnsiballZ_seboolean.py'
Dec 13 03:52:19 compute-0 sudo[134902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:20 compute-0 python3.9[134904]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 03:52:20 compute-0 charming_williamson[134824]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:52:20 compute-0 charming_williamson[134824]: --> All data devices are unavailable
Dec 13 03:52:20 compute-0 systemd[1]: libpod-96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470.scope: Deactivated successfully.
Dec 13 03:52:20 compute-0 podman[134658]: 2025-12-13 03:52:20.27553289 +0000 UTC m=+1.632068792 container died 96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:52:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-32746e6d24199db2127471a65f8daa4de2bea7f69f977cf13b1f5dffcf114a0c-merged.mount: Deactivated successfully.
Dec 13 03:52:20 compute-0 podman[134658]: 2025-12-13 03:52:20.331768902 +0000 UTC m=+1.688304804 container remove 96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_williamson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:52:20 compute-0 systemd[1]: libpod-conmon-96f31b320cfd2beb570a556c5405db1edfc9b87234f1a9454f36bd2746a43470.scope: Deactivated successfully.
Dec 13 03:52:20 compute-0 sudo[134436]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:20 compute-0 sudo[134932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:52:20 compute-0 sudo[134932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:20 compute-0 sudo[134932]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:20 compute-0 sudo[134957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:52:20 compute-0 sudo[134957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:20 compute-0 podman[134992]: 2025-12-13 03:52:20.832857418 +0000 UTC m=+0.058965410 container create feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_darwin, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:52:20 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec 13 03:52:20 compute-0 systemd[1]: Started libpod-conmon-feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287.scope.
Dec 13 03:52:20 compute-0 podman[134992]: 2025-12-13 03:52:20.794632843 +0000 UTC m=+0.020740845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:52:20 compute-0 podman[134992]: 2025-12-13 03:52:20.920886439 +0000 UTC m=+0.146994681 container init feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:52:20 compute-0 podman[134992]: 2025-12-13 03:52:20.929616299 +0000 UTC m=+0.155724291 container start feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_darwin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:52:20 compute-0 wizardly_darwin[135009]: 167 167
Dec 13 03:52:20 compute-0 podman[134992]: 2025-12-13 03:52:20.9350671 +0000 UTC m=+0.161175092 container attach feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_darwin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:52:20 compute-0 systemd[1]: libpod-feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287.scope: Deactivated successfully.
Dec 13 03:52:20 compute-0 podman[134992]: 2025-12-13 03:52:20.937759814 +0000 UTC m=+0.163867806 container died feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle)
Dec 13 03:52:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6798ea6e010d2282e9746b9aaf414d44714d7ffe1fc2d3e3d019fcdf88874a4-merged.mount: Deactivated successfully.
Dec 13 03:52:20 compute-0 podman[134992]: 2025-12-13 03:52:20.975852296 +0000 UTC m=+0.201960288 container remove feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_darwin, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 03:52:20 compute-0 systemd[1]: libpod-conmon-feaebc6d4b76b5bb8c63d38b92935c57826f635a6d2a8c3100e77c2b6521a287.scope: Deactivated successfully.
Dec 13 03:52:21 compute-0 podman[135036]: 2025-12-13 03:52:21.126473295 +0000 UTC m=+0.043763630 container create b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carver, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:52:21 compute-0 systemd[1]: Started libpod-conmon-b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee.scope.
Dec 13 03:52:21 compute-0 podman[135036]: 2025-12-13 03:52:21.106299038 +0000 UTC m=+0.023589403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f465a6b6b49c40aa50e9883f140e8e10d2d45551f73c8d2ece1e1bffa6891307/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f465a6b6b49c40aa50e9883f140e8e10d2d45551f73c8d2ece1e1bffa6891307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f465a6b6b49c40aa50e9883f140e8e10d2d45551f73c8d2ece1e1bffa6891307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f465a6b6b49c40aa50e9883f140e8e10d2d45551f73c8d2ece1e1bffa6891307/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:21 compute-0 podman[135036]: 2025-12-13 03:52:21.241783878 +0000 UTC m=+0.159074233 container init b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Dec 13 03:52:21 compute-0 podman[135036]: 2025-12-13 03:52:21.248108922 +0000 UTC m=+0.165399257 container start b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carver, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:52:21 compute-0 podman[135036]: 2025-12-13 03:52:21.255224219 +0000 UTC m=+0.172514584 container attach b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carver, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:52:21 compute-0 sudo[134902]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:21 compute-0 adoring_carver[135052]: {
Dec 13 03:52:21 compute-0 adoring_carver[135052]:     "0": [
Dec 13 03:52:21 compute-0 adoring_carver[135052]:         {
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "devices": [
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "/dev/loop3"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             ],
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_name": "ceph_lv0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_size": "21470642176",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "name": "ceph_lv0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "tags": {
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cluster_name": "ceph",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.crush_device_class": "",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.encrypted": "0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.objectstore": "bluestore",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osd_id": "0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.type": "block",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.vdo": "0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.with_tpm": "0"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             },
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "type": "block",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "vg_name": "ceph_vg0"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:         }
Dec 13 03:52:21 compute-0 adoring_carver[135052]:     ],
Dec 13 03:52:21 compute-0 adoring_carver[135052]:     "1": [
Dec 13 03:52:21 compute-0 adoring_carver[135052]:         {
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "devices": [
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "/dev/loop4"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             ],
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_name": "ceph_lv1",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_size": "21470642176",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "name": "ceph_lv1",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "tags": {
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cluster_name": "ceph",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.crush_device_class": "",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.encrypted": "0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.objectstore": "bluestore",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osd_id": "1",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.type": "block",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.vdo": "0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.with_tpm": "0"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             },
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "type": "block",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "vg_name": "ceph_vg1"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:         }
Dec 13 03:52:21 compute-0 adoring_carver[135052]:     ],
Dec 13 03:52:21 compute-0 adoring_carver[135052]:     "2": [
Dec 13 03:52:21 compute-0 adoring_carver[135052]:         {
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "devices": [
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "/dev/loop5"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             ],
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_name": "ceph_lv2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_size": "21470642176",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "name": "ceph_lv2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "tags": {
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.cluster_name": "ceph",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.crush_device_class": "",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.encrypted": "0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.objectstore": "bluestore",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osd_id": "2",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.type": "block",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.vdo": "0",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:                 "ceph.with_tpm": "0"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             },
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "type": "block",
Dec 13 03:52:21 compute-0 adoring_carver[135052]:             "vg_name": "ceph_vg2"
Dec 13 03:52:21 compute-0 adoring_carver[135052]:         }
Dec 13 03:52:21 compute-0 adoring_carver[135052]:     ]
Dec 13 03:52:21 compute-0 adoring_carver[135052]: }
Dec 13 03:52:21 compute-0 podman[135036]: 2025-12-13 03:52:21.592339877 +0000 UTC m=+0.509630232 container died b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carver, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:52:21 compute-0 systemd[1]: libpod-b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee.scope: Deactivated successfully.
Dec 13 03:52:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-f465a6b6b49c40aa50e9883f140e8e10d2d45551f73c8d2ece1e1bffa6891307-merged.mount: Deactivated successfully.
Dec 13 03:52:21 compute-0 podman[135036]: 2025-12-13 03:52:21.647944252 +0000 UTC m=+0.565234577 container remove b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:52:21 compute-0 systemd[1]: libpod-conmon-b013061e98bf4ca18e0a845d1f0b84cee72203390ab6b287d77ba7905186e7ee.scope: Deactivated successfully.
Dec 13 03:52:21 compute-0 ceph-mon[75071]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:21 compute-0 sudo[134957]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:21 compute-0 sudo[135171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:52:21 compute-0 sudo[135171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:21 compute-0 sudo[135171]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:21 compute-0 sudo[135217]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:52:21 compute-0 sudo[135217]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:21 compute-0 sudo[135271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdqiyexfbdzcrxshcrhzsnpxlykfghdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597941.5761535-67-11743186283409/AnsiballZ_setup.py'
Dec 13 03:52:21 compute-0 sudo[135271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:22 compute-0 podman[135287]: 2025-12-13 03:52:22.163393504 +0000 UTC m=+0.046944728 container create 769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:52:22 compute-0 python3.9[135273]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:52:22 compute-0 systemd[1]: Started libpod-conmon-769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c.scope.
Dec 13 03:52:22 compute-0 podman[135287]: 2025-12-13 03:52:22.141270113 +0000 UTC m=+0.024821357 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:52:22 compute-0 podman[135287]: 2025-12-13 03:52:22.256997869 +0000 UTC m=+0.140549113 container init 769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:52:22 compute-0 podman[135287]: 2025-12-13 03:52:22.267835647 +0000 UTC m=+0.151386871 container start 769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:52:22 compute-0 podman[135287]: 2025-12-13 03:52:22.272224289 +0000 UTC m=+0.155775513 container attach 769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:52:22 compute-0 elastic_goldberg[135308]: 167 167
Dec 13 03:52:22 compute-0 systemd[1]: libpod-769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c.scope: Deactivated successfully.
Dec 13 03:52:22 compute-0 conmon[135308]: conmon 769fd610602722ccf331 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c.scope/container/memory.events
Dec 13 03:52:22 compute-0 podman[135287]: 2025-12-13 03:52:22.274480671 +0000 UTC m=+0.158031905 container died 769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:52:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-073ca7c0d3e342b28fc70fc0d28d934fd16fe9bcbfa4c51adf137287d960f01e-merged.mount: Deactivated successfully.
Dec 13 03:52:22 compute-0 podman[135287]: 2025-12-13 03:52:22.318086965 +0000 UTC m=+0.201638189 container remove 769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 03:52:22 compute-0 systemd[1]: libpod-conmon-769fd610602722ccf331d4ff25f3be2649f6a93f40bf0bd2fe790a61282a2b6c.scope: Deactivated successfully.
Dec 13 03:52:22 compute-0 sudo[135271]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:22 compute-0 podman[135335]: 2025-12-13 03:52:22.478140905 +0000 UTC m=+0.023144231 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:52:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:22 compute-0 podman[135335]: 2025-12-13 03:52:22.62250867 +0000 UTC m=+0.167511976 container create 1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jennings, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:52:22 compute-0 systemd[1]: Started libpod-conmon-1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8.scope.
Dec 13 03:52:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953f3b70b182af6db1eea5a1f846c491435748729ccdbcb5d74be036eb074dea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953f3b70b182af6db1eea5a1f846c491435748729ccdbcb5d74be036eb074dea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953f3b70b182af6db1eea5a1f846c491435748729ccdbcb5d74be036eb074dea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/953f3b70b182af6db1eea5a1f846c491435748729ccdbcb5d74be036eb074dea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:52:22 compute-0 podman[135335]: 2025-12-13 03:52:22.695617649 +0000 UTC m=+0.240620975 container init 1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:52:22 compute-0 podman[135335]: 2025-12-13 03:52:22.703867637 +0000 UTC m=+0.248870943 container start 1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:52:22 compute-0 podman[135335]: 2025-12-13 03:52:22.707281631 +0000 UTC m=+0.252284937 container attach 1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jennings, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 13 03:52:22 compute-0 sudo[135429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfhhhmpcqfhfatydaoxkrrlyohunjoto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597941.5761535-67-11743186283409/AnsiballZ_dnf.py'
Dec 13 03:52:22 compute-0 sudo[135429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:23 compute-0 python3.9[135436]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:52:23 compute-0 lvm[135506]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:52:23 compute-0 lvm[135507]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:52:23 compute-0 lvm[135507]: VG ceph_vg1 finished
Dec 13 03:52:23 compute-0 lvm[135506]: VG ceph_vg0 finished
Dec 13 03:52:23 compute-0 lvm[135509]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:52:23 compute-0 lvm[135509]: VG ceph_vg2 finished
Dec 13 03:52:23 compute-0 vibrant_jennings[135351]: {}
Dec 13 03:52:23 compute-0 systemd[1]: libpod-1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8.scope: Deactivated successfully.
Dec 13 03:52:23 compute-0 podman[135335]: 2025-12-13 03:52:23.58664204 +0000 UTC m=+1.131645346 container died 1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jennings, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 03:52:23 compute-0 systemd[1]: libpod-1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8.scope: Consumed 1.514s CPU time.
Dec 13 03:52:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-953f3b70b182af6db1eea5a1f846c491435748729ccdbcb5d74be036eb074dea-merged.mount: Deactivated successfully.
Dec 13 03:52:23 compute-0 podman[135335]: 2025-12-13 03:52:23.637900336 +0000 UTC m=+1.182903642 container remove 1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_jennings, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:52:23 compute-0 systemd[1]: libpod-conmon-1867821eb74de9685ea28c88f6c74c2e3b2b25f895ceaf3a99204657189907a8.scope: Deactivated successfully.
Dec 13 03:52:23 compute-0 ceph-mon[75071]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:23 compute-0 sudo[135217]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:52:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:52:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:52:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:52:23 compute-0 sudo[135526]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:52:23 compute-0 sudo[135526]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:52:23 compute-0 sudo[135526]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:24 compute-0 sudo[135429]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:52:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:52:25 compute-0 sudo[135700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkkfwuvdepsjdnfsolnvbttgsrqtdvbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597944.817851-79-22008342870898/AnsiballZ_systemd.py'
Dec 13 03:52:25 compute-0 sudo[135700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:25 compute-0 ceph-mon[75071]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:25 compute-0 python3.9[135702]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:52:25 compute-0 sudo[135700]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:26 compute-0 sudo[135855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfprsfiwkiitklevdqjcmwvzyiutjnlv ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765597946.1564176-87-28009313110433/AnsiballZ_edpm_nftables_snippet.py'
Dec 13 03:52:26 compute-0 sudo[135855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:26 compute-0 python3[135857]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Dec 13 03:52:26 compute-0 sudo[135855]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:27 compute-0 sudo[136007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsinhwkyttlkbkxxeuwmifmifmyrsumx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597947.0445845-96-210153165513953/AnsiballZ_file.py'
Dec 13 03:52:27 compute-0 sudo[136007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:27 compute-0 python3.9[136009]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:27 compute-0 sudo[136007]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:27 compute-0 ceph-mon[75071]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:28 compute-0 sudo[136159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snrjsdddciziceftnvkhdsteqalafksp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597947.7102604-104-5210037070154/AnsiballZ_stat.py'
Dec 13 03:52:28 compute-0 sudo[136159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:28 compute-0 python3.9[136161]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:28 compute-0 sudo[136159]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:28 compute-0 sudo[136237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdwxwlfgkeosmmzpksywxuznjdepyvzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597947.7102604-104-5210037070154/AnsiballZ_file.py'
Dec 13 03:52:28 compute-0 sudo[136237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:28 compute-0 python3.9[136239]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:28 compute-0 sudo[136237]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:29 compute-0 sudo[136389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shturtzopmryjsdslscxsxctummblxly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597949.0353327-116-271396820549849/AnsiballZ_stat.py'
Dec 13 03:52:29 compute-0 sudo[136389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:29 compute-0 python3.9[136391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:29 compute-0 sudo[136389]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:29 compute-0 sudo[136467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwgqhuvenedtcdjwasmskhihnxnlhxoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597949.0353327-116-271396820549849/AnsiballZ_file.py'
Dec 13 03:52:29 compute-0 sudo[136467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:29 compute-0 ceph-mon[75071]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:29 compute-0 python3.9[136469]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.gjv25ua9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:30 compute-0 sudo[136467]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:30 compute-0 sudo[136619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxmpocadggidoyprfrwatpjmroxjrco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597950.186545-128-238821934837237/AnsiballZ_stat.py'
Dec 13 03:52:30 compute-0 sudo[136619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:30 compute-0 python3.9[136621]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:30 compute-0 sudo[136619]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:30 compute-0 sudo[136697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcrkdnsmtscxsukmwglwdizmyzmtghvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597950.186545-128-238821934837237/AnsiballZ_file.py'
Dec 13 03:52:30 compute-0 sudo[136697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:31 compute-0 python3.9[136699]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:31 compute-0 sudo[136697]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:31 compute-0 sudo[136849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwwfswieqvowzsxyuqnclqaidxqmdlhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597951.4108698-141-195448775339618/AnsiballZ_command.py'
Dec 13 03:52:31 compute-0 sudo[136849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:31 compute-0 ceph-mon[75071]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:32 compute-0 python3.9[136851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:52:32 compute-0 sudo[136849]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:32 compute-0 sudo[137002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urokcydrgzcgxenzztqssmdpuswvurpg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765597952.3217487-149-251180809571342/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 03:52:32 compute-0 sudo[137002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:32 compute-0 python3[137004]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 03:52:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.007563) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:52:33 compute-0 sudo[137002]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597953007669, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1534, "num_deletes": 251, "total_data_size": 2324071, "memory_usage": 2360240, "flush_reason": "Manual Compaction"}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597953025121, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1361554, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7439, "largest_seqno": 8972, "table_properties": {"data_size": 1356397, "index_size": 2297, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14219, "raw_average_key_size": 20, "raw_value_size": 1344545, "raw_average_value_size": 1923, "num_data_blocks": 109, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597804, "oldest_key_time": 1765597804, "file_creation_time": 1765597953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17624 microseconds, and 6202 cpu microseconds.
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.025201) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1361554 bytes OK
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.025227) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.027114) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.027137) EVENT_LOG_v1 {"time_micros": 1765597953027130, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.027158) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2317234, prev total WAL file size 2317234, number of live WAL files 2.
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.028098) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1329KB)], [20(7605KB)]
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597953028181, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9149331, "oldest_snapshot_seqno": -1}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3393 keys, 7168676 bytes, temperature: kUnknown
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597953094306, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7168676, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7142551, "index_size": 16543, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8517, "raw_key_size": 81369, "raw_average_key_size": 23, "raw_value_size": 7077800, "raw_average_value_size": 2086, "num_data_blocks": 732, "num_entries": 3393, "num_filter_entries": 3393, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765597953, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.094792) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7168676 bytes
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.098656) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.0 rd, 108.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.4 +0.0 blob) out(6.8 +0.0 blob), read-write-amplify(12.0) write-amplify(5.3) OK, records in: 3838, records dropped: 445 output_compression: NoCompression
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.098758) EVENT_LOG_v1 {"time_micros": 1765597953098704, "job": 6, "event": "compaction_finished", "compaction_time_micros": 66315, "compaction_time_cpu_micros": 24111, "output_level": 6, "num_output_files": 1, "total_output_size": 7168676, "num_input_records": 3838, "num_output_records": 3393, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597953099627, "job": 6, "event": "table_file_deletion", "file_number": 22}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765597953101886, "job": 6, "event": "table_file_deletion", "file_number": 20}
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.027961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.101989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.101995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.101997) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.101999) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:52:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:52:33.102001) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:52:33 compute-0 sudo[137154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeujdgtrzkpmfplnnxrgyuulucdwgfpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597953.1789072-157-81645575056749/AnsiballZ_stat.py'
Dec 13 03:52:33 compute-0 sudo[137154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:33 compute-0 python3.9[137156]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:33 compute-0 sudo[137154]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:34 compute-0 ceph-mon[75071]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:34 compute-0 sudo[137279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ockkwvheblovujiunwrykwmupmyrvigm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597953.1789072-157-81645575056749/AnsiballZ_copy.py'
Dec 13 03:52:34 compute-0 sudo[137279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:34 compute-0 python3.9[137281]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597953.1789072-157-81645575056749/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:34 compute-0 sudo[137279]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:34 compute-0 sudo[137431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjbxrdbzdljzsgeoqihwumknhzksarfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597954.6422987-172-47660641928919/AnsiballZ_stat.py'
Dec 13 03:52:34 compute-0 sudo[137431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:35 compute-0 python3.9[137433]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:35 compute-0 sudo[137431]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:35 compute-0 sudo[137556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pfnfhjwprjghlhkgyzhigotzbzvvoehl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597954.6422987-172-47660641928919/AnsiballZ_copy.py'
Dec 13 03:52:35 compute-0 sudo[137556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:36 compute-0 python3.9[137558]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597954.6422987-172-47660641928919/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:36 compute-0 sudo[137556]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:36 compute-0 ceph-mon[75071]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:36 compute-0 sudo[137708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qoiiskpwxduxahrzmvpdresxmptowkro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597956.3593187-187-147686075832120/AnsiballZ_stat.py'
Dec 13 03:52:36 compute-0 sudo[137708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:36 compute-0 python3.9[137710]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:36 compute-0 sudo[137708]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:37 compute-0 sudo[137833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cswbprrmoalnrthtbpuopstvpfpuxydb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597956.3593187-187-147686075832120/AnsiballZ_copy.py'
Dec 13 03:52:37 compute-0 sudo[137833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:37 compute-0 python3.9[137835]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597956.3593187-187-147686075832120/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:37 compute-0 sudo[137833]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:38 compute-0 sudo[137985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lodefhajqmyalkbwbtqfirfyszghugst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597957.710364-202-31786751559276/AnsiballZ_stat.py'
Dec 13 03:52:38 compute-0 sudo[137985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:38 compute-0 python3.9[137987]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:38 compute-0 ceph-mon[75071]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:38 compute-0 sudo[137985]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:38 compute-0 sudo[138110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-themqeftbeyilhkupqhpzruvszchkpvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597957.710364-202-31786751559276/AnsiballZ_copy.py'
Dec 13 03:52:38 compute-0 sudo[138110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:38 compute-0 python3.9[138112]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597957.710364-202-31786751559276/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:38 compute-0 sudo[138110]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:39 compute-0 sudo[138262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aninibubbfagdywdjfipckkujhjwqaoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597959.0496337-217-189643744979758/AnsiballZ_stat.py'
Dec 13 03:52:39 compute-0 sudo[138262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:39 compute-0 python3.9[138264]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:39 compute-0 sudo[138262]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:40 compute-0 sudo[138387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwzotcsedfjvmnfwrrcfqzkcppnfisrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597959.0496337-217-189643744979758/AnsiballZ_copy.py'
Dec 13 03:52:40 compute-0 sudo[138387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:40 compute-0 ceph-mon[75071]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:40 compute-0 python3.9[138389]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765597959.0496337-217-189643744979758/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:40 compute-0 sudo[138387]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:52:40
Dec 13 03:52:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:52:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:52:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'backups', 'volumes', '.mgr', 'vms']
Dec 13 03:52:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:52:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:40 compute-0 sudo[138539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flvxaxehcnjwpnrgwavudrfnpthdxpfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597960.573048-232-40354271949017/AnsiballZ_file.py'
Dec 13 03:52:40 compute-0 sudo[138539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:41 compute-0 python3.9[138541]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:41 compute-0 sudo[138539]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:41 compute-0 sudo[138691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsmheaholjhlzujaafguhoglfcsbzylh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597961.4037967-240-254071820048510/AnsiballZ_command.py'
Dec 13 03:52:41 compute-0 sudo[138691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:41 compute-0 python3.9[138693]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:52:41 compute-0 sudo[138691]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:42 compute-0 ceph-mon[75071]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:52:42 compute-0 sudo[138846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sumndxhmnjljwkrsoxemsbuspqngmshx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597962.1388006-248-162216133143869/AnsiballZ_blockinfile.py'
Dec 13 03:52:42 compute-0 sudo[138846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:42 compute-0 python3.9[138848]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:42 compute-0 sudo[138846]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:43 compute-0 sudo[138998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ohfzfvsgzgdybhdryoswgwrqunpbivof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597963.0497587-257-226164235220019/AnsiballZ_command.py'
Dec 13 03:52:43 compute-0 sudo[138998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:43 compute-0 python3.9[139000]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:52:43 compute-0 sudo[138998]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:44 compute-0 sudo[139151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksdohpvcvorcnptnxihqbqpxrvzvdfpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597963.7590044-265-65112225794486/AnsiballZ_stat.py'
Dec 13 03:52:44 compute-0 sudo[139151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:44 compute-0 python3.9[139153]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:52:44 compute-0 sudo[139151]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:44 compute-0 ceph-mon[75071]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:44 compute-0 sudo[139305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jprhmlumsuriayqgnzwtskvzgusadjox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597964.4588127-273-258182309858838/AnsiballZ_command.py'
Dec 13 03:52:44 compute-0 sudo[139305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:44 compute-0 python3.9[139307]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:52:44 compute-0 sudo[139305]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:45 compute-0 ceph-mon[75071]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:45 compute-0 sudo[139460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjefirigumoswbnievhiazuvkqgxtmnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597965.120138-281-237449535272266/AnsiballZ_file.py'
Dec 13 03:52:45 compute-0 sudo[139460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:45 compute-0 python3.9[139462]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:45 compute-0 sudo[139460]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:47 compute-0 python3.9[139612]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:52:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:48 compute-0 sudo[139763]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvdhentvmpfmsfpseholjhibvypslhhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597967.9530303-321-124965900191597/AnsiballZ_command.py'
Dec 13 03:52:48 compute-0 sudo[139763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:48 compute-0 python3.9[139765]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:cb:58:d7:dd" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:52:48 compute-0 ovs-vsctl[139766]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:cb:58:d7:dd external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Dec 13 03:52:48 compute-0 sudo[139763]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:48 compute-0 ceph-mon[75071]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:48 compute-0 sudo[139916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocjgswubwxqycfeigyxvyuprumfctjfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597968.6335545-330-103064709371361/AnsiballZ_command.py'
Dec 13 03:52:48 compute-0 sudo[139916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:49 compute-0 python3.9[139918]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:52:49 compute-0 sudo[139916]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:49 compute-0 sudo[140071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpqvrzidhhhtolzbjnjfavntxnqerlck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597969.2988942-338-201115434821073/AnsiballZ_command.py'
Dec 13 03:52:49 compute-0 sudo[140071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:49 compute-0 ceph-mon[75071]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:49 compute-0 python3.9[140073]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:52:49 compute-0 ovs-vsctl[140074]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Dec 13 03:52:49 compute-0 sudo[140071]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:50 compute-0 python3.9[140224]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:52:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:51 compute-0 sudo[140376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpljvgcjmnbilguxmzsywyghicwpybrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597970.7880592-355-108527955051758/AnsiballZ_file.py'
Dec 13 03:52:51 compute-0 sudo[140376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:51 compute-0 python3.9[140378]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:52:51 compute-0 sudo[140376]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:51 compute-0 sudo[140528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xznqtrvltfmtwivvzuyyigxmdlommnia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597971.4642634-363-235718420119087/AnsiballZ_stat.py'
Dec 13 03:52:51 compute-0 sudo[140528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:51 compute-0 ceph-mon[75071]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:51 compute-0 python3.9[140530]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:52 compute-0 sudo[140528]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:52:52 compute-0 sudo[140606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cllamdcurlurdmxewzypgxddqtwfbihd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597971.4642634-363-235718420119087/AnsiballZ_file.py'
Dec 13 03:52:52 compute-0 sudo[140606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:52 compute-0 python3.9[140608]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:52:52 compute-0 sudo[140606]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:52 compute-0 sudo[140758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiamuliwkigaskxvbfeirnxurllyzesz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597972.6402042-363-18197454148327/AnsiballZ_stat.py'
Dec 13 03:52:52 compute-0 sudo[140758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:53 compute-0 python3.9[140760]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:53 compute-0 sudo[140758]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:53 compute-0 sudo[140836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mimcdvrxebhpvmnswrwrtckzkyopcfyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597972.6402042-363-18197454148327/AnsiballZ_file.py'
Dec 13 03:52:53 compute-0 sudo[140836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:53 compute-0 python3.9[140838]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:52:53 compute-0 sudo[140836]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:53 compute-0 ceph-mon[75071]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:54 compute-0 sudo[140988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rilirxdviwywztxblhcpprncgifbykup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597973.78696-386-152148170459682/AnsiballZ_file.py'
Dec 13 03:52:54 compute-0 sudo[140988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:54 compute-0 python3.9[140990]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:54 compute-0 sudo[140988]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:54 compute-0 sudo[141140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztqserjflnjkcvpvowvtsafwcmzobttb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597974.4949105-394-79013210024441/AnsiballZ_stat.py'
Dec 13 03:52:54 compute-0 sudo[141140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:54 compute-0 python3.9[141142]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:55 compute-0 sudo[141140]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:55 compute-0 sudo[141218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-halzcxfufrflxjzzqthocunxpsmhvqma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597974.4949105-394-79013210024441/AnsiballZ_file.py'
Dec 13 03:52:55 compute-0 sudo[141218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:55 compute-0 python3.9[141220]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:55 compute-0 sudo[141218]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:55 compute-0 sudo[141370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tybefyuxbclmdxzbnhzmawzjsfqeyuve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597975.639543-406-268295863145978/AnsiballZ_stat.py'
Dec 13 03:52:55 compute-0 sudo[141370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:56 compute-0 python3.9[141372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:56 compute-0 sudo[141370]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:56 compute-0 sudo[141448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbogytrljeyrqhlhizxqmhdssymzulqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597975.639543-406-268295863145978/AnsiballZ_file.py'
Dec 13 03:52:56 compute-0 sudo[141448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:56 compute-0 ceph-mon[75071]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:56 compute-0 python3.9[141450]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:56 compute-0 sudo[141448]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:57 compute-0 sudo[141600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvjuejoswqyrqiljsnlmjvnkzgzisdyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597976.7683985-418-203802799558433/AnsiballZ_systemd.py'
Dec 13 03:52:57 compute-0 sudo[141600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:57 compute-0 ceph-mon[75071]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:57 compute-0 python3.9[141602]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:52:57 compute-0 systemd[1]: Reloading.
Dec 13 03:52:57 compute-0 systemd-rc-local-generator[141625]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:52:57 compute-0 systemd-sysv-generator[141632]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:52:57 compute-0 sudo[141600]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:52:58 compute-0 sudo[141789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnuzgkcsaqrctwcecpozrkjwsfxgembn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597978.0404844-426-237833329582263/AnsiballZ_stat.py'
Dec 13 03:52:58 compute-0 sudo[141789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:58 compute-0 python3.9[141791]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:58 compute-0 sudo[141789]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:52:58 compute-0 sudo[141867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhjhwywwqmsgcljgittgxdossqbcyobh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597978.0404844-426-237833329582263/AnsiballZ_file.py'
Dec 13 03:52:58 compute-0 sudo[141867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:58 compute-0 python3.9[141869]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:52:59 compute-0 sudo[141867]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:59 compute-0 sudo[142019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkjbglhnzyhnkxunalnkljdbhdgcvuqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597979.1358602-438-237768476710829/AnsiballZ_stat.py'
Dec 13 03:52:59 compute-0 sudo[142019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:52:59 compute-0 python3.9[142021]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:52:59 compute-0 sudo[142019]: pam_unix(sudo:session): session closed for user root
Dec 13 03:52:59 compute-0 sudo[142097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-biseaulauysuqmbmuhvmgortwbxborov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597979.1358602-438-237768476710829/AnsiballZ_file.py'
Dec 13 03:52:59 compute-0 sudo[142097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:00 compute-0 python3.9[142099]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:53:00 compute-0 sudo[142097]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:00 compute-0 ceph-mon[75071]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:01 compute-0 sudo[142249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgcuamrnntcfrxicitpnfrgceyoehxxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597980.2857356-450-149399019596414/AnsiballZ_systemd.py'
Dec 13 03:53:01 compute-0 sudo[142249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:01 compute-0 python3.9[142251]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:53:01 compute-0 systemd[1]: Reloading.
Dec 13 03:53:01 compute-0 systemd-rc-local-generator[142277]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:53:01 compute-0 systemd-sysv-generator[142280]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:53:01 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 03:53:01 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 03:53:01 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 03:53:01 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 03:53:01 compute-0 sudo[142249]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:01 compute-0 ceph-mon[75071]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:02 compute-0 sudo[142442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tojtmshoivjhvdsstpurcdrhxcuhmbey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597982.0342333-460-118496969689742/AnsiballZ_file.py'
Dec 13 03:53:02 compute-0 sudo[142442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:02 compute-0 python3.9[142444]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:02 compute-0 sudo[142442]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:02 compute-0 sudo[142594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrbajwxsmcrpmfmndqcobltumlyfkenv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597982.7000234-468-117094487643467/AnsiballZ_stat.py'
Dec 13 03:53:02 compute-0 sudo[142594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:03 compute-0 python3.9[142596]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:03 compute-0 sudo[142594]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:03 compute-0 sudo[142717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csnustibqcicpdecuoszazwijrvrcmuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597982.7000234-468-117094487643467/AnsiballZ_copy.py'
Dec 13 03:53:03 compute-0 sudo[142717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:04 compute-0 python3.9[142719]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765597982.7000234-468-117094487643467/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:04 compute-0 sudo[142717]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:04 compute-0 ceph-mon[75071]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:04 compute-0 sudo[142869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltbpjhopdletqgqjxbqunyopltrqyzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597984.6707861-485-39381248236728/AnsiballZ_file.py'
Dec 13 03:53:04 compute-0 sudo[142869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:05 compute-0 python3.9[142871]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:05 compute-0 sudo[142869]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:05 compute-0 sudo[143021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjulisosztwktecktzsjfydiajnzgplf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597985.3813055-493-20034670042773/AnsiballZ_stat.py'
Dec 13 03:53:05 compute-0 sudo[143021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:05 compute-0 ceph-mon[75071]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:05 compute-0 python3.9[143023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:05 compute-0 sudo[143021]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:06 compute-0 sudo[143144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhiffvlwfvuiqmvgerqcyminukaglfio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597985.3813055-493-20034670042773/AnsiballZ_copy.py'
Dec 13 03:53:06 compute-0 sudo[143144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:06 compute-0 python3.9[143146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765597985.3813055-493-20034670042773/.source.json _original_basename=.mmhalxbh follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:53:06 compute-0 sudo[143144]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:06 compute-0 sudo[143297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baielzbqmlnbbwtdkegemhrgyrawigus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597986.584321-508-24579795264120/AnsiballZ_file.py'
Dec 13 03:53:06 compute-0 sudo[143297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:07 compute-0 ceph-mon[75071]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:07 compute-0 python3.9[143299]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:53:07 compute-0 sudo[143297]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:08 compute-0 sudo[143450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xijjlaciyixqffufcqloqcpqvbzjvvjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597987.9774127-516-17189439897543/AnsiballZ_stat.py'
Dec 13 03:53:08 compute-0 sudo[143450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:08 compute-0 sudo[143450]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:08 compute-0 sudo[143573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmbtcyturdzcqtihofmrqnxjcdjtcnfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597987.9774127-516-17189439897543/AnsiballZ_copy.py'
Dec 13 03:53:08 compute-0 sudo[143573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:09 compute-0 sudo[143573]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:09 compute-0 ceph-mon[75071]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:09 compute-0 sudo[143725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prydkgagotcemquewpibtvplwjwcwjjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597989.496727-533-198089885808430/AnsiballZ_container_config_data.py'
Dec 13 03:53:09 compute-0 sudo[143725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:10 compute-0 python3.9[143727]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec 13 03:53:10 compute-0 sudo[143725]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:10 compute-0 sudo[143877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhyfkfgrkwvenirnxtlflnwlvmjiyygg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597990.3596857-542-197079929942879/AnsiballZ_container_config_hash.py'
Dec 13 03:53:10 compute-0 sudo[143877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:11 compute-0 python3.9[143879]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 03:53:11 compute-0 sudo[143877]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:11 compute-0 sudo[144029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ontkosyjqbylebszxxxpievgftemzpew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597991.263674-551-124920968028762/AnsiballZ_podman_container_info.py'
Dec 13 03:53:11 compute-0 sudo[144029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:11 compute-0 ceph-mon[75071]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:11 compute-0 python3.9[144031]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 03:53:12 compute-0 sudo[144029]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:13 compute-0 sudo[144206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmlqctqmvtcfmkwvtjeaznabgtqzublu ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765597992.6701283-564-175397930707963/AnsiballZ_edpm_container_manage.py'
Dec 13 03:53:13 compute-0 sudo[144206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:13 compute-0 python3[144208]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 03:53:13 compute-0 ceph-mon[75071]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:53:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Cumulative writes: 2048 writes, 9123 keys, 2048 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2048 writes, 2048 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2048 writes, 9123 keys, 2048 commit groups, 1.0 writes per commit group, ingest: 12.12 MB, 0.02 MB/s
                                           Interval WAL: 2048 writes, 2048 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.5      0.12              0.02         3    0.041       0      0       0.0       0.0
                                             L6      1/0    6.84 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    133.4    117.5      0.12              0.04         2    0.061    7266    734       0.0       0.0
                                            Sum      1/0    6.84 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     66.4     94.4      0.24              0.06         5    0.049    7266    734       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     85.9    121.9      0.19              0.06         4    0.047    7266    734       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    133.4    117.5      0.12              0.04         2    0.061    7266    734       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    129.9      0.07              0.02         2    0.034       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556f7ce578d0#2 capacity: 308.00 MB usage: 631.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(37,539.69 KB,0.171116%) FilterBlock(6,28.61 KB,0.00907105%) IndexBlock(6,62.92 KB,0.0199504%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 03:53:15 compute-0 ceph-mon[75071]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:17 compute-0 ceph-mon[75071]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:19 compute-0 podman[144221]: 2025-12-13 03:53:19.149652809 +0000 UTC m=+5.633150011 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 13 03:53:19 compute-0 podman[144341]: 2025-12-13 03:53:19.30705915 +0000 UTC m=+0.050482291 container create 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Dec 13 03:53:19 compute-0 podman[144341]: 2025-12-13 03:53:19.280493489 +0000 UTC m=+0.023916650 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 13 03:53:19 compute-0 python3[144208]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec 13 03:53:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:19 compute-0 sudo[144206]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:19 compute-0 sudo[144529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymdvlofeijijxfoxngwlsjdwxjhaglbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765597999.6042926-572-263567153248999/AnsiballZ_stat.py'
Dec 13 03:53:19 compute-0 sudo[144529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:19 compute-0 ceph-mon[75071]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:20 compute-0 python3.9[144531]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:53:20 compute-0 sudo[144529]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:20 compute-0 sudo[144683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvmxmufgpalravumjhbhyhnoscjfwxol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598000.3218613-581-81561297707690/AnsiballZ_file.py'
Dec 13 03:53:20 compute-0 sudo[144683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:20 compute-0 python3.9[144685]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:53:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:20 compute-0 sudo[144683]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:21 compute-0 sudo[144759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpegkqzgusnlhyxjfqricgtwbxyibxib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598000.3218613-581-81561297707690/AnsiballZ_stat.py'
Dec 13 03:53:21 compute-0 sudo[144759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:21 compute-0 python3.9[144761]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:53:21 compute-0 sudo[144759]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:21 compute-0 sudo[144910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctagvpubbybyfjavcaavufkycuvobao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598001.3431823-581-185787054589243/AnsiballZ_copy.py'
Dec 13 03:53:21 compute-0 sudo[144910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:21 compute-0 python3.9[144912]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765598001.3431823-581-185787054589243/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:53:22 compute-0 sudo[144910]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:22 compute-0 ceph-mon[75071]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:22 compute-0 sudo[144986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okvhbswkrjdnqzbultgfilhjhbtbrvmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598001.3431823-581-185787054589243/AnsiballZ_systemd.py'
Dec 13 03:53:22 compute-0 sudo[144986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:22 compute-0 python3.9[144988]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 03:53:22 compute-0 systemd[1]: Reloading.
Dec 13 03:53:22 compute-0 systemd-sysv-generator[145019]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:53:22 compute-0 systemd-rc-local-generator[145015]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:53:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:22 compute-0 sudo[144986]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:23 compute-0 sudo[145097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcpxygqqqzmuydrmhfqjosldbaygcavg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598001.3431823-581-185787054589243/AnsiballZ_systemd.py'
Dec 13 03:53:23 compute-0 sudo[145097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:23 compute-0 python3.9[145099]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:53:23 compute-0 systemd[1]: Reloading.
Dec 13 03:53:23 compute-0 ceph-mon[75071]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:23 compute-0 systemd-rc-local-generator[145129]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:53:23 compute-0 systemd-sysv-generator[145132]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:53:23 compute-0 sudo[145137]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:53:23 compute-0 sudo[145137]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:23 compute-0 sudo[145137]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:23 compute-0 systemd[1]: Starting ovn_controller container...
Dec 13 03:53:23 compute-0 sudo[145165]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:53:23 compute-0 sudo[145165]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:53:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5d2fca7d670c10b8a0bc04f188118c0a8925664e3b953d7645cab4e022ef53/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:24 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b.
Dec 13 03:53:24 compute-0 podman[145172]: 2025-12-13 03:53:24.039213383 +0000 UTC m=+0.135369952 container init 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 03:53:24 compute-0 ovn_controller[145204]: + sudo -E kolla_set_configs
Dec 13 03:53:24 compute-0 podman[145172]: 2025-12-13 03:53:24.069540634 +0000 UTC m=+0.165697193 container start 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:53:24 compute-0 edpm-start-podman-container[145172]: ovn_controller
Dec 13 03:53:24 compute-0 systemd[1]: Created slice User Slice of UID 0.
Dec 13 03:53:24 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec 13 03:53:24 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec 13 03:53:24 compute-0 systemd[1]: Starting User Manager for UID 0...
Dec 13 03:53:24 compute-0 systemd[145232]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Dec 13 03:53:24 compute-0 edpm-start-podman-container[145164]: Creating additional drop-in dependency for "ovn_controller" (1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b)
Dec 13 03:53:24 compute-0 systemd[1]: Reloading.
Dec 13 03:53:24 compute-0 podman[145211]: 2025-12-13 03:53:24.199910441 +0000 UTC m=+0.104679291 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 03:53:24 compute-0 systemd[145232]: Queued start job for default target Main User Target.
Dec 13 03:53:24 compute-0 systemd-rc-local-generator[145301]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:53:24 compute-0 systemd[145232]: Created slice User Application Slice.
Dec 13 03:53:24 compute-0 systemd[145232]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec 13 03:53:24 compute-0 systemd[145232]: Started Daily Cleanup of User's Temporary Directories.
Dec 13 03:53:24 compute-0 systemd[145232]: Reached target Paths.
Dec 13 03:53:24 compute-0 systemd[145232]: Reached target Timers.
Dec 13 03:53:24 compute-0 systemd[145232]: Starting D-Bus User Message Bus Socket...
Dec 13 03:53:24 compute-0 systemd[145232]: Starting Create User's Volatile Files and Directories...
Dec 13 03:53:24 compute-0 systemd-sysv-generator[145305]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:53:24 compute-0 systemd[145232]: Listening on D-Bus User Message Bus Socket.
Dec 13 03:53:24 compute-0 systemd[145232]: Reached target Sockets.
Dec 13 03:53:24 compute-0 systemd[145232]: Finished Create User's Volatile Files and Directories.
Dec 13 03:53:24 compute-0 systemd[145232]: Reached target Basic System.
Dec 13 03:53:24 compute-0 systemd[145232]: Reached target Main User Target.
Dec 13 03:53:24 compute-0 systemd[145232]: Startup finished in 164ms.
Dec 13 03:53:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:24 compute-0 sudo[145165]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:53:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:53:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:53:24 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:53:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:53:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:24 compute-0 systemd[1]: Started User Manager for UID 0.
Dec 13 03:53:24 compute-0 systemd[1]: Started ovn_controller container.
Dec 13 03:53:24 compute-0 systemd[1]: 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b-20d4dd8cd0e36abd.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 03:53:24 compute-0 systemd[1]: 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b-20d4dd8cd0e36abd.service: Failed with result 'exit-code'.
Dec 13 03:53:24 compute-0 systemd[1]: Started Session c1 of User root.
Dec 13 03:53:25 compute-0 sudo[145097]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:25 compute-0 ovn_controller[145204]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 03:53:25 compute-0 ovn_controller[145204]: INFO:__main__:Validating config file
Dec 13 03:53:25 compute-0 ovn_controller[145204]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 03:53:25 compute-0 ovn_controller[145204]: INFO:__main__:Writing out command to execute
Dec 13 03:53:25 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Dec 13 03:53:25 compute-0 ovn_controller[145204]: ++ cat /run_command
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + ARGS=
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + sudo kolla_copy_cacerts
Dec 13 03:53:25 compute-0 systemd[1]: Started Session c2 of User root.
Dec 13 03:53:25 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + [[ ! -n '' ]]
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + . kolla_extend_start
Dec 13 03:53:25 compute-0 ovn_controller[145204]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + umask 0022
Dec 13 03:53:25 compute-0 ovn_controller[145204]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.1195] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.1204] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <warn>  [1765598005.1207] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.1214] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.1219] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.1222] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 03:53:25 compute-0 kernel: br-int: entered promiscuous mode
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00021|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00022|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00023|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00024|main|INFO|OVS feature set changed, force recompute.
Dec 13 03:53:25 compute-0 systemd-udevd[145390]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 03:53:25 compute-0 ovn_controller[145204]: 2025-12-13T03:53:25Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.2704] manager: (ovn-f10baa-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec 13 03:53:25 compute-0 sudo[145499]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evsipkiouoddxwowwngtnnlrqfnshvrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598005.1883948-609-85776081110209/AnsiballZ_command.py'
Dec 13 03:53:25 compute-0 sudo[145499]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:25 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.7102] device (genev_sys_6081): carrier: link connected
Dec 13 03:53:25 compute-0 NetworkManager[48899]: <info>  [1765598005.7105] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Dec 13 03:53:25 compute-0 systemd-udevd[145395]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 03:53:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:53:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:53:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:53:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:53:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:53:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:53:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:53:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:53:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:53:25 compute-0 python3.9[145501]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:53:25 compute-0 ovs-vsctl[145527]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec 13 03:53:25 compute-0 sudo[145504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:53:25 compute-0 sudo[145504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:25 compute-0 sudo[145504]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:25 compute-0 sudo[145499]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:25 compute-0 sudo[145530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:53:25 compute-0 sudo[145530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:26 compute-0 podman[145665]: 2025-12-13 03:53:26.248870421 +0000 UTC m=+0.048273653 container create 41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 03:53:26 compute-0 systemd[1]: Started libpod-conmon-41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2.scope.
Dec 13 03:53:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:53:26 compute-0 podman[145665]: 2025-12-13 03:53:26.227073218 +0000 UTC m=+0.026476480 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:26 compute-0 podman[145665]: 2025-12-13 03:53:26.339546956 +0000 UTC m=+0.138950218 container init 41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:53:26 compute-0 sudo[145734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azyaazuluqecycyhajpabysdmqqsjipt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598006.0596497-617-251935757559809/AnsiballZ_command.py'
Dec 13 03:53:26 compute-0 sudo[145734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:26 compute-0 podman[145665]: 2025-12-13 03:53:26.349548943 +0000 UTC m=+0.148952175 container start 41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:53:26 compute-0 systemd[1]: libpod-41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2.scope: Deactivated successfully.
Dec 13 03:53:26 compute-0 distracted_brahmagupta[145714]: 167 167
Dec 13 03:53:26 compute-0 conmon[145714]: conmon 41233ff2d120ab05a1c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2.scope/container/memory.events
Dec 13 03:53:26 compute-0 podman[145665]: 2025-12-13 03:53:26.360934868 +0000 UTC m=+0.160338100 container attach 41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:53:26 compute-0 podman[145665]: 2025-12-13 03:53:26.363064915 +0000 UTC m=+0.162468167 container died 41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 03:53:26 compute-0 python3.9[145737]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:53:26 compute-0 ovs-vsctl[145753]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec 13 03:53:26 compute-0 sudo[145734]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-3325085102a23ae287d1866ac4f2e1bd1e1164fc41b790d73ea3caa15a35721c-merged.mount: Deactivated successfully.
Dec 13 03:53:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:26 compute-0 ceph-mon[75071]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:53:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:53:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:53:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:53:26 compute-0 podman[145665]: 2025-12-13 03:53:26.875167501 +0000 UTC m=+0.674570733 container remove 41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:53:26 compute-0 systemd[1]: libpod-conmon-41233ff2d120ab05a1c022637b4ffc32383504d2ac9a9818f359faba1fc922c2.scope: Deactivated successfully.
Dec 13 03:53:27 compute-0 podman[145861]: 2025-12-13 03:53:27.051158748 +0000 UTC m=+0.047898482 container create e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mahavira, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:53:27 compute-0 systemd[1]: Started libpod-conmon-e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65.scope.
Dec 13 03:53:27 compute-0 podman[145861]: 2025-12-13 03:53:27.031708698 +0000 UTC m=+0.028448452 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc42d5fc7986452cb27353800cf1e8e9d04ef0a80a3a0de349b1cdda27874388/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc42d5fc7986452cb27353800cf1e8e9d04ef0a80a3a0de349b1cdda27874388/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc42d5fc7986452cb27353800cf1e8e9d04ef0a80a3a0de349b1cdda27874388/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc42d5fc7986452cb27353800cf1e8e9d04ef0a80a3a0de349b1cdda27874388/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc42d5fc7986452cb27353800cf1e8e9d04ef0a80a3a0de349b1cdda27874388/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:27 compute-0 podman[145861]: 2025-12-13 03:53:27.155202171 +0000 UTC m=+0.151941935 container init e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:53:27 compute-0 sudo[145930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwsxqqopsdjfxsbjgxgtvjmprtjrtjke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598006.8655446-631-25717906782251/AnsiballZ_command.py'
Dec 13 03:53:27 compute-0 podman[145861]: 2025-12-13 03:53:27.168087846 +0000 UTC m=+0.164827590 container start e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 03:53:27 compute-0 podman[145861]: 2025-12-13 03:53:27.172650777 +0000 UTC m=+0.169390531 container attach e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mahavira, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Dec 13 03:53:27 compute-0 sudo[145930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:27 compute-0 python3.9[145934]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:53:27 compute-0 ovs-vsctl[145936]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Dec 13 03:53:27 compute-0 sudo[145930]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:27 compute-0 eager_mahavira[145901]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:53:27 compute-0 eager_mahavira[145901]: --> All data devices are unavailable
Dec 13 03:53:27 compute-0 systemd[1]: libpod-e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65.scope: Deactivated successfully.
Dec 13 03:53:27 compute-0 podman[145861]: 2025-12-13 03:53:27.700055304 +0000 UTC m=+0.696795058 container died e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:53:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-bc42d5fc7986452cb27353800cf1e8e9d04ef0a80a3a0de349b1cdda27874388-merged.mount: Deactivated successfully.
Dec 13 03:53:27 compute-0 podman[145861]: 2025-12-13 03:53:27.75597867 +0000 UTC m=+0.752718404 container remove e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 03:53:27 compute-0 systemd[1]: libpod-conmon-e9d3eaf9200fa28fb3891f4e24058316555b65ef96d0acc9c33f1ae1a58b4e65.scope: Deactivated successfully.
Dec 13 03:53:27 compute-0 sshd-session[133942]: Connection closed by 192.168.122.30 port 48434
Dec 13 03:53:27 compute-0 sshd-session[133939]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:53:27 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Dec 13 03:53:27 compute-0 systemd[1]: session-46.scope: Consumed 1min 278ms CPU time.
Dec 13 03:53:27 compute-0 systemd-logind[796]: Session 46 logged out. Waiting for processes to exit.
Dec 13 03:53:27 compute-0 systemd-logind[796]: Removed session 46.
Dec 13 03:53:27 compute-0 sudo[145530]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:27 compute-0 sudo[145986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:53:27 compute-0 sudo[145986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:27 compute-0 sudo[145986]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:27 compute-0 ceph-mon[75071]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:27 compute-0 sudo[146011]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:53:27 compute-0 sudo[146011]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:28 compute-0 podman[146048]: 2025-12-13 03:53:28.284097714 +0000 UTC m=+0.050137642 container create afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:53:28 compute-0 systemd[1]: Started libpod-conmon-afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b.scope.
Dec 13 03:53:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:53:28 compute-0 podman[146048]: 2025-12-13 03:53:28.262695381 +0000 UTC m=+0.028735329 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:28 compute-0 podman[146048]: 2025-12-13 03:53:28.371922352 +0000 UTC m=+0.137962310 container init afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:53:28 compute-0 podman[146048]: 2025-12-13 03:53:28.379897966 +0000 UTC m=+0.145937894 container start afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:53:28 compute-0 podman[146048]: 2025-12-13 03:53:28.385419764 +0000 UTC m=+0.151459692 container attach afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:53:28 compute-0 confident_goodall[146065]: 167 167
Dec 13 03:53:28 compute-0 systemd[1]: libpod-afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b.scope: Deactivated successfully.
Dec 13 03:53:28 compute-0 podman[146048]: 2025-12-13 03:53:28.387456268 +0000 UTC m=+0.153496196 container died afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:53:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-93a7bbf65e946bbc495b61d2a9035f390be1d8aac084ed1dc05ac0c9329554df-merged.mount: Deactivated successfully.
Dec 13 03:53:28 compute-0 podman[146048]: 2025-12-13 03:53:28.434696722 +0000 UTC m=+0.200736650 container remove afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_goodall, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:53:28 compute-0 systemd[1]: libpod-conmon-afbdffe130181ea6fc5cdce3e9ff4a9d72edb5d7e7a921001a37733fce63bd2b.scope: Deactivated successfully.
Dec 13 03:53:28 compute-0 podman[146089]: 2025-12-13 03:53:28.612334463 +0000 UTC m=+0.049055323 container create 576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:53:28 compute-0 systemd[1]: Started libpod-conmon-576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d.scope.
Dec 13 03:53:28 compute-0 podman[146089]: 2025-12-13 03:53:28.590647542 +0000 UTC m=+0.027368422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f0552118ea4b2142f988e113a4d3a482214732653eadc3f2d884904446026c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f0552118ea4b2142f988e113a4d3a482214732653eadc3f2d884904446026c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f0552118ea4b2142f988e113a4d3a482214732653eadc3f2d884904446026c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9f0552118ea4b2142f988e113a4d3a482214732653eadc3f2d884904446026c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:28 compute-0 podman[146089]: 2025-12-13 03:53:28.702940357 +0000 UTC m=+0.139661227 container init 576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:53:28 compute-0 podman[146089]: 2025-12-13 03:53:28.714927037 +0000 UTC m=+0.151647897 container start 576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:53:28 compute-0 podman[146089]: 2025-12-13 03:53:28.728359787 +0000 UTC m=+0.165080647 container attach 576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:53:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]: {
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:     "0": [
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:         {
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "devices": [
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "/dev/loop3"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             ],
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_name": "ceph_lv0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_size": "21470642176",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "name": "ceph_lv0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "tags": {
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cluster_name": "ceph",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.crush_device_class": "",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.encrypted": "0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.objectstore": "bluestore",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osd_id": "0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.type": "block",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.vdo": "0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.with_tpm": "0"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             },
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "type": "block",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "vg_name": "ceph_vg0"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:         }
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:     ],
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:     "1": [
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:         {
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "devices": [
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "/dev/loop4"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             ],
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_name": "ceph_lv1",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_size": "21470642176",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "name": "ceph_lv1",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "tags": {
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cluster_name": "ceph",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.crush_device_class": "",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.encrypted": "0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.objectstore": "bluestore",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osd_id": "1",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.type": "block",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.vdo": "0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.with_tpm": "0"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             },
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "type": "block",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "vg_name": "ceph_vg1"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:         }
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:     ],
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:     "2": [
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:         {
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "devices": [
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "/dev/loop5"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             ],
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_name": "ceph_lv2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_size": "21470642176",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "name": "ceph_lv2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "tags": {
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.cluster_name": "ceph",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.crush_device_class": "",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.encrypted": "0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.objectstore": "bluestore",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osd_id": "2",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.type": "block",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.vdo": "0",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:                 "ceph.with_tpm": "0"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             },
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "type": "block",
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:             "vg_name": "ceph_vg2"
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:         }
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]:     ]
Dec 13 03:53:29 compute-0 cool_stonebraker[146105]: }
Dec 13 03:53:29 compute-0 systemd[1]: libpod-576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d.scope: Deactivated successfully.
Dec 13 03:53:29 compute-0 podman[146089]: 2025-12-13 03:53:29.068585717 +0000 UTC m=+0.505306577 container died 576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:53:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9f0552118ea4b2142f988e113a4d3a482214732653eadc3f2d884904446026c-merged.mount: Deactivated successfully.
Dec 13 03:53:29 compute-0 podman[146089]: 2025-12-13 03:53:29.338733793 +0000 UTC m=+0.775454653 container remove 576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_stonebraker, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:53:29 compute-0 systemd[1]: libpod-conmon-576a72f4bf2a2e129b8c3e8394dda714841801e542b13f79c94bf25438a6292d.scope: Deactivated successfully.
Dec 13 03:53:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:29 compute-0 sudo[146011]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:29 compute-0 sudo[146126]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:53:29 compute-0 sudo[146126]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:29 compute-0 sudo[146126]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:29 compute-0 sudo[146151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:53:29 compute-0 sudo[146151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:29 compute-0 podman[146187]: 2025-12-13 03:53:29.861229937 +0000 UTC m=+0.045747255 container create d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 03:53:29 compute-0 systemd[1]: Started libpod-conmon-d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499.scope.
Dec 13 03:53:29 compute-0 ceph-mon[75071]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:53:29 compute-0 podman[146187]: 2025-12-13 03:53:29.836886125 +0000 UTC m=+0.021403463 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:29 compute-0 podman[146187]: 2025-12-13 03:53:29.944824612 +0000 UTC m=+0.129341950 container init d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:53:29 compute-0 podman[146187]: 2025-12-13 03:53:29.960020008 +0000 UTC m=+0.144537346 container start d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 03:53:29 compute-0 podman[146187]: 2025-12-13 03:53:29.964407106 +0000 UTC m=+0.148924434 container attach d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:53:29 compute-0 flamboyant_mcclintock[146203]: 167 167
Dec 13 03:53:29 compute-0 systemd[1]: libpod-d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499.scope: Deactivated successfully.
Dec 13 03:53:29 compute-0 podman[146187]: 2025-12-13 03:53:29.967518079 +0000 UTC m=+0.152035417 container died d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:53:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f8525cb0cd80ee1cb281d4234fac24fd40b871fef93c4cc6ad5af6ce0503df1-merged.mount: Deactivated successfully.
Dec 13 03:53:30 compute-0 podman[146187]: 2025-12-13 03:53:30.01056339 +0000 UTC m=+0.195080728 container remove d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:53:30 compute-0 systemd[1]: libpod-conmon-d3741d7e7a0b98528ba5a1262094027c77b1b218922f95b935c8a9e807d43499.scope: Deactivated successfully.
Dec 13 03:53:30 compute-0 podman[146229]: 2025-12-13 03:53:30.155140947 +0000 UTC m=+0.022053361 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:53:30 compute-0 podman[146229]: 2025-12-13 03:53:30.319907095 +0000 UTC m=+0.186819509 container create cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kirch, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:53:30 compute-0 systemd[1]: Started libpod-conmon-cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0.scope.
Dec 13 03:53:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59a91f13c9f32533e7acb1e1205543c39b83062eba1329cb9916a8ab07e3fc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59a91f13c9f32533e7acb1e1205543c39b83062eba1329cb9916a8ab07e3fc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59a91f13c9f32533e7acb1e1205543c39b83062eba1329cb9916a8ab07e3fc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a59a91f13c9f32533e7acb1e1205543c39b83062eba1329cb9916a8ab07e3fc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:53:30 compute-0 podman[146229]: 2025-12-13 03:53:30.538378327 +0000 UTC m=+0.405290811 container init cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 03:53:30 compute-0 podman[146229]: 2025-12-13 03:53:30.55271101 +0000 UTC m=+0.419623404 container start cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:53:30 compute-0 podman[146229]: 2025-12-13 03:53:30.571089511 +0000 UTC m=+0.438001995 container attach cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kirch, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:53:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:31 compute-0 lvm[146324]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:53:31 compute-0 lvm[146324]: VG ceph_vg0 finished
Dec 13 03:53:31 compute-0 lvm[146325]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:53:31 compute-0 lvm[146325]: VG ceph_vg1 finished
Dec 13 03:53:31 compute-0 lvm[146327]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:53:31 compute-0 lvm[146327]: VG ceph_vg2 finished
Dec 13 03:53:31 compute-0 confident_kirch[146246]: {}
Dec 13 03:53:31 compute-0 systemd[1]: libpod-cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0.scope: Deactivated successfully.
Dec 13 03:53:31 compute-0 systemd[1]: libpod-cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0.scope: Consumed 1.510s CPU time.
Dec 13 03:53:31 compute-0 podman[146229]: 2025-12-13 03:53:31.455174817 +0000 UTC m=+1.322087211 container died cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 03:53:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-a59a91f13c9f32533e7acb1e1205543c39b83062eba1329cb9916a8ab07e3fc1-merged.mount: Deactivated successfully.
Dec 13 03:53:31 compute-0 podman[146229]: 2025-12-13 03:53:31.50690501 +0000 UTC m=+1.373817404 container remove cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_kirch, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:53:31 compute-0 systemd[1]: libpod-conmon-cdc97ea085aea5d6aac389c3659df7c1e0341374994dffa7bc54447bf963b8e0.scope: Deactivated successfully.
Dec 13 03:53:31 compute-0 sudo[146151]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:53:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:53:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:53:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:53:31 compute-0 sudo[146341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:53:31 compute-0 sudo[146341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:53:31 compute-0 sudo[146341]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:31 compute-0 ceph-mon[75071]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:53:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:53:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:33 compute-0 sshd-session[146366]: Accepted publickey for zuul from 192.168.122.30 port 48090 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:53:33 compute-0 systemd-logind[796]: New session 48 of user zuul.
Dec 13 03:53:33 compute-0 systemd[1]: Started Session 48 of User zuul.
Dec 13 03:53:33 compute-0 sshd-session[146366]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:53:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:35 compute-0 systemd[1]: Stopping User Manager for UID 0...
Dec 13 03:53:35 compute-0 systemd[145232]: Activating special unit Exit the Session...
Dec 13 03:53:35 compute-0 systemd[145232]: Stopped target Main User Target.
Dec 13 03:53:35 compute-0 systemd[145232]: Stopped target Basic System.
Dec 13 03:53:35 compute-0 systemd[145232]: Stopped target Paths.
Dec 13 03:53:35 compute-0 systemd[145232]: Stopped target Sockets.
Dec 13 03:53:35 compute-0 systemd[145232]: Stopped target Timers.
Dec 13 03:53:35 compute-0 systemd[145232]: Stopped Daily Cleanup of User's Temporary Directories.
Dec 13 03:53:35 compute-0 systemd[145232]: Closed D-Bus User Message Bus Socket.
Dec 13 03:53:35 compute-0 systemd[145232]: Stopped Create User's Volatile Files and Directories.
Dec 13 03:53:35 compute-0 systemd[145232]: Removed slice User Application Slice.
Dec 13 03:53:35 compute-0 systemd[145232]: Reached target Shutdown.
Dec 13 03:53:35 compute-0 systemd[145232]: Finished Exit the Session.
Dec 13 03:53:35 compute-0 systemd[145232]: Reached target Exit the Session.
Dec 13 03:53:35 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Dec 13 03:53:35 compute-0 systemd[1]: Stopped User Manager for UID 0.
Dec 13 03:53:35 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec 13 03:53:35 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec 13 03:53:35 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec 13 03:53:35 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec 13 03:53:35 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Dec 13 03:53:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:35 compute-0 ceph-mon[75071]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:35 compute-0 ceph-mon[75071]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:35 compute-0 python3.9[146519]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:53:36 compute-0 sudo[146675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owcnswgszmyxmzuoaybpaxmbhagtdikf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598016.1616697-34-104046547102760/AnsiballZ_file.py'
Dec 13 03:53:36 compute-0 sudo[146675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:36 compute-0 python3.9[146677]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:36 compute-0 sudo[146675]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:37 compute-0 sudo[146827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awamhgaqpqvnacdwrpjuixxayfkhujhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598016.9856744-34-157940340918589/AnsiballZ_file.py'
Dec 13 03:53:37 compute-0 sudo[146827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:37 compute-0 python3.9[146829]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:37 compute-0 sudo[146827]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:38 compute-0 sudo[146979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djyzccfqaaeokdftzkefdawgbcqwiyvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598017.7210045-34-60481883607932/AnsiballZ_file.py'
Dec 13 03:53:38 compute-0 sudo[146979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:53:40
Dec 13 03:53:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:53:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:53:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['vms', 'images', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'backups', 'default.rgw.control', 'default.rgw.log', 'default.rgw.meta']
Dec 13 03:53:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:53:40 compute-0 python3.9[146981]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:40 compute-0 sudo[146979]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:40 compute-0 sudo[147132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewtrgdrluntfqxrlfsbjsttrxxumelrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598020.6925156-34-183449041218831/AnsiballZ_file.py'
Dec 13 03:53:40 compute-0 sudo[147132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:41 compute-0 python3.9[147134]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:41 compute-0 sudo[147132]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:41 compute-0 sudo[147284]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfyjxcthtlhhyaghlfqxhqtvmgkucsrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598021.3034124-34-126158432972593/AnsiballZ_file.py'
Dec 13 03:53:41 compute-0 sudo[147284]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:41 compute-0 python3.9[147286]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:41 compute-0 sudo[147284]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:53:42 compute-0 ceph-mds[95635]: mds.beacon.cephfs.compute-0.bszvvn missed beacon ack from the monitors
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:53:42 compute-0 python3.9[147436]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:53:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:43 compute-0 ceph-mon[75071]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:43 compute-0 sudo[147587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byzgtzcmuhyivabrlwwhmposaxropfxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598022.8098986-78-103527686745934/AnsiballZ_seboolean.py'
Dec 13 03:53:43 compute-0 sudo[147587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:43 compute-0 python3.9[147589]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Dec 13 03:53:44 compute-0 sudo[147587]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:44 compute-0 ceph-mon[75071]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:44 compute-0 ceph-mon[75071]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:44 compute-0 ceph-mon[75071]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:45 compute-0 python3.9[147739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:45 compute-0 ceph-mon[75071]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:45 compute-0 python3.9[147860]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598024.5073304-86-71593604618550/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:46 compute-0 python3.9[148010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:46 compute-0 python3.9[148131]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598025.971015-101-68546170865881/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:47 compute-0 sudo[148281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkuirbyfxdlqkyexobqrqqweuesctqgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598027.2180538-118-96985849574738/AnsiballZ_setup.py'
Dec 13 03:53:47 compute-0 sudo[148281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:47 compute-0 python3.9[148283]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:53:48 compute-0 sudo[148281]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:48 compute-0 ceph-mon[75071]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:48 compute-0 sudo[148365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmuyizycwiifufqeyjhoxzhcqwaogbbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598027.2180538-118-96985849574738/AnsiballZ_dnf.py'
Dec 13 03:53:48 compute-0 sudo[148365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:48 compute-0 python3.9[148367]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:53:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:50 compute-0 sudo[148365]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:50 compute-0 ceph-mon[75071]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:50 compute-0 sudo[148518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfytokydrzreysuuehvpvrldvoudgzkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598030.284312-130-75235268621470/AnsiballZ_systemd.py'
Dec 13 03:53:50 compute-0 sudo[148518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:51 compute-0 python3.9[148520]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:53:51 compute-0 sudo[148518]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:51 compute-0 ceph-mon[75071]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:51 compute-0 python3.9[148673]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.949559) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598031949634, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 810, "num_deletes": 251, "total_data_size": 1108662, "memory_usage": 1127440, "flush_reason": "Manual Compaction"}
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598031966510, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 1098944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8973, "largest_seqno": 9782, "table_properties": {"data_size": 1094824, "index_size": 1836, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1221, "raw_key_size": 8739, "raw_average_key_size": 18, "raw_value_size": 1086592, "raw_average_value_size": 2321, "num_data_blocks": 85, "num_entries": 468, "num_filter_entries": 468, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597954, "oldest_key_time": 1765597954, "file_creation_time": 1765598031, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 17026 microseconds, and 4950 cpu microseconds.
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.966585) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 1098944 bytes OK
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.966614) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.968098) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.968122) EVENT_LOG_v1 {"time_micros": 1765598031968109, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.968140) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 1104613, prev total WAL file size 1104613, number of live WAL files 2.
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.968895) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(1073KB)], [23(7000KB)]
Dec 13 03:53:51 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598031969009, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8267620, "oldest_snapshot_seqno": -1}
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3347 keys, 6470528 bytes, temperature: kUnknown
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598032028495, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6470528, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6445935, "index_size": 15140, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8389, "raw_key_size": 81183, "raw_average_key_size": 24, "raw_value_size": 6383168, "raw_average_value_size": 1907, "num_data_blocks": 660, "num_entries": 3347, "num_filter_entries": 3347, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765598031, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.028880) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6470528 bytes
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.030342) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 138.7 rd, 108.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 6.8 +0.0 blob) out(6.2 +0.0 blob), read-write-amplify(13.4) write-amplify(5.9) OK, records in: 3861, records dropped: 514 output_compression: NoCompression
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.030371) EVENT_LOG_v1 {"time_micros": 1765598032030354, "job": 8, "event": "compaction_finished", "compaction_time_micros": 59607, "compaction_time_cpu_micros": 21003, "output_level": 6, "num_output_files": 1, "total_output_size": 6470528, "num_input_records": 3861, "num_output_records": 3347, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598032030750, "job": 8, "event": "table_file_deletion", "file_number": 25}
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598032032597, "job": 8, "event": "table_file_deletion", "file_number": 23}
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:51.968690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.032694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.032702) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.032703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.032705) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:52 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:53:52.032707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:53:52 compute-0 python3.9[148794]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598031.4402668-138-188846882820643/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:53 compute-0 python3.9[148944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:53 compute-0 python3.9[149065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598032.5806072-138-46361454794638/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:53 compute-0 ceph-mon[75071]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:54 compute-0 python3.9[149215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:55 compute-0 ovn_controller[145204]: 2025-12-13T03:53:55Z|00025|memory|INFO|16384 kB peak resident set size after 30.0 seconds
Dec 13 03:53:55 compute-0 ovn_controller[145204]: 2025-12-13T03:53:55Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec 13 03:53:55 compute-0 podman[149310]: 2025-12-13 03:53:55.169656756 +0000 UTC m=+0.116090395 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 13 03:53:55 compute-0 python3.9[149346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598034.2584662-182-132285207952134/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:55 compute-0 python3.9[149513]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:55 compute-0 ceph-mon[75071]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:56 compute-0 python3.9[149634]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598035.4395747-182-136863426761752/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:57 compute-0 python3.9[149784]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:53:57 compute-0 sudo[149936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujawcztauaxqhluztniqitsheljrgenh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598037.3121538-220-89594812481231/AnsiballZ_file.py'
Dec 13 03:53:57 compute-0 sudo[149936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:57 compute-0 python3.9[149938]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:57 compute-0 sudo[149936]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:58 compute-0 ceph-mon[75071]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:53:58 compute-0 sudo[150088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xilcovanorrmnwxaaifbicbldubfsdzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598037.9798832-228-216073724865546/AnsiballZ_stat.py'
Dec 13 03:53:58 compute-0 sudo[150088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:58 compute-0 python3.9[150090]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:58 compute-0 sudo[150088]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:58 compute-0 sudo[150166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eghlpcnfgnleyoqzjgqktgbikanvlfzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598037.9798832-228-216073724865546/AnsiballZ_file.py'
Dec 13 03:53:58 compute-0 sudo[150166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:53:58 compute-0 python3.9[150168]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:58 compute-0 sudo[150166]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:59 compute-0 sudo[150318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lghwhvwubybtssarxbfjwggredoqtscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598039.0838966-228-245066487103926/AnsiballZ_stat.py'
Dec 13 03:53:59 compute-0 sudo[150318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:59 compute-0 python3.9[150320]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:53:59 compute-0 sudo[150318]: pam_unix(sudo:session): session closed for user root
Dec 13 03:53:59 compute-0 sudo[150396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwafvbwusokjiufarqzewjcdvlchbhsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598039.0838966-228-245066487103926/AnsiballZ_file.py'
Dec 13 03:53:59 compute-0 sudo[150396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:53:59 compute-0 python3.9[150398]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:53:59 compute-0 sudo[150396]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:00 compute-0 ceph-mon[75071]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:00 compute-0 sudo[150548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sudrvpqbqzlfwqhboczavfedupcueest ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598040.1443179-251-59334638311543/AnsiballZ_file.py'
Dec 13 03:54:00 compute-0 sudo[150548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:00 compute-0 python3.9[150550]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:00 compute-0 sudo[150548]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:01 compute-0 sudo[150700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xksbtngwnrskmavfgiiqvjffiaombrjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598040.7794466-259-105749732294104/AnsiballZ_stat.py'
Dec 13 03:54:01 compute-0 sudo[150700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:01 compute-0 python3.9[150702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:54:01 compute-0 sudo[150700]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:01 compute-0 sudo[150778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krpeieudhfrfgyytrxdfvnyyhbrcovyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598040.7794466-259-105749732294104/AnsiballZ_file.py'
Dec 13 03:54:01 compute-0 sudo[150778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:01 compute-0 python3.9[150780]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:01 compute-0 sudo[150778]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:02 compute-0 ceph-mon[75071]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:02 compute-0 sudo[150930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlzxzjhnpscaggefniviygvrblcywmls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598041.9030285-271-31083103177851/AnsiballZ_stat.py'
Dec 13 03:54:02 compute-0 sudo[150930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:02 compute-0 python3.9[150932]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:54:02 compute-0 sudo[150930]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:02 compute-0 sudo[151008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyhjtbhudnludujgengmyzwpmefglzwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598041.9030285-271-31083103177851/AnsiballZ_file.py'
Dec 13 03:54:02 compute-0 sudo[151008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:02 compute-0 python3.9[151010]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:02 compute-0 sudo[151008]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:03 compute-0 sudo[151160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijowtthxnckfannxrxymnyssydzyahhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598042.9721518-283-120366428489362/AnsiballZ_systemd.py'
Dec 13 03:54:03 compute-0 sudo[151160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:03 compute-0 python3.9[151162]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:03 compute-0 systemd[1]: Reloading.
Dec 13 03:54:03 compute-0 systemd-rc-local-generator[151189]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:54:03 compute-0 systemd-sysv-generator[151192]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:54:03 compute-0 sudo[151160]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:04 compute-0 ceph-mon[75071]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:04 compute-0 sudo[151348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmkxzwtonpdcnensgiwuhiqqwutwcdmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598044.066424-291-66543701022171/AnsiballZ_stat.py'
Dec 13 03:54:04 compute-0 sudo[151348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:04 compute-0 python3.9[151350]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:54:04 compute-0 sudo[151348]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:04 compute-0 sudo[151426]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cenhhtwjsetkpfhyslqdmnzmnpaauezv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598044.066424-291-66543701022171/AnsiballZ_file.py'
Dec 13 03:54:04 compute-0 sudo[151426]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:05 compute-0 python3.9[151428]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:05 compute-0 sudo[151426]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:05 compute-0 sudo[151578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awtrgrfrcuiwowfhnfzpimypfjgravhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598045.1917267-303-94153067082239/AnsiballZ_stat.py'
Dec 13 03:54:05 compute-0 sudo[151578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:05 compute-0 python3.9[151580]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:54:05 compute-0 sudo[151578]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:05 compute-0 ceph-mon[75071]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:05 compute-0 sudo[151656]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsbwdxzagpmuytkzwfpwgtlqlsvkzoij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598045.1917267-303-94153067082239/AnsiballZ_file.py'
Dec 13 03:54:05 compute-0 sudo[151656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:06 compute-0 python3.9[151658]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:06 compute-0 sudo[151656]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:06 compute-0 sudo[151808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpzledwwephhoyrjqkuqpeedeqzxewup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598046.3098564-315-23306997920989/AnsiballZ_systemd.py'
Dec 13 03:54:06 compute-0 sudo[151808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:06 compute-0 python3.9[151810]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:06 compute-0 systemd[1]: Reloading.
Dec 13 03:54:07 compute-0 systemd-rc-local-generator[151837]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:54:07 compute-0 systemd-sysv-generator[151841]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:54:07 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 03:54:07 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 03:54:07 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 03:54:07 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 03:54:07 compute-0 sudo[151808]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:07 compute-0 sudo[152001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqtilfqmloihvmkwiixlkdmrmyyumbsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598047.5849445-325-66867404090732/AnsiballZ_file.py'
Dec 13 03:54:07 compute-0 sudo[152001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:07 compute-0 ceph-mon[75071]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:08 compute-0 python3.9[152003]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:54:08 compute-0 sudo[152001]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:08 compute-0 sudo[152153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdzclykrmtsnrcdnwhijawjaihhmsprx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598048.2695587-333-68298729245729/AnsiballZ_stat.py'
Dec 13 03:54:08 compute-0 sudo[152153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:08 compute-0 python3.9[152155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:54:08 compute-0 sudo[152153]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:09 compute-0 sudo[152276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqaobkhmokyckuvfqeiqqxajcblsvvxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598048.2695587-333-68298729245729/AnsiballZ_copy.py'
Dec 13 03:54:09 compute-0 sudo[152276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:09 compute-0 python3.9[152278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598048.2695587-333-68298729245729/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:54:09 compute-0 sudo[152276]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:09 compute-0 sudo[152428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbannkuyxnaxaijtjfobmzwqlpgfqeyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598049.65883-350-67703214007295/AnsiballZ_file.py'
Dec 13 03:54:09 compute-0 sudo[152428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:10 compute-0 ceph-mon[75071]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:10 compute-0 python3.9[152430]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:54:10 compute-0 sudo[152428]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:10 compute-0 sudo[152580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttngfuiliovypkrimmyflgljwfqxbppf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598050.4191942-358-249354865203788/AnsiballZ_stat.py'
Dec 13 03:54:10 compute-0 sudo[152580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:10 compute-0 python3.9[152582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:54:10 compute-0 sudo[152580]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:11 compute-0 sudo[152703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfcnzemkpkauhnhiydvhghufskjvczsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598050.4191942-358-249354865203788/AnsiballZ_copy.py'
Dec 13 03:54:11 compute-0 sudo[152703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:11 compute-0 python3.9[152705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598050.4191942-358-249354865203788/.source.json _original_basename=.rcc6yyuk follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:11 compute-0 sudo[152703]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:11 compute-0 sudo[152855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvqgyzajmtrhwyzgqwthemoboapofuwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598051.6252177-373-272750115206893/AnsiballZ_file.py'
Dec 13 03:54:11 compute-0 sudo[152855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:12 compute-0 ceph-mon[75071]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:12 compute-0 python3.9[152857]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:12 compute-0 sudo[152855]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:12 compute-0 sudo[153007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhslsmnfsajndjeuslchktsmzsfuvaud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598052.3176596-381-42538591797971/AnsiballZ_stat.py'
Dec 13 03:54:12 compute-0 sudo[153007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:12 compute-0 sudo[153007]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:13 compute-0 sudo[153130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcgiyyhzqtngebpypehsrhtdzexeujhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598052.3176596-381-42538591797971/AnsiballZ_copy.py'
Dec 13 03:54:13 compute-0 sudo[153130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:13 compute-0 sudo[153130]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:14 compute-0 sudo[153282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxdyhgcofbkjcvbabtxvxtkgkjphgwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598053.6016734-398-139815309406326/AnsiballZ_container_config_data.py'
Dec 13 03:54:14 compute-0 sudo[153282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:14 compute-0 ceph-mon[75071]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:14 compute-0 python3.9[153284]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec 13 03:54:14 compute-0 sudo[153282]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:14 compute-0 sudo[153434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axpjyioxoascytsawshctqumwesjynkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598054.4814258-407-378691793200/AnsiballZ_container_config_hash.py'
Dec 13 03:54:14 compute-0 sudo[153434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:15 compute-0 python3.9[153436]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 03:54:15 compute-0 sudo[153434]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:15 compute-0 sudo[153586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkgrthngzyvyhdlvtuuncdujyadgmxsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598055.3676054-416-29422783302213/AnsiballZ_podman_container_info.py'
Dec 13 03:54:15 compute-0 sudo[153586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:16 compute-0 python3.9[153588]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 03:54:16 compute-0 ceph-mon[75071]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:16 compute-0 sudo[153586]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:17 compute-0 sudo[153764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keyhjnuwdowbsrdrovazhkiuhfiypnyx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765598056.923651-429-200774549722009/AnsiballZ_edpm_container_manage.py'
Dec 13 03:54:17 compute-0 sudo[153764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:17 compute-0 python3[153766]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 03:54:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:19 compute-0 ceph-mon[75071]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:20 compute-0 ceph-mon[75071]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:21 compute-0 ceph-mon[75071]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:24 compute-0 ceph-mon[75071]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:27 compute-0 ceph-mon[75071]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:27 compute-0 podman[153850]: 2025-12-13 03:54:27.28500192 +0000 UTC m=+1.423986114 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 03:54:27 compute-0 podman[153779]: 2025-12-13 03:54:27.492392954 +0000 UTC m=+9.721157426 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 03:54:27 compute-0 podman[153925]: 2025-12-13 03:54:27.640926319 +0000 UTC m=+0.054150464 container create e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:27 compute-0 podman[153925]: 2025-12-13 03:54:27.611211232 +0000 UTC m=+0.024435397 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 03:54:27 compute-0 python3[153766]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 03:54:27 compute-0 sudo[153764]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:28 compute-0 ceph-mon[75071]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:28 compute-0 sudo[154114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uyfjtpgivavpxrbvbhdzzvkveildvtln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598067.9752998-437-213635754021369/AnsiballZ_stat.py'
Dec 13 03:54:28 compute-0 sudo[154114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:28 compute-0 python3.9[154116]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:54:28 compute-0 sudo[154114]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:29 compute-0 sudo[154268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfvfxytggxeeiduvchzmktoulwvehqjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598068.7390344-446-41623432364497/AnsiballZ_file.py'
Dec 13 03:54:29 compute-0 sudo[154268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:29 compute-0 python3.9[154270]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:29 compute-0 sudo[154268]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:29 compute-0 sudo[154344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byynolkycllommxcnkryemkkyyeqdcnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598068.7390344-446-41623432364497/AnsiballZ_stat.py'
Dec 13 03:54:29 compute-0 sudo[154344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:29 compute-0 python3.9[154346]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:54:29 compute-0 sudo[154344]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:30 compute-0 ceph-mon[75071]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:30 compute-0 sudo[154495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvxrwpujctvkadddwfkznikxjviwwazc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598069.7993593-446-222393596991521/AnsiballZ_copy.py'
Dec 13 03:54:30 compute-0 sudo[154495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:30 compute-0 python3.9[154497]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765598069.7993593-446-222393596991521/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:30 compute-0 sudo[154495]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:30 compute-0 sudo[154571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnlxmxixszwisohkxuiaxpyporqclkny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598069.7993593-446-222393596991521/AnsiballZ_systemd.py'
Dec 13 03:54:30 compute-0 sudo[154571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:31 compute-0 python3.9[154573]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 03:54:31 compute-0 systemd[1]: Reloading.
Dec 13 03:54:31 compute-0 systemd-rc-local-generator[154599]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:54:31 compute-0 systemd-sysv-generator[154603]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:54:31 compute-0 sudo[154571]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:31 compute-0 sudo[154682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whjfvqotpwniaiktqzhaaucohziffupr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598069.7993593-446-222393596991521/AnsiballZ_systemd.py'
Dec 13 03:54:31 compute-0 sudo[154682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:31 compute-0 sudo[154683]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:54:31 compute-0 sudo[154683]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:31 compute-0 sudo[154683]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:31 compute-0 sudo[154710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 03:54:31 compute-0 sudo[154710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:31 compute-0 python3.9[154690]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:32 compute-0 systemd[1]: Reloading.
Dec 13 03:54:32 compute-0 systemd-rc-local-generator[154778]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:54:32 compute-0 systemd-sysv-generator[154781]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:54:32 compute-0 ceph-mon[75071]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:32 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Dec 13 03:54:32 compute-0 sudo[154710]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:54:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:54:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2fc3f5e56df7141ccaab238bae3fb877f291045c240986ce4fc813f62b83d9/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa2fc3f5e56df7141ccaab238bae3fb877f291045c240986ce4fc813f62b83d9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670.
Dec 13 03:54:32 compute-0 sudo[154813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:54:32 compute-0 sudo[154813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:32 compute-0 sudo[154813]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:32 compute-0 podman[154794]: 2025-12-13 03:54:32.623377832 +0000 UTC m=+0.246911545 container init e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + sudo -E kolla_set_configs
Dec 13 03:54:32 compute-0 podman[154794]: 2025-12-13 03:54:32.655026591 +0000 UTC m=+0.278560294 container start e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 03:54:32 compute-0 sudo[154840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:54:32 compute-0 sudo[154840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Validating config file
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Copying service configuration files
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Writing out command to execute
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /var/lib/neutron
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: ++ cat /run_command
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + CMD=neutron-ovn-metadata-agent
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + ARGS=
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + sudo kolla_copy_cacerts
Dec 13 03:54:32 compute-0 edpm-start-podman-container[154794]: ovn_metadata_agent
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: Running command: 'neutron-ovn-metadata-agent'
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + [[ ! -n '' ]]
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + . kolla_extend_start
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + umask 0022
Dec 13 03:54:32 compute-0 ovn_metadata_agent[154810]: + exec neutron-ovn-metadata-agent
Dec 13 03:54:32 compute-0 podman[154846]: 2025-12-13 03:54:32.806932127 +0000 UTC m=+0.139420572 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Dec 13 03:54:32 compute-0 edpm-start-podman-container[154791]: Creating additional drop-in dependency for "ovn_metadata_agent" (e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670)
Dec 13 03:54:32 compute-0 systemd[1]: Reloading.
Dec 13 03:54:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:32 compute-0 systemd-sysv-generator[154942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:54:32 compute-0 systemd-rc-local-generator[154939]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:54:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:33 compute-0 systemd[1]: Started ovn_metadata_agent container.
Dec 13 03:54:33 compute-0 sudo[154682]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:33 compute-0 sudo[154840]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:54:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:54:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:54:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:54:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:54:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:54:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:54:33 compute-0 sudo[155004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:54:33 compute-0 sudo[155004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:33 compute-0 sudo[155004]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:33 compute-0 ceph-mon[75071]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:54:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:54:33 compute-0 sudo[155029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:54:33 compute-0 sudo[155029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:33 compute-0 sshd-session[146369]: Connection closed by 192.168.122.30 port 48090
Dec 13 03:54:33 compute-0 sshd-session[146366]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:54:33 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Dec 13 03:54:33 compute-0 systemd[1]: session-48.scope: Consumed 57.971s CPU time.
Dec 13 03:54:33 compute-0 systemd-logind[796]: Session 48 logged out. Waiting for processes to exit.
Dec 13 03:54:33 compute-0 systemd-logind[796]: Removed session 48.
Dec 13 03:54:33 compute-0 podman[155067]: 2025-12-13 03:54:33.901250125 +0000 UTC m=+0.071965572 container create e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_raman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:54:33 compute-0 podman[155067]: 2025-12-13 03:54:33.858455577 +0000 UTC m=+0.029171054 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:33 compute-0 systemd[1]: Started libpod-conmon-e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141.scope.
Dec 13 03:54:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:54:34 compute-0 podman[155067]: 2025-12-13 03:54:34.006249733 +0000 UTC m=+0.176965200 container init e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_raman, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 03:54:34 compute-0 podman[155067]: 2025-12-13 03:54:34.014900475 +0000 UTC m=+0.185615942 container start e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:54:34 compute-0 podman[155067]: 2025-12-13 03:54:34.021096001 +0000 UTC m=+0.191811458 container attach e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_raman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:54:34 compute-0 youthful_raman[155084]: 167 167
Dec 13 03:54:34 compute-0 systemd[1]: libpod-e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141.scope: Deactivated successfully.
Dec 13 03:54:34 compute-0 podman[155067]: 2025-12-13 03:54:34.024560913 +0000 UTC m=+0.195276370 container died e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_raman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:54:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd00adca36fe1f81cf567c71839f6a4385127682accc1e8b18e8a77689572c2b-merged.mount: Deactivated successfully.
Dec 13 03:54:34 compute-0 podman[155067]: 2025-12-13 03:54:34.08219976 +0000 UTC m=+0.252915207 container remove e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_raman, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 03:54:34 compute-0 systemd[1]: libpod-conmon-e650dac5c4cf57c4c20c38aa1e4229ec1ded109234cedf24b781c4ff15a0a141.scope: Deactivated successfully.
Dec 13 03:54:34 compute-0 podman[155107]: 2025-12-13 03:54:34.267708677 +0000 UTC m=+0.060065832 container create ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_snyder, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:54:34 compute-0 podman[155107]: 2025-12-13 03:54:34.233234372 +0000 UTC m=+0.025591547 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:34 compute-0 systemd[1]: Started libpod-conmon-ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5.scope.
Dec 13 03:54:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565dbe3734578cef34721773b2b97ce157cfaa958eabfefbb39f05d20f6650b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565dbe3734578cef34721773b2b97ce157cfaa958eabfefbb39f05d20f6650b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565dbe3734578cef34721773b2b97ce157cfaa958eabfefbb39f05d20f6650b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565dbe3734578cef34721773b2b97ce157cfaa958eabfefbb39f05d20f6650b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/565dbe3734578cef34721773b2b97ce157cfaa958eabfefbb39f05d20f6650b0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:34 compute-0 podman[155107]: 2025-12-13 03:54:34.429116628 +0000 UTC m=+0.221473813 container init ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 03:54:34 compute-0 podman[155107]: 2025-12-13 03:54:34.438760646 +0000 UTC m=+0.231117801 container start ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_snyder, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:34 compute-0 podman[155107]: 2025-12-13 03:54:34.464433545 +0000 UTC m=+0.256790720 container attach ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:54:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:34 compute-0 loving_snyder[155124]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:54:34 compute-0 loving_snyder[155124]: --> All data devices are unavailable
Dec 13 03:54:34 compute-0 systemd[1]: libpod-ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5.scope: Deactivated successfully.
Dec 13 03:54:34 compute-0 podman[155107]: 2025-12-13 03:54:34.967409329 +0000 UTC m=+0.759766504 container died ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_snyder, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.008 154842 INFO neutron.common.config [-] Logging enabled!
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.009 154842 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.009 154842 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.010 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.010 154842 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.010 154842 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.010 154842 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.010 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.010 154842 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.011 154842 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.012 154842 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.012 154842 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.012 154842 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.012 154842 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.012 154842 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.012 154842 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-565dbe3734578cef34721773b2b97ce157cfaa958eabfefbb39f05d20f6650b0-merged.mount: Deactivated successfully.
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.013 154842 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.014 154842 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.014 154842 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.014 154842 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.014 154842 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.014 154842 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.014 154842 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.014 154842 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.015 154842 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.015 154842 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.015 154842 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.015 154842 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.015 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.016 154842 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.017 154842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.017 154842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.017 154842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.017 154842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.017 154842 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.017 154842 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.017 154842 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.018 154842 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.018 154842 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.018 154842 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.018 154842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.018 154842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.018 154842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.018 154842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.019 154842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.019 154842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.019 154842 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.019 154842 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.019 154842 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.019 154842 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.019 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.020 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.020 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.020 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.020 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.020 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.020 154842 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.020 154842 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.021 154842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.022 154842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.022 154842 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.022 154842 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.022 154842 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.022 154842 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.022 154842 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.022 154842 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.023 154842 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.024 154842 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.024 154842 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.024 154842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.024 154842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.024 154842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.024 154842 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.025 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.025 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.025 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.025 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.025 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.025 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.025 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.026 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.026 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.026 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.026 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.026 154842 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.026 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.026 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.027 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.027 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.027 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.027 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.027 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.027 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.027 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.028 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.028 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.028 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.028 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.028 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.028 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.028 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.029 154842 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.029 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.029 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.029 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.029 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.029 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.029 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.030 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.030 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.030 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.030 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.030 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.030 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.030 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.031 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.031 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.031 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.031 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.031 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.031 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.031 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.032 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.032 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.032 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.032 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.032 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.032 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.032 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.033 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.033 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.033 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.033 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.033 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.033 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.033 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.034 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.035 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.035 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.035 154842 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.035 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.035 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.035 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.035 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.036 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.036 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.036 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.036 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.036 154842 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.036 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.036 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.037 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.037 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.037 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.037 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.037 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.037 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.037 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.038 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.038 154842 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.038 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.038 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.038 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.038 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.038 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.039 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.040 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.040 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.040 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.040 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.040 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.040 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.040 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.041 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.041 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.041 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.041 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.041 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.041 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.041 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.042 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.042 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.042 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.042 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.042 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.042 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.042 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.043 154842 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.043 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.043 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.043 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.043 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.043 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.043 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.044 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.044 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.044 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.044 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.044 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.044 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.044 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.045 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.045 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.045 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.045 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.045 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.045 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.045 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.046 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.046 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.046 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.046 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.046 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.046 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.046 154842 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.047 154842 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.047 154842 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.047 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.047 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.047 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.047 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.047 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.048 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.048 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.048 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.048 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.048 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.048 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.048 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.049 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.049 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.049 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.049 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.049 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.049 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.050 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.050 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.050 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.050 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.050 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.050 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.050 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.051 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.051 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.051 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.051 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.051 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.051 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.051 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.052 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.052 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.052 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.052 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.052 154842 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.052 154842 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 03:54:35 compute-0 podman[155107]: 2025-12-13 03:54:35.061799191 +0000 UTC m=+0.854156346 container remove ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.063 154842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.064 154842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.064 154842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.064 154842 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.064 154842 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Dec 13 03:54:35 compute-0 systemd[1]: libpod-conmon-ace9bb5c1d6e1542ab5cefa9a86fe47210c45053c778bf00fe4a681a468095e5.scope: Deactivated successfully.
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.079 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 9c764fca-6428-461c-aead-7964805997a5 (UUID: 9c764fca-6428-461c-aead-7964805997a5) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.100 154842 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.101 154842 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.101 154842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.101 154842 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.105 154842 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 03:54:35 compute-0 sudo[155029]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.113 154842 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.120 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '9c764fca-6428-461c-aead-7964805997a5'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], external_ids={}, name=9c764fca-6428-461c-aead-7964805997a5, nb_cfg_timestamp=1765598013143, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.122 154842 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fd7e68a3df0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.123 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.123 154842 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.124 154842 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.124 154842 INFO oslo_service.service [-] Starting 1 workers
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.130 154842 DEBUG oslo_service.service [-] Started child 155161 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.134 154842 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpb0_q3bl_/privsep.sock']
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.135 155161 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-244215'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.161 155161 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.162 155161 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.162 155161 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.165 155161 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.171 155161 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec 13 03:54:35 compute-0 sudo[155158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.178 155161 INFO eventlet.wsgi.server [-] (155161) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Dec 13 03:54:35 compute-0 sudo[155158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:35 compute-0 sudo[155158]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:35 compute-0 sudo[155187]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:54:35 compute-0 sudo[155187]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:35 compute-0 podman[155225]: 2025-12-13 03:54:35.555756234 +0000 UTC m=+0.048544024 container create 57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:54:35 compute-0 systemd[1]: Started libpod-conmon-57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096.scope.
Dec 13 03:54:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:54:35 compute-0 podman[155225]: 2025-12-13 03:54:35.530615929 +0000 UTC m=+0.023403749 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:35 compute-0 podman[155225]: 2025-12-13 03:54:35.640069316 +0000 UTC m=+0.132857136 container init 57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:54:35 compute-0 podman[155225]: 2025-12-13 03:54:35.649519889 +0000 UTC m=+0.142307689 container start 57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermat, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:54:35 compute-0 practical_fermat[155241]: 167 167
Dec 13 03:54:35 compute-0 systemd[1]: libpod-57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096.scope: Deactivated successfully.
Dec 13 03:54:35 compute-0 conmon[155241]: conmon 57a959fffa6fb24ab234 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096.scope/container/memory.events
Dec 13 03:54:35 compute-0 podman[155225]: 2025-12-13 03:54:35.658956462 +0000 UTC m=+0.151744282 container attach 57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermat, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:54:35 compute-0 podman[155225]: 2025-12-13 03:54:35.659567469 +0000 UTC m=+0.152355269 container died 57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:54:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-6e1519a2f150a13d40b4d0d9813a83fcaf1636bdcb88538e79631ed36e0fa0a3-merged.mount: Deactivated successfully.
Dec 13 03:54:35 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec 13 03:54:35 compute-0 podman[155225]: 2025-12-13 03:54:35.800948871 +0000 UTC m=+0.293736671 container remove 57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_fermat, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:54:35 compute-0 systemd[1]: libpod-conmon-57a959fffa6fb24ab2346386fbc5770e71614dad6002be10ee0072ec7a428096.scope: Deactivated successfully.
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.936 154842 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.937 154842 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpb0_q3bl_/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.752 155258 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.756 155258 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.758 155258 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.758 155258 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155258
Dec 13 03:54:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:35.942 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[85e9489c-c9d8-4af9-970c-202db9717926]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:54:36 compute-0 ceph-mon[75071]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:36 compute-0 podman[155266]: 2025-12-13 03:54:35.958007135 +0000 UTC m=+0.025704291 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:36 compute-0 podman[155266]: 2025-12-13 03:54:36.27168279 +0000 UTC m=+0.339379916 container create 3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:54:36 compute-0 systemd[1]: Started libpod-conmon-3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295.scope.
Dec 13 03:54:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec5820c4abbe37875beb1ffd215e8da2a6d8c593bc1ee3bea22396ae6d0cb9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec5820c4abbe37875beb1ffd215e8da2a6d8c593bc1ee3bea22396ae6d0cb9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec5820c4abbe37875beb1ffd215e8da2a6d8c593bc1ee3bea22396ae6d0cb9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2eec5820c4abbe37875beb1ffd215e8da2a6d8c593bc1ee3bea22396ae6d0cb9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:36 compute-0 podman[155266]: 2025-12-13 03:54:36.51093353 +0000 UTC m=+0.578630676 container init 3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kirch, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:54:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:36.516 155258 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:54:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:36.517 155258 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:54:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:36.517 155258 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:54:36 compute-0 podman[155266]: 2025-12-13 03:54:36.523923988 +0000 UTC m=+0.591621114 container start 3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kirch, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:54:36 compute-0 podman[155266]: 2025-12-13 03:54:36.527556176 +0000 UTC m=+0.595253332 container attach 3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kirch, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]: {
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:     "0": [
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:         {
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "devices": [
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "/dev/loop3"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             ],
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_name": "ceph_lv0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_size": "21470642176",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "name": "ceph_lv0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "tags": {
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cluster_name": "ceph",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.crush_device_class": "",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.encrypted": "0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.objectstore": "bluestore",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osd_id": "0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.type": "block",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.vdo": "0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.with_tpm": "0"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             },
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "type": "block",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "vg_name": "ceph_vg0"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:         }
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:     ],
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:     "1": [
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:         {
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "devices": [
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "/dev/loop4"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             ],
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_name": "ceph_lv1",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_size": "21470642176",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "name": "ceph_lv1",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "tags": {
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cluster_name": "ceph",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.crush_device_class": "",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.encrypted": "0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.objectstore": "bluestore",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osd_id": "1",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.type": "block",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.vdo": "0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.with_tpm": "0"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             },
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "type": "block",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "vg_name": "ceph_vg1"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:         }
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:     ],
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:     "2": [
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:         {
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "devices": [
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "/dev/loop5"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             ],
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_name": "ceph_lv2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_size": "21470642176",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "name": "ceph_lv2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "tags": {
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.cluster_name": "ceph",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.crush_device_class": "",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.encrypted": "0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.objectstore": "bluestore",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osd_id": "2",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.type": "block",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.vdo": "0",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:                 "ceph.with_tpm": "0"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             },
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "type": "block",
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:             "vg_name": "ceph_vg2"
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:         }
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]:     ]
Dec 13 03:54:36 compute-0 compassionate_kirch[155287]: }
Dec 13 03:54:36 compute-0 systemd[1]: libpod-3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295.scope: Deactivated successfully.
Dec 13 03:54:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:36 compute-0 podman[155266]: 2025-12-13 03:54:36.870025703 +0000 UTC m=+0.937722849 container died 3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-2eec5820c4abbe37875beb1ffd215e8da2a6d8c593bc1ee3bea22396ae6d0cb9-merged.mount: Deactivated successfully.
Dec 13 03:54:37 compute-0 podman[155266]: 2025-12-13 03:54:37.062381684 +0000 UTC m=+1.130078810 container remove 3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:54:37 compute-0 systemd[1]: libpod-conmon-3f6b6dc36c68d7fce63a0606349e20195b93cc767b4343514c9ff4521a4ac295.scope: Deactivated successfully.
Dec 13 03:54:37 compute-0 sudo[155187]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.154 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[4d391974-1658-48b8-b3e4-ca3e9847e0d6]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.157 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, column=external_ids, values=({'neutron:ovn-metadata-id': 'b80fcf08-fd10-5b49-a56e-e0e2d7d3ed45'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.165 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 03:54:37 compute-0 sudo[155310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.171 154842 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.172 154842 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.172 154842 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.172 154842 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.172 154842 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.172 154842 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.172 154842 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.172 154842 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.173 154842 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.173 154842 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.173 154842 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.173 154842 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.173 154842 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.173 154842 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.173 154842 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.174 154842 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.174 154842 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.174 154842 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.174 154842 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.174 154842 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.176 154842 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.176 154842 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 sudo[155310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.176 154842 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.176 154842 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.176 154842 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.177 154842 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.177 154842 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.177 154842 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.177 154842 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.177 154842 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.177 154842 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.177 154842 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.178 154842 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.178 154842 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.178 154842 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.178 154842 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.178 154842 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.179 154842 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.179 154842 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.179 154842 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.179 154842 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.179 154842 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.179 154842 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.179 154842 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.180 154842 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.180 154842 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 sudo[155310]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.180 154842 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.180 154842 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.180 154842 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.180 154842 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.180 154842 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.181 154842 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.182 154842 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.182 154842 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.182 154842 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.182 154842 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.182 154842 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.182 154842 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.182 154842 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.183 154842 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.183 154842 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.183 154842 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.183 154842 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.183 154842 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.183 154842 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.183 154842 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.184 154842 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.184 154842 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.184 154842 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.184 154842 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.184 154842 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.184 154842 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.184 154842 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.185 154842 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.186 154842 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.186 154842 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.186 154842 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.186 154842 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.186 154842 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.186 154842 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.186 154842 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.187 154842 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.188 154842 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.188 154842 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.188 154842 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.188 154842 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.188 154842 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.188 154842 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.189 154842 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.189 154842 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.189 154842 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.189 154842 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.189 154842 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.189 154842 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.189 154842 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.190 154842 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.190 154842 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.190 154842 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.190 154842 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.190 154842 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.190 154842 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.190 154842 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.191 154842 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.191 154842 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.191 154842 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.191 154842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.191 154842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.191 154842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.191 154842 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.192 154842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.192 154842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.192 154842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.192 154842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.192 154842 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.192 154842 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.192 154842 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.193 154842 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.193 154842 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.193 154842 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.193 154842 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.193 154842 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.193 154842 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.193 154842 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.194 154842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.195 154842 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.196 154842 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.196 154842 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.196 154842 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.196 154842 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.196 154842 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.196 154842 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.196 154842 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.197 154842 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.197 154842 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.197 154842 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.197 154842 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.197 154842 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.197 154842 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.197 154842 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.198 154842 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.199 154842 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.199 154842 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.199 154842 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.199 154842 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.199 154842 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.199 154842 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.200 154842 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.201 154842 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.202 154842 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.203 154842 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.204 154842 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.205 154842 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.205 154842 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.205 154842 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.205 154842 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.205 154842 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.205 154842 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.206 154842 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.207 154842 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.208 154842 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.208 154842 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.208 154842 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.208 154842 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.208 154842 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.208 154842 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.208 154842 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.209 154842 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.209 154842 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.209 154842 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.209 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.209 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.209 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.209 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.210 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.211 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.212 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.213 154842 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.213 154842 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.213 154842 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.213 154842 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.213 154842 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 03:54:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:54:37.213 154842 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 03:54:37 compute-0 sudo[155335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:54:37 compute-0 sudo[155335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:37 compute-0 podman[155373]: 2025-12-13 03:54:37.549129552 +0000 UTC m=+0.044028711 container create 74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Dec 13 03:54:37 compute-0 systemd[1]: Started libpod-conmon-74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae.scope.
Dec 13 03:54:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:54:37 compute-0 podman[155373]: 2025-12-13 03:54:37.530861843 +0000 UTC m=+0.025761022 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:37 compute-0 podman[155373]: 2025-12-13 03:54:37.63475604 +0000 UTC m=+0.129655199 container init 74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Dec 13 03:54:37 compute-0 podman[155373]: 2025-12-13 03:54:37.641542782 +0000 UTC m=+0.136441941 container start 74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:54:37 compute-0 podman[155373]: 2025-12-13 03:54:37.645503319 +0000 UTC m=+0.140402508 container attach 74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:54:37 compute-0 cranky_albattani[155389]: 167 167
Dec 13 03:54:37 compute-0 systemd[1]: libpod-74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae.scope: Deactivated successfully.
Dec 13 03:54:37 compute-0 conmon[155389]: conmon 74803a2eeb0b2272e031 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae.scope/container/memory.events
Dec 13 03:54:37 compute-0 podman[155373]: 2025-12-13 03:54:37.650926924 +0000 UTC m=+0.145826083 container died 74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:54:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-5461ea54caa6b9ea6c4248a8943897dc22255770813e1c439de5a17df80205c3-merged.mount: Deactivated successfully.
Dec 13 03:54:37 compute-0 podman[155373]: 2025-12-13 03:54:37.692666284 +0000 UTC m=+0.187565443 container remove 74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:54:37 compute-0 systemd[1]: libpod-conmon-74803a2eeb0b2272e03192d5a444833a9b926399545acba4141df849f69955ae.scope: Deactivated successfully.
Dec 13 03:54:37 compute-0 podman[155412]: 2025-12-13 03:54:37.868245254 +0000 UTC m=+0.051356658 container create a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:54:37 compute-0 systemd[1]: Started libpod-conmon-a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7.scope.
Dec 13 03:54:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:54:37 compute-0 podman[155412]: 2025-12-13 03:54:37.843171311 +0000 UTC m=+0.026282745 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a421ce2ea845e5ed2128c6a80264c30487462dcf800c0899eb2642904ace131f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a421ce2ea845e5ed2128c6a80264c30487462dcf800c0899eb2642904ace131f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a421ce2ea845e5ed2128c6a80264c30487462dcf800c0899eb2642904ace131f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a421ce2ea845e5ed2128c6a80264c30487462dcf800c0899eb2642904ace131f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:54:37 compute-0 podman[155412]: 2025-12-13 03:54:37.954703464 +0000 UTC m=+0.137814888 container init a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_albattani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 03:54:37 compute-0 podman[155412]: 2025-12-13 03:54:37.965345779 +0000 UTC m=+0.148457193 container start a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_albattani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:54:37 compute-0 podman[155412]: 2025-12-13 03:54:37.96981982 +0000 UTC m=+0.152931234 container attach a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_albattani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:54:38 compute-0 ceph-mon[75071]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:54:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5663 writes, 25K keys, 5663 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5663 writes, 905 syncs, 6.26 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5663 writes, 25K keys, 5663 commit groups, 1.0 writes per commit group, ingest: 18.97 MB, 0.03 MB/s
                                           Interval WAL: 5663 writes, 905 syncs, 6.26 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 1.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 6.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 03:54:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:38 compute-0 lvm[155510]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:54:38 compute-0 lvm[155511]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:54:38 compute-0 lvm[155510]: VG ceph_vg0 finished
Dec 13 03:54:38 compute-0 lvm[155511]: VG ceph_vg1 finished
Dec 13 03:54:38 compute-0 lvm[155513]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:54:38 compute-0 lvm[155513]: VG ceph_vg2 finished
Dec 13 03:54:38 compute-0 sshd-session[155505]: Accepted publickey for zuul from 192.168.122.30 port 52608 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:54:38 compute-0 systemd-logind[796]: New session 49 of user zuul.
Dec 13 03:54:38 compute-0 systemd[1]: Started Session 49 of User zuul.
Dec 13 03:54:38 compute-0 sshd-session[155505]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:54:38 compute-0 vibrant_albattani[155428]: {}
Dec 13 03:54:38 compute-0 systemd[1]: libpod-a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7.scope: Deactivated successfully.
Dec 13 03:54:38 compute-0 systemd[1]: libpod-a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7.scope: Consumed 1.442s CPU time.
Dec 13 03:54:38 compute-0 conmon[155428]: conmon a372bb3889182f1adeba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7.scope/container/memory.events
Dec 13 03:54:38 compute-0 podman[155412]: 2025-12-13 03:54:38.821875759 +0000 UTC m=+1.004987193 container died a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:54:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a421ce2ea845e5ed2128c6a80264c30487462dcf800c0899eb2642904ace131f-merged.mount: Deactivated successfully.
Dec 13 03:54:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:38 compute-0 podman[155412]: 2025-12-13 03:54:38.876791522 +0000 UTC m=+1.059902936 container remove a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_albattani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Dec 13 03:54:38 compute-0 systemd[1]: libpod-conmon-a372bb3889182f1adeba2066a84b836e3632ccfcd1e0e89d3ed4fe3a50f1cee7.scope: Deactivated successfully.
Dec 13 03:54:38 compute-0 sudo[155335]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:54:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:54:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:39 compute-0 sudo[155581]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:54:39 compute-0 sudo[155581]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:54:39 compute-0 sudo[155581]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:39 compute-0 python3.9[155703]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:54:39 compute-0 ceph-mon[75071]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:54:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:54:40
Dec 13 03:54:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:54:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:54:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'images', 'default.rgw.control', 'volumes', '.rgw.root', 'vms', 'cephfs.cephfs.data']
Dec 13 03:54:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:54:40 compute-0 sudo[155857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmzfzubvgdtayqgyhxnyzbxehruxmndi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598080.353676-34-257922061964334/AnsiballZ_command.py'
Dec 13 03:54:40 compute-0 sudo[155857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:40 compute-0 python3.9[155859]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:54:41 compute-0 sudo[155857]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:41 compute-0 sudo[156022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irdltwxviaauvzxetvsfxjbpkicwxznl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598081.326879-45-66291493865660/AnsiballZ_systemd_service.py'
Dec 13 03:54:41 compute-0 sudo[156022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:54:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:54:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Cumulative writes: 8249 writes, 34K keys, 8249 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s
                                           Cumulative WAL: 8249 writes, 1692 syncs, 4.88 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 8249 writes, 34K keys, 8249 commit groups, 1.0 writes per commit group, ingest: 21.08 MB, 0.04 MB/s
                                           Interval WAL: 8249 writes, 1692 syncs, 4.88 writes per sync, written: 0.02 GB, 0.04 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 601.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 8.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 03:54:44 compute-0 ceph-mon[75071]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:45 compute-0 ceph-mon[75071]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:45 compute-0 ceph-mon[75071]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:45 compute-0 python3.9[156024]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 03:54:45 compute-0 systemd[1]: Reloading.
Dec 13 03:54:45 compute-0 systemd-rc-local-generator[156081]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:54:45 compute-0 systemd-sysv-generator[156084]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:54:46 compute-0 sudo[156022]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:46 compute-0 python3.9[156239]: ansible-ansible.builtin.service_facts Invoked
Dec 13 03:54:46 compute-0 network[156256]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:54:46 compute-0 network[156257]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:54:46 compute-0 network[156258]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:54:47 compute-0 ceph-mon[75071]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:49 compute-0 ceph-mon[75071]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:50 compute-0 sudo[156518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txpettczvcojhdbqmbqvkiyvfqpiixzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598089.8991616-64-115051630306084/AnsiballZ_systemd_service.py'
Dec 13 03:54:50 compute-0 sudo[156518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 03:54:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Cumulative writes: 5446 writes, 24K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5446 writes, 793 syncs, 6.87 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5446 writes, 24K keys, 5446 commit groups, 1.0 writes per commit group, ingest: 18.63 MB, 0.03 MB/s
                                           Interval WAL: 5446 writes, 793 syncs, 6.87 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 03:54:50 compute-0 python3.9[156520]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:50 compute-0 sudo[156518]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:50 compute-0 sudo[156671]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozzuuzsjahezggfhfxwkeavampmdjkrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598090.6698072-64-29016935601678/AnsiballZ_systemd_service.py'
Dec 13 03:54:50 compute-0 sudo[156671]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:51 compute-0 python3.9[156673]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:51 compute-0 sudo[156671]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:51 compute-0 sudo[156824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdyglevscbnlvfkylwyqsneawxfiuapj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598091.4246447-64-100223540391628/AnsiballZ_systemd_service.py'
Dec 13 03:54:51 compute-0 sudo[156824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:51 compute-0 python3.9[156826]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:52 compute-0 ceph-mon[75071]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:52 compute-0 sudo[156824]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:54:52 compute-0 sudo[156977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayezdzpnvpjiyyqlxmzyxdmmutbejsod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598092.154752-64-86293570797703/AnsiballZ_systemd_service.py'
Dec 13 03:54:52 compute-0 sudo[156977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:52 compute-0 python3.9[156979]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:52 compute-0 sudo[156977]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:53 compute-0 sudo[157130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahtahnhyykmtblguawlbwrxsgutwaypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598092.950369-64-275201625644131/AnsiballZ_systemd_service.py'
Dec 13 03:54:53 compute-0 sudo[157130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:53 compute-0 python3.9[157132]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:53 compute-0 sudo[157130]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:54 compute-0 sudo[157283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtxyqlxzvtowdneoyrlvhczklpvdazg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598093.737947-64-101583278286925/AnsiballZ_systemd_service.py'
Dec 13 03:54:54 compute-0 sudo[157283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:54 compute-0 ceph-mon[75071]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:54 compute-0 python3.9[157285]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:54 compute-0 sudo[157283]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:54 compute-0 sudo[157436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhswwjtzqaajkaxfqcshtfozjqawcnpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598094.488149-64-92414301459225/AnsiballZ_systemd_service.py'
Dec 13 03:54:54 compute-0 sudo[157436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:55 compute-0 python3.9[157438]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:54:55 compute-0 sudo[157436]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:55 compute-0 sudo[157589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cytjokrcprpimlajscdkgyzumxjwplhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598095.3807077-116-225018623054541/AnsiballZ_file.py'
Dec 13 03:54:55 compute-0 sudo[157589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:56 compute-0 python3.9[157591]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:56 compute-0 sudo[157589]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:56 compute-0 ceph-mon[75071]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:56 compute-0 sudo[157741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldgyqbqicftgdjwlkhdgtqbfqzuqaxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598096.1501555-116-208834507993640/AnsiballZ_file.py'
Dec 13 03:54:56 compute-0 sudo[157741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:56 compute-0 python3.9[157743]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:56 compute-0 sudo[157741]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:57 compute-0 sudo[157893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvnzsinmjtujqtunvktwzhekjgxtixfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598096.75873-116-121823300411700/AnsiballZ_file.py'
Dec 13 03:54:57 compute-0 sudo[157893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:57 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Check health
Dec 13 03:54:57 compute-0 python3.9[157895]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:57 compute-0 sudo[157893]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:57 compute-0 sudo[158055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crkoccfdgoyosxuoikaymitxlvhqvrvq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598097.3906531-116-270017668275772/AnsiballZ_file.py'
Dec 13 03:54:57 compute-0 sudo[158055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:57 compute-0 podman[158019]: 2025-12-13 03:54:57.725821956 +0000 UTC m=+0.108935654 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec 13 03:54:57 compute-0 python3.9[158064]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:57 compute-0 sudo[158055]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:58 compute-0 ceph-mon[75071]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:58 compute-0 sudo[158224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgzkfinbpqgyaoskwkeybxtfdenpivxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598098.1740136-116-192168065041838/AnsiballZ_file.py'
Dec 13 03:54:58 compute-0 sudo[158224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:58 compute-0 python3.9[158226]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:58 compute-0 sudo[158224]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:54:59 compute-0 sudo[158376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uneuooxvxgirkqtubhmsesewaqmdxbmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598098.8206918-116-5176588574472/AnsiballZ_file.py'
Dec 13 03:54:59 compute-0 sudo[158376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:59 compute-0 python3.9[158378]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:59 compute-0 sudo[158376]: pam_unix(sudo:session): session closed for user root
Dec 13 03:54:59 compute-0 sudo[158528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-boktaswmzkdvurxoeffwnmblxudmytww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598099.443081-116-167879077290693/AnsiballZ_file.py'
Dec 13 03:54:59 compute-0 sudo[158528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:54:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:54:59 compute-0 python3.9[158530]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:54:59 compute-0 sudo[158528]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:00 compute-0 ceph-mon[75071]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:00 compute-0 sudo[158680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijupndcuvigiesmjsfxzitwphdyeiwsr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598100.0813048-166-94508178282837/AnsiballZ_file.py'
Dec 13 03:55:00 compute-0 sudo[158680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:00 compute-0 python3.9[158682]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:55:00 compute-0 sudo[158680]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:00 compute-0 sudo[158832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxytiwhwjbvesoufqcqbqzpwdjczcicn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598100.687472-166-199128127287223/AnsiballZ_file.py'
Dec 13 03:55:00 compute-0 sudo[158832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:01 compute-0 python3.9[158834]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:55:01 compute-0 sudo[158832]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:01 compute-0 sudo[158984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwibourtsqzjrdhubwcfbequksbrmlnw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598101.329666-166-68417534312489/AnsiballZ_file.py'
Dec 13 03:55:01 compute-0 sudo[158984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:01 compute-0 python3.9[158986]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:55:01 compute-0 sudo[158984]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:02 compute-0 sudo[159136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njnngtxkiobzgmykshgeljrpackiozbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598102.0783386-166-146155011940754/AnsiballZ_file.py'
Dec 13 03:55:02 compute-0 sudo[159136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:02 compute-0 ceph-mon[75071]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:02 compute-0 python3.9[159138]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:55:02 compute-0 sudo[159136]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:02 compute-0 podman[159224]: 2025-12-13 03:55:02.938246458 +0000 UTC m=+0.084347214 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:03 compute-0 sudo[159305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbzlqukfanwyihppuwgdfqevxbukuemx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598102.7218502-166-12868600306588/AnsiballZ_file.py'
Dec 13 03:55:03 compute-0 sudo[159305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:03 compute-0 python3.9[159307]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:55:03 compute-0 sudo[159305]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:03 compute-0 sudo[159457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnzavxycoganxsckdlcznarrmwtoplyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598103.3701763-166-15541201804483/AnsiballZ_file.py'
Dec 13 03:55:03 compute-0 sudo[159457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:03 compute-0 python3.9[159459]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:55:03 compute-0 sudo[159457]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:04 compute-0 sudo[159609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zevhlvrquczaownqwpcjtmpltmbsjowh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598104.0029945-166-34680385178463/AnsiballZ_file.py'
Dec 13 03:55:04 compute-0 sudo[159609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:04 compute-0 ceph-mon[75071]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:04 compute-0 python3.9[159611]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:55:04 compute-0 sudo[159609]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:04 compute-0 sudo[159761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvyppqvmbumylpehsrxjftvbelajcja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598104.7114334-217-120596890168041/AnsiballZ_command.py'
Dec 13 03:55:05 compute-0 sudo[159761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:05 compute-0 python3.9[159763]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:05 compute-0 sudo[159761]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:06 compute-0 python3.9[159915]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 03:55:06 compute-0 ceph-mon[75071]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:06 compute-0 sudo[160065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjzimagbnnamrjaayukmcdredbpvupwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598106.2788448-235-203487168827908/AnsiballZ_systemd_service.py'
Dec 13 03:55:06 compute-0 sudo[160065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:06 compute-0 python3.9[160067]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 03:55:06 compute-0 systemd[1]: Reloading.
Dec 13 03:55:07 compute-0 systemd-sysv-generator[160097]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:55:07 compute-0 systemd-rc-local-generator[160090]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:55:07 compute-0 sudo[160065]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:07 compute-0 sudo[160253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-paihhxjpyftxhywqaffivipxwkfkuigp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598107.5261168-243-260850695058193/AnsiballZ_command.py'
Dec 13 03:55:07 compute-0 sudo[160253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:07 compute-0 python3.9[160255]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:08 compute-0 sudo[160253]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:08 compute-0 ceph-mon[75071]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:08 compute-0 sudo[160406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgamylbwlzxmoudmacpyuvzbcswcfyty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598108.1536732-243-29987110003957/AnsiballZ_command.py'
Dec 13 03:55:08 compute-0 sudo[160406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:08 compute-0 python3.9[160408]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:08 compute-0 sudo[160406]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:09 compute-0 sudo[160559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcmeiaeyllqfaexnvxblnmjweitidgaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598108.7949922-243-278574137372225/AnsiballZ_command.py'
Dec 13 03:55:09 compute-0 sudo[160559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:09 compute-0 python3.9[160561]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:09 compute-0 sudo[160559]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:09 compute-0 ceph-mon[75071]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:09 compute-0 sudo[160712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsnzdaseehmaqiremeiqilvmqgqrwegk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598109.450133-243-275833330158821/AnsiballZ_command.py'
Dec 13 03:55:09 compute-0 sudo[160712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:09 compute-0 python3.9[160714]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:09 compute-0 sudo[160712]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:10 compute-0 sudo[160865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdxhgemlslnpggokelbufqchtzbtevns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598110.0917494-243-137531562891731/AnsiballZ_command.py'
Dec 13 03:55:10 compute-0 sudo[160865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:10 compute-0 python3.9[160867]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:10 compute-0 sudo[160865]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:11 compute-0 sudo[161018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghaubogmnvfnnttbnpxsvjzsqaywymna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598110.850236-243-132146359853125/AnsiballZ_command.py'
Dec 13 03:55:11 compute-0 sudo[161018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:11 compute-0 python3.9[161020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:11 compute-0 sudo[161018]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:11 compute-0 sudo[161171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opyagscedvgawijsxeoxfkukegwjgweo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598111.4893246-243-180857425050190/AnsiballZ_command.py'
Dec 13 03:55:11 compute-0 sudo[161171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:11 compute-0 python3.9[161173]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:55:11 compute-0 ceph-mon[75071]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:11 compute-0 sudo[161171]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:12 compute-0 sudo[161324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mozpyyhwszhxjobvtobwqmqyizwheelj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598112.3400137-297-106659438437431/AnsiballZ_getent.py'
Dec 13 03:55:12 compute-0 sudo[161324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:12 compute-0 python3.9[161326]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec 13 03:55:13 compute-0 sudo[161324]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:13 compute-0 sudo[161477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrlmlxxhgzmobjqmnkmsslgariwwuicl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598113.1846035-305-247026080661110/AnsiballZ_group.py'
Dec 13 03:55:13 compute-0 sudo[161477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:13 compute-0 python3.9[161479]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 03:55:13 compute-0 groupadd[161480]: group added to /etc/group: name=libvirt, GID=42473
Dec 13 03:55:13 compute-0 groupadd[161480]: group added to /etc/gshadow: name=libvirt
Dec 13 03:55:13 compute-0 groupadd[161480]: new group: name=libvirt, GID=42473
Dec 13 03:55:13 compute-0 sudo[161477]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:13 compute-0 ceph-mon[75071]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:14 compute-0 sudo[161635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsvncuurfkbrkmdrmrwddnqmqjwjkbht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598114.0899405-313-194592881343418/AnsiballZ_user.py'
Dec 13 03:55:14 compute-0 sudo[161635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:14 compute-0 python3.9[161637]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 03:55:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:14 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:55:14 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 03:55:15 compute-0 useradd[161639]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Dec 13 03:55:15 compute-0 sudo[161635]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:15 compute-0 sudo[161796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nerrpzekeuzugajvdlplukpghxnsykme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598115.584997-324-12161848929457/AnsiballZ_setup.py'
Dec 13 03:55:15 compute-0 sudo[161796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:15 compute-0 ceph-mon[75071]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:16 compute-0 python3.9[161798]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:55:16 compute-0 sudo[161796]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:16 compute-0 sudo[161880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yytzmvayjepiabjhxhbqzoupkzoixcmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598115.584997-324-12161848929457/AnsiballZ_dnf.py'
Dec 13 03:55:16 compute-0 sudo[161880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:55:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:17 compute-0 python3.9[161882]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:55:18 compute-0 ceph-mon[75071]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:20 compute-0 ceph-mon[75071]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:22 compute-0 ceph-mon[75071]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:24 compute-0 ceph-mon[75071]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:26 compute-0 ceph-mon[75071]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:27 compute-0 podman[161979]: 2025-12-13 03:55:27.952798515 +0000 UTC m=+0.094165508 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:55:28 compute-0 ceph-mon[75071]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:30 compute-0 ceph-mon[75071]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:32 compute-0 ceph-mon[75071]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:33 compute-0 podman[162094]: 2025-12-13 03:55:33.909093019 +0000 UTC m=+0.059887530 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:34 compute-0 ceph-mon[75071]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:55:35.066 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:55:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:55:35.067 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:55:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:55:35.067 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:55:36 compute-0 ceph-mon[75071]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:38 compute-0 ceph-mon[75071]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:39 compute-0 sudo[162120]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:55:39 compute-0 sudo[162120]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:39 compute-0 sudo[162120]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:39 compute-0 sudo[162145]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:55:39 compute-0 sudo[162145]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:39 compute-0 sudo[162145]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 03:55:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:55:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:55:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:55:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:55:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:55:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:55:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:55:39 compute-0 sudo[162201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:55:39 compute-0 sudo[162201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:39 compute-0 sudo[162201]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:39 compute-0 sudo[162226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:55:39 compute-0 sudo[162226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:40 compute-0 podman[162263]: 2025-12-13 03:55:40.172233929 +0000 UTC m=+0.047580423 container create fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:55:40 compute-0 systemd[1]: Started libpod-conmon-fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a.scope.
Dec 13 03:55:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:55:40 compute-0 podman[162263]: 2025-12-13 03:55:40.153500446 +0000 UTC m=+0.028846940 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:40 compute-0 podman[162263]: 2025-12-13 03:55:40.257252454 +0000 UTC m=+0.132598968 container init fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hofstadter, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 03:55:40 compute-0 podman[162263]: 2025-12-13 03:55:40.264916654 +0000 UTC m=+0.140263148 container start fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:55:40 compute-0 podman[162263]: 2025-12-13 03:55:40.269411737 +0000 UTC m=+0.144758231 container attach fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:55:40 compute-0 systemd[1]: libpod-fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a.scope: Deactivated successfully.
Dec 13 03:55:40 compute-0 inspiring_hofstadter[162280]: 167 167
Dec 13 03:55:40 compute-0 conmon[162280]: conmon fd92a9e70f9abe099345 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a.scope/container/memory.events
Dec 13 03:55:40 compute-0 podman[162263]: 2025-12-13 03:55:40.277982561 +0000 UTC m=+0.153329055 container died fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-728cd9c52053ab4db78b2f1f3e791e338c7a4dd7da6fe7d7868504ed2a610b8a-merged.mount: Deactivated successfully.
Dec 13 03:55:40 compute-0 podman[162263]: 2025-12-13 03:55:40.334051215 +0000 UTC m=+0.209397719 container remove fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_hofstadter, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 03:55:40 compute-0 ceph-mon[75071]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 03:55:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:55:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:55:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:55:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:55:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:55:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:55:40 compute-0 systemd[1]: libpod-conmon-fd92a9e70f9abe0993453edb51b05910bc4dc621035c9cf977d9f9a99d70250a.scope: Deactivated successfully.
Dec 13 03:55:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:55:40
Dec 13 03:55:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:55:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:55:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'images', '.mgr', 'default.rgw.meta']
Dec 13 03:55:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:55:40 compute-0 podman[162306]: 2025-12-13 03:55:40.518341798 +0000 UTC m=+0.054801990 container create 0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ramanujan, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 03:55:40 compute-0 systemd[1]: Started libpod-conmon-0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1.scope.
Dec 13 03:55:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c6494261d8846b339af23a009d030b1e1ffb5aa16f7e657812aa6e54d1ce0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c6494261d8846b339af23a009d030b1e1ffb5aa16f7e657812aa6e54d1ce0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c6494261d8846b339af23a009d030b1e1ffb5aa16f7e657812aa6e54d1ce0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c6494261d8846b339af23a009d030b1e1ffb5aa16f7e657812aa6e54d1ce0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a1c6494261d8846b339af23a009d030b1e1ffb5aa16f7e657812aa6e54d1ce0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:40 compute-0 podman[162306]: 2025-12-13 03:55:40.49683655 +0000 UTC m=+0.033296762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:40 compute-0 podman[162306]: 2025-12-13 03:55:40.60797015 +0000 UTC m=+0.144430372 container init 0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ramanujan, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True)
Dec 13 03:55:40 compute-0 podman[162306]: 2025-12-13 03:55:40.616220606 +0000 UTC m=+0.152680798 container start 0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ramanujan, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:55:40 compute-0 podman[162306]: 2025-12-13 03:55:40.620288067 +0000 UTC m=+0.156748309 container attach 0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ramanujan, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 03:55:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:41 compute-0 beautiful_ramanujan[162322]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:55:41 compute-0 beautiful_ramanujan[162322]: --> All data devices are unavailable
Dec 13 03:55:41 compute-0 systemd[1]: libpod-0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1.scope: Deactivated successfully.
Dec 13 03:55:41 compute-0 podman[162306]: 2025-12-13 03:55:41.165801302 +0000 UTC m=+0.702261494 container died 0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:55:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a1c6494261d8846b339af23a009d030b1e1ffb5aa16f7e657812aa6e54d1ce0-merged.mount: Deactivated successfully.
Dec 13 03:55:41 compute-0 podman[162306]: 2025-12-13 03:55:41.21068012 +0000 UTC m=+0.747140312 container remove 0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_ramanujan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:55:41 compute-0 systemd[1]: libpod-conmon-0f6a512f90b8a47ea7d59229d2d27647bd7d1f5e6c3cd297031df086d82b62a1.scope: Deactivated successfully.
Dec 13 03:55:41 compute-0 sudo[162226]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:41 compute-0 sudo[162355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:55:41 compute-0 sudo[162355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:41 compute-0 sudo[162355]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:41 compute-0 sudo[162380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:55:41 compute-0 sudo[162380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:41 compute-0 podman[162417]: 2025-12-13 03:55:41.698903378 +0000 UTC m=+0.038831143 container create 467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 13 03:55:41 compute-0 systemd[1]: Started libpod-conmon-467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632.scope.
Dec 13 03:55:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:55:41 compute-0 podman[162417]: 2025-12-13 03:55:41.763786643 +0000 UTC m=+0.103714428 container init 467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 13 03:55:41 compute-0 podman[162417]: 2025-12-13 03:55:41.769767517 +0000 UTC m=+0.109695282 container start 467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:55:41 compute-0 podman[162417]: 2025-12-13 03:55:41.773963671 +0000 UTC m=+0.113891436 container attach 467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:55:41 compute-0 funny_rhodes[162433]: 167 167
Dec 13 03:55:41 compute-0 systemd[1]: libpod-467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632.scope: Deactivated successfully.
Dec 13 03:55:41 compute-0 conmon[162433]: conmon 467bffcda533c6a7c19e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632.scope/container/memory.events
Dec 13 03:55:41 compute-0 podman[162417]: 2025-12-13 03:55:41.77645828 +0000 UTC m=+0.116386055 container died 467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:41 compute-0 podman[162417]: 2025-12-13 03:55:41.682185011 +0000 UTC m=+0.022112796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-516b891d0d1834997e29b17e149f6778cc51dbcb3efc30d40383af2833d6480f-merged.mount: Deactivated successfully.
Dec 13 03:55:41 compute-0 podman[162417]: 2025-12-13 03:55:41.815475917 +0000 UTC m=+0.155403682 container remove 467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_rhodes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:55:41 compute-0 systemd[1]: libpod-conmon-467bffcda533c6a7c19e45dffa7b256704566af362f80ea7b2edf611289e2632.scope: Deactivated successfully.
Dec 13 03:55:41 compute-0 podman[162457]: 2025-12-13 03:55:41.983423622 +0000 UTC m=+0.047781238 container create 8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_raman, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 03:55:42 compute-0 systemd[1]: Started libpod-conmon-8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6.scope.
Dec 13 03:55:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9b87d8a28a4ce465236896373a955b847f41f12db8498cbdd92f2ee6b217ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9b87d8a28a4ce465236896373a955b847f41f12db8498cbdd92f2ee6b217ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 compute-0 podman[162457]: 2025-12-13 03:55:41.961558434 +0000 UTC m=+0.025916110 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9b87d8a28a4ce465236896373a955b847f41f12db8498cbdd92f2ee6b217ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd9b87d8a28a4ce465236896373a955b847f41f12db8498cbdd92f2ee6b217ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:42 compute-0 podman[162457]: 2025-12-13 03:55:42.070975507 +0000 UTC m=+0.135333173 container init 8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_raman, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 03:55:42 compute-0 podman[162457]: 2025-12-13 03:55:42.078142684 +0000 UTC m=+0.142500310 container start 8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 03:55:42 compute-0 podman[162457]: 2025-12-13 03:55:42.082190024 +0000 UTC m=+0.146547670 container attach 8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_raman, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:55:42 compute-0 ceph-mon[75071]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:42 compute-0 infallible_raman[162474]: {
Dec 13 03:55:42 compute-0 infallible_raman[162474]:     "0": [
Dec 13 03:55:42 compute-0 infallible_raman[162474]:         {
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "devices": [
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "/dev/loop3"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             ],
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_name": "ceph_lv0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_size": "21470642176",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "name": "ceph_lv0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "tags": {
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cluster_name": "ceph",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.crush_device_class": "",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.encrypted": "0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.objectstore": "bluestore",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osd_id": "0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.type": "block",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.vdo": "0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.with_tpm": "0"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             },
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "type": "block",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "vg_name": "ceph_vg0"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:         }
Dec 13 03:55:42 compute-0 infallible_raman[162474]:     ],
Dec 13 03:55:42 compute-0 infallible_raman[162474]:     "1": [
Dec 13 03:55:42 compute-0 infallible_raman[162474]:         {
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "devices": [
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "/dev/loop4"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             ],
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_name": "ceph_lv1",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_size": "21470642176",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "name": "ceph_lv1",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "tags": {
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cluster_name": "ceph",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.crush_device_class": "",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.encrypted": "0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.objectstore": "bluestore",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osd_id": "1",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.type": "block",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.vdo": "0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.with_tpm": "0"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             },
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "type": "block",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "vg_name": "ceph_vg1"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:         }
Dec 13 03:55:42 compute-0 infallible_raman[162474]:     ],
Dec 13 03:55:42 compute-0 infallible_raman[162474]:     "2": [
Dec 13 03:55:42 compute-0 infallible_raman[162474]:         {
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "devices": [
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "/dev/loop5"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             ],
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_name": "ceph_lv2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_size": "21470642176",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "name": "ceph_lv2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "tags": {
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.cluster_name": "ceph",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.crush_device_class": "",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.encrypted": "0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.objectstore": "bluestore",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osd_id": "2",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.type": "block",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.vdo": "0",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:                 "ceph.with_tpm": "0"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             },
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "type": "block",
Dec 13 03:55:42 compute-0 infallible_raman[162474]:             "vg_name": "ceph_vg2"
Dec 13 03:55:42 compute-0 infallible_raman[162474]:         }
Dec 13 03:55:42 compute-0 infallible_raman[162474]:     ]
Dec 13 03:55:42 compute-0 infallible_raman[162474]: }
Dec 13 03:55:42 compute-0 systemd[1]: libpod-8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6.scope: Deactivated successfully.
Dec 13 03:55:42 compute-0 podman[162457]: 2025-12-13 03:55:42.393741029 +0000 UTC m=+0.458098645 container died 8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd9b87d8a28a4ce465236896373a955b847f41f12db8498cbdd92f2ee6b217ac-merged.mount: Deactivated successfully.
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:55:42 compute-0 podman[162457]: 2025-12-13 03:55:42.446686048 +0000 UTC m=+0.511043654 container remove 8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_raman, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:55:42 compute-0 systemd[1]: libpod-conmon-8052fbffbef1f7455ae6f1456ed613075bc2287f9511e96a96a2661bbbba64a6.scope: Deactivated successfully.
Dec 13 03:55:42 compute-0 sudo[162380]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:42 compute-0 sudo[162497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:55:42 compute-0 sudo[162497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:42 compute-0 sudo[162497]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:42 compute-0 sudo[162522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:55:42 compute-0 sudo[162522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:42 compute-0 podman[162559]: 2025-12-13 03:55:42.92778411 +0000 UTC m=+0.042705679 container create c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ramanujan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 03:55:42 compute-0 systemd[1]: Started libpod-conmon-c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2.scope.
Dec 13 03:55:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:55:43 compute-0 podman[162559]: 2025-12-13 03:55:42.90875729 +0000 UTC m=+0.023678879 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:43 compute-0 podman[162559]: 2025-12-13 03:55:43.01402246 +0000 UTC m=+0.128944049 container init c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ramanujan, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 03:55:43 compute-0 podman[162559]: 2025-12-13 03:55:43.021678879 +0000 UTC m=+0.136600448 container start c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ramanujan, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:55:43 compute-0 podman[162559]: 2025-12-13 03:55:43.025433122 +0000 UTC m=+0.140354711 container attach c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:55:43 compute-0 sad_ramanujan[162576]: 167 167
Dec 13 03:55:43 compute-0 systemd[1]: libpod-c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2.scope: Deactivated successfully.
Dec 13 03:55:43 compute-0 podman[162559]: 2025-12-13 03:55:43.028073294 +0000 UTC m=+0.142994893 container died c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ramanujan, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:55:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8604f7c7a13be8d90445bcf384d4a0e5445f29813774b6fa5f365f659b2cd0a5-merged.mount: Deactivated successfully.
Dec 13 03:55:43 compute-0 podman[162559]: 2025-12-13 03:55:43.069690593 +0000 UTC m=+0.184612172 container remove c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_ramanujan, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:55:43 compute-0 systemd[1]: libpod-conmon-c637d4181bc184c70b32c315128ec9cf11a14c8673cebc18cbc17f30410980a2.scope: Deactivated successfully.
Dec 13 03:55:43 compute-0 podman[162599]: 2025-12-13 03:55:43.248553317 +0000 UTC m=+0.059984413 container create 156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shamir, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:55:43 compute-0 systemd[1]: Started libpod-conmon-156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd.scope.
Dec 13 03:55:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3345654b54a7bc8bddbce78ed4369b20efa13b1a996cc63ed04d693953c5633/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3345654b54a7bc8bddbce78ed4369b20efa13b1a996cc63ed04d693953c5633/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3345654b54a7bc8bddbce78ed4369b20efa13b1a996cc63ed04d693953c5633/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3345654b54a7bc8bddbce78ed4369b20efa13b1a996cc63ed04d693953c5633/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:55:43 compute-0 podman[162599]: 2025-12-13 03:55:43.230203644 +0000 UTC m=+0.041634760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:55:43 compute-0 podman[162599]: 2025-12-13 03:55:43.338435376 +0000 UTC m=+0.149866472 container init 156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shamir, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:55:43 compute-0 podman[162599]: 2025-12-13 03:55:43.345236101 +0000 UTC m=+0.156667197 container start 156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shamir, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:55:43 compute-0 podman[162599]: 2025-12-13 03:55:43.348717518 +0000 UTC m=+0.160148614 container attach 156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shamir, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 03:55:44 compute-0 lvm[162692]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:55:44 compute-0 lvm[162695]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:55:44 compute-0 lvm[162692]: VG ceph_vg0 finished
Dec 13 03:55:44 compute-0 lvm[162695]: VG ceph_vg1 finished
Dec 13 03:55:44 compute-0 lvm[162697]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:55:44 compute-0 lvm[162697]: VG ceph_vg2 finished
Dec 13 03:55:44 compute-0 lvm[162699]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:55:44 compute-0 lvm[162699]: VG ceph_vg1 finished
Dec 13 03:55:44 compute-0 lvm[162698]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:55:44 compute-0 lvm[162698]: VG ceph_vg0 finished
Dec 13 03:55:44 compute-0 nostalgic_shamir[162616]: {}
Dec 13 03:55:44 compute-0 systemd[1]: libpod-156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd.scope: Deactivated successfully.
Dec 13 03:55:44 compute-0 systemd[1]: libpod-156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd.scope: Consumed 1.324s CPU time.
Dec 13 03:55:44 compute-0 podman[162599]: 2025-12-13 03:55:44.214160066 +0000 UTC m=+1.025591202 container died 156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shamir, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:55:44 compute-0 ceph-mon[75071]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3345654b54a7bc8bddbce78ed4369b20efa13b1a996cc63ed04d693953c5633-merged.mount: Deactivated successfully.
Dec 13 03:55:44 compute-0 podman[162599]: 2025-12-13 03:55:44.817631877 +0000 UTC m=+1.629062963 container remove 156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:55:44 compute-0 sudo[162522]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:55:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:55:44 compute-0 systemd[1]: libpod-conmon-156f09226f66ee6df759388bf8fbb7ddcc0014afac419cff8d111785a8d440dd.scope: Deactivated successfully.
Dec 13 03:55:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:55:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:55:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:44 compute-0 sudo[162712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:55:44 compute-0 sudo[162712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:55:44 compute-0 sudo[162712]: pam_unix(sudo:session): session closed for user root
Dec 13 03:55:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:55:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:55:45 compute-0 ceph-mon[75071]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:48 compute-0 ceph-mon[75071]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:55:48 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 03:55:48 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:55:48 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 03:55:48 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:55:48 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:55:48 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:55:48 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:55:48 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:55:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 03:55:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:50 compute-0 ceph-mon[75071]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 03:55:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:52 compute-0 ceph-mon[75071]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:55:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:54 compute-0 ceph-mon[75071]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:56 compute-0 ceph-mon[75071]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:58 compute-0 ceph-mon[75071]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:58 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec 13 03:55:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:55:58 compute-0 podman[162745]: 2025-12-13 03:55:58.986063546 +0000 UTC m=+0.125439393 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec 13 03:55:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:55:59 compute-0 kernel: SELinux:  Converting 2769 SID table entries...
Dec 13 03:55:59 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:55:59 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 03:55:59 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:55:59 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:55:59 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:55:59 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:55:59 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:56:00 compute-0 ceph-mon[75071]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 03:56:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 03:56:02 compute-0 ceph-mon[75071]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 03:56:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:04 compute-0 ceph-mon[75071]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:04 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec 13 03:56:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:04 compute-0 podman[162778]: 2025-12-13 03:56:04.918917899 +0000 UTC m=+0.057598847 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 03:56:06 compute-0 ceph-mon[75071]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:08 compute-0 ceph-mon[75071]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:09 compute-0 ceph-mon[75071]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:11 compute-0 ceph-mon[75071]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:14 compute-0 ceph-mon[75071]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:15 compute-0 ceph-mon[75071]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:17 compute-0 ceph-mon[75071]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:20 compute-0 ceph-mon[75071]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:22 compute-0 ceph-mon[75071]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:24 compute-0 ceph-mon[75071]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:26 compute-0 ceph-mon[75071]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:28 compute-0 ceph-mon[75071]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:29 compute-0 podman[173141]: 2025-12-13 03:56:29.954964074 +0000 UTC m=+0.098063880 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:56:30 compute-0 ceph-mon[75071]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:31 compute-0 ceph-mon[75071]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:33 compute-0 ceph-mon[75071]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:56:35.068 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:56:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:56:35.069 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:56:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:56:35.069 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:56:35 compute-0 podman[176878]: 2025-12-13 03:56:35.9275087 +0000 UTC m=+0.078939008 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 03:56:36 compute-0 ceph-mon[75071]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:38 compute-0 ceph-mon[75071]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:40 compute-0 ceph-mon[75071]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:56:40
Dec 13 03:56:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:56:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:56:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr']
Dec 13 03:56:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:56:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:56:42 compute-0 ceph-mon[75071]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:56:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:44 compute-0 ceph-mon[75071]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:45 compute-0 sudo[179653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:56:45 compute-0 sudo[179653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:45 compute-0 sudo[179653]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:45 compute-0 sudo[179678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 03:56:45 compute-0 sudo[179678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:45 compute-0 podman[179748]: 2025-12-13 03:56:45.591724383 +0000 UTC m=+0.060720973 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 03:56:45 compute-0 podman[179748]: 2025-12-13 03:56:45.708504841 +0000 UTC m=+0.177501411 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 03:56:46 compute-0 sudo[179678]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:56:46 compute-0 ceph-mon[75071]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:56:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:46 compute-0 sudo[179937]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:56:46 compute-0 sudo[179937]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:46 compute-0 sudo[179937]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:46 compute-0 sudo[179962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:56:46 compute-0 sudo[179962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:47 compute-0 sudo[179962]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:56:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:56:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:56:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:56:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:56:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:56:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:56:47 compute-0 sudo[180018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:56:47 compute-0 sudo[180018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:47 compute-0 sudo[180018]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:47 compute-0 sudo[180043]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:56:47 compute-0 sudo[180043]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:47 compute-0 ceph-mon[75071]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:56:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:56:47 compute-0 podman[180080]: 2025-12-13 03:56:47.531169747 +0000 UTC m=+0.046238020 container create d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lederberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:56:47 compute-0 systemd[1]: Started libpod-conmon-d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92.scope.
Dec 13 03:56:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:56:47 compute-0 podman[180080]: 2025-12-13 03:56:47.509391474 +0000 UTC m=+0.024459737 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:47 compute-0 podman[180080]: 2025-12-13 03:56:47.608161962 +0000 UTC m=+0.123230255 container init d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:56:47 compute-0 podman[180080]: 2025-12-13 03:56:47.616758955 +0000 UTC m=+0.131827228 container start d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:56:47 compute-0 podman[180080]: 2025-12-13 03:56:47.619970053 +0000 UTC m=+0.135038356 container attach d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:56:47 compute-0 intelligent_lederberg[180096]: 167 167
Dec 13 03:56:47 compute-0 podman[180080]: 2025-12-13 03:56:47.624864776 +0000 UTC m=+0.139933039 container died d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lederberg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:56:47 compute-0 systemd[1]: libpod-d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92.scope: Deactivated successfully.
Dec 13 03:56:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-60399cccbf9f27a8af27ac973ec739371538bd6fb9ef1a3493bcd311efa35344-merged.mount: Deactivated successfully.
Dec 13 03:56:47 compute-0 podman[180080]: 2025-12-13 03:56:47.670889878 +0000 UTC m=+0.185958151 container remove d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 03:56:47 compute-0 systemd[1]: libpod-conmon-d4401dcfc9f812f9a640aa0be7e8fecfbdaf566a98636830d385e5d4450eea92.scope: Deactivated successfully.
Dec 13 03:56:47 compute-0 podman[180120]: 2025-12-13 03:56:47.823695836 +0000 UTC m=+0.039949917 container create 93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ishizaka, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 03:56:47 compute-0 systemd[1]: Started libpod-conmon-93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90.scope.
Dec 13 03:56:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d065ab626cd1974b03e8dfc0a29aa6269e4e5826bc7975c1fed61bae2761e10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d065ab626cd1974b03e8dfc0a29aa6269e4e5826bc7975c1fed61bae2761e10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d065ab626cd1974b03e8dfc0a29aa6269e4e5826bc7975c1fed61bae2761e10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d065ab626cd1974b03e8dfc0a29aa6269e4e5826bc7975c1fed61bae2761e10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d065ab626cd1974b03e8dfc0a29aa6269e4e5826bc7975c1fed61bae2761e10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:47 compute-0 podman[180120]: 2025-12-13 03:56:47.807997209 +0000 UTC m=+0.024251310 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:48 compute-0 podman[180120]: 2025-12-13 03:56:48.027318397 +0000 UTC m=+0.243572518 container init 93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ishizaka, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:56:48 compute-0 podman[180120]: 2025-12-13 03:56:48.036398934 +0000 UTC m=+0.252653005 container start 93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:56:48 compute-0 podman[180120]: 2025-12-13 03:56:48.075326534 +0000 UTC m=+0.291580675 container attach 93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:56:48 compute-0 peaceful_ishizaka[180137]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:56:48 compute-0 peaceful_ishizaka[180137]: --> All data devices are unavailable
Dec 13 03:56:48 compute-0 systemd[1]: libpod-93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90.scope: Deactivated successfully.
Dec 13 03:56:48 compute-0 podman[180120]: 2025-12-13 03:56:48.541469227 +0000 UTC m=+0.757723318 container died 93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:56:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d065ab626cd1974b03e8dfc0a29aa6269e4e5826bc7975c1fed61bae2761e10-merged.mount: Deactivated successfully.
Dec 13 03:56:48 compute-0 podman[180120]: 2025-12-13 03:56:48.623777448 +0000 UTC m=+0.840031529 container remove 93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_ishizaka, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 13 03:56:48 compute-0 systemd[1]: libpod-conmon-93a396e4e7192b262d806de7ca0f78297d0a862f0c0998c70312585735d64c90.scope: Deactivated successfully.
Dec 13 03:56:48 compute-0 sudo[180043]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:48 compute-0 sudo[180171]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:56:48 compute-0 sudo[180171]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:48 compute-0 sudo[180171]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:48 compute-0 sudo[180196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:56:48 compute-0 sudo[180196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:49 compute-0 podman[180233]: 2025-12-13 03:56:49.054130197 +0000 UTC m=+0.040114802 container create 82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_wescoff, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:56:49 compute-0 systemd[1]: Started libpod-conmon-82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050.scope.
Dec 13 03:56:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:56:49 compute-0 podman[180233]: 2025-12-13 03:56:49.132985393 +0000 UTC m=+0.118970018 container init 82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 03:56:49 compute-0 podman[180233]: 2025-12-13 03:56:49.038321328 +0000 UTC m=+0.024305953 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:49 compute-0 podman[180233]: 2025-12-13 03:56:49.138981276 +0000 UTC m=+0.124965881 container start 82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_wescoff, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:56:49 compute-0 podman[180233]: 2025-12-13 03:56:49.142860002 +0000 UTC m=+0.128844607 container attach 82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:56:49 compute-0 festive_wescoff[180249]: 167 167
Dec 13 03:56:49 compute-0 systemd[1]: libpod-82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050.scope: Deactivated successfully.
Dec 13 03:56:49 compute-0 conmon[180249]: conmon 82d8f9117de3c5a208ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050.scope/container/memory.events
Dec 13 03:56:49 compute-0 podman[180233]: 2025-12-13 03:56:49.145989587 +0000 UTC m=+0.131974192 container died 82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_wescoff, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 03:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-805d95626f2ee8e39d151868dd73778cac7099f49b4d0df2ef52c6da1cfebf79-merged.mount: Deactivated successfully.
Dec 13 03:56:49 compute-0 podman[180233]: 2025-12-13 03:56:49.186454828 +0000 UTC m=+0.172439433 container remove 82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_wescoff, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:56:49 compute-0 systemd[1]: libpod-conmon-82d8f9117de3c5a208abe9059e6778a2dbe87f49dc3f6ce03a3dcaa5b154f050.scope: Deactivated successfully.
Dec 13 03:56:49 compute-0 podman[180271]: 2025-12-13 03:56:49.350604985 +0000 UTC m=+0.042654482 container create 78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 03:56:49 compute-0 systemd[1]: Started libpod-conmon-78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce.scope.
Dec 13 03:56:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:56:49 compute-0 podman[180271]: 2025-12-13 03:56:49.331968907 +0000 UTC m=+0.024018404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b49b36c511a7cdccf6749648c3250d9006395c7599c154c8766de29724507bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b49b36c511a7cdccf6749648c3250d9006395c7599c154c8766de29724507bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b49b36c511a7cdccf6749648c3250d9006395c7599c154c8766de29724507bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b49b36c511a7cdccf6749648c3250d9006395c7599c154c8766de29724507bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:49 compute-0 podman[180271]: 2025-12-13 03:56:49.436590025 +0000 UTC m=+0.128639542 container init 78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:56:49 compute-0 podman[180271]: 2025-12-13 03:56:49.445806435 +0000 UTC m=+0.137855922 container start 78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:56:49 compute-0 podman[180271]: 2025-12-13 03:56:49.449408163 +0000 UTC m=+0.141457680 container attach 78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hertz, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:56:49 compute-0 modest_hertz[180288]: {
Dec 13 03:56:49 compute-0 modest_hertz[180288]:     "0": [
Dec 13 03:56:49 compute-0 modest_hertz[180288]:         {
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "devices": [
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "/dev/loop3"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             ],
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_name": "ceph_lv0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_size": "21470642176",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "name": "ceph_lv0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "tags": {
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cluster_name": "ceph",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.crush_device_class": "",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.encrypted": "0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.objectstore": "bluestore",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osd_id": "0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.type": "block",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.vdo": "0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.with_tpm": "0"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             },
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "type": "block",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "vg_name": "ceph_vg0"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:         }
Dec 13 03:56:49 compute-0 modest_hertz[180288]:     ],
Dec 13 03:56:49 compute-0 modest_hertz[180288]:     "1": [
Dec 13 03:56:49 compute-0 modest_hertz[180288]:         {
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "devices": [
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "/dev/loop4"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             ],
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_name": "ceph_lv1",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_size": "21470642176",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "name": "ceph_lv1",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "tags": {
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cluster_name": "ceph",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.crush_device_class": "",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.encrypted": "0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.objectstore": "bluestore",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osd_id": "1",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.type": "block",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.vdo": "0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.with_tpm": "0"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             },
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "type": "block",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "vg_name": "ceph_vg1"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:         }
Dec 13 03:56:49 compute-0 modest_hertz[180288]:     ],
Dec 13 03:56:49 compute-0 modest_hertz[180288]:     "2": [
Dec 13 03:56:49 compute-0 modest_hertz[180288]:         {
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "devices": [
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "/dev/loop5"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             ],
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_name": "ceph_lv2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_size": "21470642176",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "name": "ceph_lv2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "tags": {
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.cluster_name": "ceph",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.crush_device_class": "",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.encrypted": "0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.objectstore": "bluestore",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osd_id": "2",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.type": "block",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.vdo": "0",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:                 "ceph.with_tpm": "0"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             },
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "type": "block",
Dec 13 03:56:49 compute-0 modest_hertz[180288]:             "vg_name": "ceph_vg2"
Dec 13 03:56:49 compute-0 modest_hertz[180288]:         }
Dec 13 03:56:49 compute-0 modest_hertz[180288]:     ]
Dec 13 03:56:49 compute-0 modest_hertz[180288]: }
Dec 13 03:56:49 compute-0 systemd[1]: libpod-78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce.scope: Deactivated successfully.
Dec 13 03:56:49 compute-0 podman[180271]: 2025-12-13 03:56:49.765442002 +0000 UTC m=+0.457491499 container died 78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 03:56:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b49b36c511a7cdccf6749648c3250d9006395c7599c154c8766de29724507bb-merged.mount: Deactivated successfully.
Dec 13 03:56:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:49 compute-0 podman[180271]: 2025-12-13 03:56:49.814737684 +0000 UTC m=+0.506787181 container remove 78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_hertz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 03:56:49 compute-0 systemd[1]: libpod-conmon-78ee2729c808beed42fe058995daaca0f418cefb229050121cc59558109f3cce.scope: Deactivated successfully.
Dec 13 03:56:49 compute-0 sudo[180196]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:49 compute-0 sudo[180310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:56:49 compute-0 sudo[180310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:49 compute-0 sudo[180310]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:49 compute-0 ceph-mon[75071]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:50 compute-0 sudo[180335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:56:50 compute-0 sudo[180335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:50 compute-0 podman[180372]: 2025-12-13 03:56:50.311220094 +0000 UTC m=+0.039375073 container create 4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_murdock, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:56:50 compute-0 systemd[1]: Started libpod-conmon-4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1.scope.
Dec 13 03:56:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:56:50 compute-0 podman[180372]: 2025-12-13 03:56:50.377721143 +0000 UTC m=+0.105876152 container init 4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 03:56:50 compute-0 podman[180372]: 2025-12-13 03:56:50.385294119 +0000 UTC m=+0.113449098 container start 4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:56:50 compute-0 podman[180372]: 2025-12-13 03:56:50.389121244 +0000 UTC m=+0.117276223 container attach 4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:56:50 compute-0 inspiring_murdock[180388]: 167 167
Dec 13 03:56:50 compute-0 podman[180372]: 2025-12-13 03:56:50.294805487 +0000 UTC m=+0.022960476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:50 compute-0 systemd[1]: libpod-4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1.scope: Deactivated successfully.
Dec 13 03:56:50 compute-0 podman[180372]: 2025-12-13 03:56:50.390963813 +0000 UTC m=+0.119118792 container died 4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:56:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-a47ae2ba30cad1034f581183f49e35224f91560f54a13a0d2b004c4504031ef6-merged.mount: Deactivated successfully.
Dec 13 03:56:50 compute-0 podman[180372]: 2025-12-13 03:56:50.438647141 +0000 UTC m=+0.166802120 container remove 4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:56:50 compute-0 systemd[1]: libpod-conmon-4587d6b7403c80b8c9204bd8b7953921e90777ed44da48248c81155203f961f1.scope: Deactivated successfully.
Dec 13 03:56:50 compute-0 podman[180412]: 2025-12-13 03:56:50.625179457 +0000 UTC m=+0.042055156 container create 14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_keller, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:56:50 compute-0 systemd[1]: Started libpod-conmon-14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222.scope.
Dec 13 03:56:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3493bd9fdda4e00bbf18f6b3b1ea6b673ba53d872e049f1d7e3a3a81a6777e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3493bd9fdda4e00bbf18f6b3b1ea6b673ba53d872e049f1d7e3a3a81a6777e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3493bd9fdda4e00bbf18f6b3b1ea6b673ba53d872e049f1d7e3a3a81a6777e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3493bd9fdda4e00bbf18f6b3b1ea6b673ba53d872e049f1d7e3a3a81a6777e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:56:50 compute-0 podman[180412]: 2025-12-13 03:56:50.70174517 +0000 UTC m=+0.118620889 container init 14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:56:50 compute-0 podman[180412]: 2025-12-13 03:56:50.605207443 +0000 UTC m=+0.022083162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:56:50 compute-0 podman[180412]: 2025-12-13 03:56:50.708222257 +0000 UTC m=+0.125097956 container start 14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_keller, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:56:50 compute-0 podman[180412]: 2025-12-13 03:56:50.711428814 +0000 UTC m=+0.128304513 container attach 14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 03:56:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:51 compute-0 lvm[180507]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:56:51 compute-0 lvm[180508]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:56:51 compute-0 lvm[180507]: VG ceph_vg0 finished
Dec 13 03:56:51 compute-0 lvm[180508]: VG ceph_vg1 finished
Dec 13 03:56:51 compute-0 lvm[180510]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:56:51 compute-0 lvm[180510]: VG ceph_vg2 finished
Dec 13 03:56:51 compute-0 naughty_keller[180429]: {}
Dec 13 03:56:51 compute-0 systemd[1]: libpod-14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222.scope: Deactivated successfully.
Dec 13 03:56:51 compute-0 systemd[1]: libpod-14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222.scope: Consumed 1.440s CPU time.
Dec 13 03:56:51 compute-0 podman[180412]: 2025-12-13 03:56:51.568506425 +0000 UTC m=+0.985382134 container died 14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_keller, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 03:56:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-3493bd9fdda4e00bbf18f6b3b1ea6b673ba53d872e049f1d7e3a3a81a6777e5e-merged.mount: Deactivated successfully.
Dec 13 03:56:51 compute-0 podman[180412]: 2025-12-13 03:56:51.615901375 +0000 UTC m=+1.032777074 container remove 14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_keller, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 03:56:51 compute-0 systemd[1]: libpod-conmon-14b321d61a867af488089a9e48ddc30ac7df329faadb1f03958140a677b52222.scope: Deactivated successfully.
Dec 13 03:56:51 compute-0 sudo[180335]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:56:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:56:51 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:51 compute-0 sudo[180523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:56:51 compute-0 sudo[180523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:56:51 compute-0 sudo[180523]: pam_unix(sudo:session): session closed for user root
Dec 13 03:56:52 compute-0 ceph-mon[75071]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:52 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:56:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:54 compute-0 ceph-mon[75071]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:56:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:56 compute-0 ceph-mon[75071]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:56 compute-0 kernel: SELinux:  Converting 2770 SID table entries...
Dec 13 03:56:56 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Dec 13 03:56:56 compute-0 kernel: SELinux:  policy capability open_perms=1
Dec 13 03:56:56 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Dec 13 03:56:56 compute-0 kernel: SELinux:  policy capability always_check_network=0
Dec 13 03:56:56 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec 13 03:56:56 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec 13 03:56:56 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec 13 03:56:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:57 compute-0 groupadd[180561]: group added to /etc/group: name=dnsmasq, GID=991
Dec 13 03:56:57 compute-0 groupadd[180561]: group added to /etc/gshadow: name=dnsmasq
Dec 13 03:56:57 compute-0 groupadd[180561]: new group: name=dnsmasq, GID=991
Dec 13 03:56:57 compute-0 useradd[180568]: new user: name=dnsmasq, UID=991, GID=991, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Dec 13 03:56:58 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Dec 13 03:56:58 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec 13 03:56:58 compute-0 dbus-broker-launch[748]: Noticed file-system modification, trigger reload.
Dec 13 03:56:58 compute-0 ceph-mon[75071]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:56:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:00 compute-0 ceph-mon[75071]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:00 compute-0 groupadd[180582]: group added to /etc/group: name=clevis, GID=990
Dec 13 03:57:00 compute-0 groupadd[180582]: group added to /etc/gshadow: name=clevis
Dec 13 03:57:00 compute-0 groupadd[180582]: new group: name=clevis, GID=990
Dec 13 03:57:00 compute-0 useradd[180601]: new user: name=clevis, UID=990, GID=990, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Dec 13 03:57:00 compute-0 usermod[180624]: add 'clevis' to group 'tss'
Dec 13 03:57:00 compute-0 usermod[180624]: add 'clevis' to shadow group 'tss'
Dec 13 03:57:00 compute-0 podman[180581]: 2025-12-13 03:57:00.725284697 +0000 UTC m=+0.099651252 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 03:57:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:01 compute-0 ceph-mon[75071]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:03 compute-0 polkitd[43382]: Reloading rules
Dec 13 03:57:03 compute-0 polkitd[43382]: Collecting garbage unconditionally...
Dec 13 03:57:03 compute-0 polkitd[43382]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 03:57:03 compute-0 polkitd[43382]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 03:57:03 compute-0 polkitd[43382]: Finished loading, compiling and executing 3 rules
Dec 13 03:57:03 compute-0 polkitd[43382]: Reloading rules
Dec 13 03:57:03 compute-0 polkitd[43382]: Collecting garbage unconditionally...
Dec 13 03:57:03 compute-0 polkitd[43382]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 03:57:03 compute-0 polkitd[43382]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 03:57:03 compute-0 polkitd[43382]: Finished loading, compiling and executing 3 rules
Dec 13 03:57:04 compute-0 ceph-mon[75071]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:06 compute-0 ceph-mon[75071]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:06 compute-0 podman[180813]: 2025-12-13 03:57:06.872146318 +0000 UTC m=+0.056119718 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 03:57:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:08 compute-0 ceph-mon[75071]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:08 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Dec 13 03:57:08 compute-0 sshd[1005]: Received signal 15; terminating.
Dec 13 03:57:08 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Dec 13 03:57:08 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Dec 13 03:57:08 compute-0 systemd[1]: sshd.service: Consumed 2.218s CPU time, read 32.0K from disk, written 0B to disk.
Dec 13 03:57:08 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Dec 13 03:57:08 compute-0 systemd[1]: Stopping sshd-keygen.target...
Dec 13 03:57:08 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 03:57:08 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 03:57:08 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec 13 03:57:08 compute-0 systemd[1]: Reached target sshd-keygen.target.
Dec 13 03:57:08 compute-0 systemd[1]: Starting OpenSSH server daemon...
Dec 13 03:57:08 compute-0 sshd[181448]: Server listening on 0.0.0.0 port 22.
Dec 13 03:57:08 compute-0 sshd[181448]: Server listening on :: port 22.
Dec 13 03:57:08 compute-0 systemd[1]: Started OpenSSH server daemon.
Dec 13 03:57:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:10 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 03:57:10 compute-0 ceph-mon[75071]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:10 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 03:57:10 compute-0 systemd[1]: Reloading.
Dec 13 03:57:10 compute-0 systemd-rc-local-generator[181702]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:10 compute-0 systemd-sysv-generator[181708]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:10 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 03:57:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:12 compute-0 ceph-mon[75071]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:13 compute-0 sudo[161880]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:13 compute-0 ceph-mon[75071]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:14 compute-0 sudo[186309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofydamkpwkojahlwmwpklwcvdqywsgyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598234.0480084-336-121030382716610/AnsiballZ_systemd.py'
Dec 13 03:57:14 compute-0 sudo[186309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:14 compute-0 python3.9[186338]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:57:15 compute-0 systemd[1]: Reloading.
Dec 13 03:57:15 compute-0 systemd-rc-local-generator[186758]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:15 compute-0 systemd-sysv-generator[186761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:15 compute-0 sudo[186309]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:15 compute-0 sudo[187591]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woxratamnrhdpptmbuvjikbukyyiyoln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598235.5431988-336-137473534171860/AnsiballZ_systemd.py'
Dec 13 03:57:15 compute-0 sudo[187591]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:16 compute-0 ceph-mon[75071]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:16 compute-0 python3.9[187604]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:57:16 compute-0 systemd[1]: Reloading.
Dec 13 03:57:16 compute-0 systemd-rc-local-generator[188070]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:16 compute-0 systemd-sysv-generator[188075]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:16 compute-0 sudo[187591]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:16 compute-0 sudo[188797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uncxayyingxyhegmdhgktyotdzuvhswr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598236.7093666-336-250582853161793/AnsiballZ_systemd.py'
Dec 13 03:57:16 compute-0 sudo[188797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:17 compute-0 python3.9[188815]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:57:17 compute-0 systemd[1]: Reloading.
Dec 13 03:57:17 compute-0 systemd-rc-local-generator[189280]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:17 compute-0 systemd-sysv-generator[189290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:17 compute-0 sudo[188797]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:18 compute-0 ceph-mon[75071]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:18 compute-0 sudo[190044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meqxlnjivnhctkbqefhgtaqnpfrxfvbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598237.8630831-336-191089365817612/AnsiballZ_systemd.py'
Dec 13 03:57:18 compute-0 sudo[190044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:18 compute-0 python3.9[190062]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:57:18 compute-0 systemd[1]: Reloading.
Dec 13 03:57:18 compute-0 systemd-sysv-generator[190512]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:18 compute-0 systemd-rc-local-generator[190504]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:18 compute-0 sudo[190044]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:19 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 03:57:19 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 03:57:19 compute-0 systemd[1]: man-db-cache-update.service: Consumed 11.140s CPU time.
Dec 13 03:57:19 compute-0 systemd[1]: run-r5f8cbe0931e948bd81b30524e345d631.service: Deactivated successfully.
Dec 13 03:57:19 compute-0 sudo[190996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-govcrbrxyozagqpfvfrstemdcmmceadc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598239.0445704-365-268314905136066/AnsiballZ_systemd.py'
Dec 13 03:57:19 compute-0 sudo[190996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:19 compute-0 python3.9[190998]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:19 compute-0 systemd[1]: Reloading.
Dec 13 03:57:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:19 compute-0 systemd-rc-local-generator[191028]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:19 compute-0 systemd-sysv-generator[191033]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:20 compute-0 ceph-mon[75071]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:20 compute-0 sudo[190996]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:20 compute-0 sudo[191186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsuwzmavpywhaovkljpltdijultxsxgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598240.2599897-365-99574710722555/AnsiballZ_systemd.py'
Dec 13 03:57:20 compute-0 sudo[191186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:20 compute-0 python3.9[191188]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:20 compute-0 systemd[1]: Reloading.
Dec 13 03:57:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:21 compute-0 systemd-sysv-generator[191224]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:21 compute-0 systemd-rc-local-generator[191220]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:21 compute-0 sudo[191186]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:21 compute-0 sudo[191377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqjvpzxgbppqlboawmaeodsewbjrlrxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598241.4257894-365-160761737597085/AnsiballZ_systemd.py'
Dec 13 03:57:21 compute-0 sudo[191377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:22 compute-0 python3.9[191379]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:22 compute-0 ceph-mon[75071]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:22 compute-0 systemd[1]: Reloading.
Dec 13 03:57:22 compute-0 systemd-rc-local-generator[191409]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:22 compute-0 systemd-sysv-generator[191414]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:22 compute-0 sudo[191377]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:22 compute-0 sudo[191567]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyzsydegwtsbfdidetgfqxeuovoqgqdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598242.5347462-365-216903353241296/AnsiballZ_systemd.py'
Dec 13 03:57:22 compute-0 sudo[191567]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:23 compute-0 python3.9[191569]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:23 compute-0 sudo[191567]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:23 compute-0 sudo[191722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbyeisnqzbusmoprkfjwsycnfbqnqiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598243.2888489-365-254723035695725/AnsiballZ_systemd.py'
Dec 13 03:57:23 compute-0 sudo[191722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:23 compute-0 python3.9[191724]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:23 compute-0 systemd[1]: Reloading.
Dec 13 03:57:23 compute-0 systemd-rc-local-generator[191757]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:23 compute-0 systemd-sysv-generator[191760]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:24 compute-0 ceph-mon[75071]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:24 compute-0 sudo[191722]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:24 compute-0 sudo[191913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnyhqzyqxrecmntbgdvzuhaokvvrangn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598244.3647804-401-149876879326214/AnsiballZ_systemd.py'
Dec 13 03:57:24 compute-0 sudo[191913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:24 compute-0 python3.9[191915]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec 13 03:57:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:24 compute-0 systemd[1]: Reloading.
Dec 13 03:57:25 compute-0 systemd-rc-local-generator[191945]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:57:25 compute-0 systemd-sysv-generator[191950]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:57:25 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Dec 13 03:57:25 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec 13 03:57:25 compute-0 sudo[191913]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:25 compute-0 sudo[192106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwwldnwrncuhvyvvvfrayeneuxtiryee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598245.5532253-409-196735323499940/AnsiballZ_systemd.py'
Dec 13 03:57:25 compute-0 sudo[192106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:26 compute-0 ceph-mon[75071]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:26 compute-0 python3.9[192108]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:26 compute-0 sudo[192106]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:26 compute-0 sudo[192261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgzfpzmpzhzkylmqcgkhqgaskhldhqxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598246.3647466-409-78321276965907/AnsiballZ_systemd.py'
Dec 13 03:57:26 compute-0 sudo[192261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:26 compute-0 python3.9[192263]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:27 compute-0 sudo[192261]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:27 compute-0 sudo[192416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhhyuthdsvhinevdodblveezuzfqoeqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598247.1379533-409-212919306003567/AnsiballZ_systemd.py'
Dec 13 03:57:27 compute-0 sudo[192416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:27 compute-0 python3.9[192418]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:27 compute-0 sudo[192416]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:28 compute-0 ceph-mon[75071]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:28 compute-0 sudo[192571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yktwacgjlxnbsnammhacspepxuyqrzgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598247.9339554-409-7929133147867/AnsiballZ_systemd.py'
Dec 13 03:57:28 compute-0 sudo[192571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:28 compute-0 python3.9[192573]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:29 compute-0 sudo[192571]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:29 compute-0 sudo[192726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uetcsklrtqpsexmicvbkhbasljztjgcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598249.6977923-409-195153245716499/AnsiballZ_systemd.py'
Dec 13 03:57:29 compute-0 sudo[192726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:30 compute-0 ceph-mon[75071]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:30 compute-0 python3.9[192728]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:30 compute-0 sudo[192726]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:30 compute-0 sudo[192881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sapkwpiltmwtxpawextdzpdbefuhoaac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598250.4642053-409-272796429048980/AnsiballZ_systemd.py'
Dec 13 03:57:30 compute-0 sudo[192881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:30 compute-0 podman[192884]: 2025-12-13 03:57:30.968808657 +0000 UTC m=+0.109679058 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:31 compute-0 python3.9[192883]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:31 compute-0 sudo[192881]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:31 compute-0 sudo[193062]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koankftjlihuymiahujndexdvismxgta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598251.2404037-409-40358587409974/AnsiballZ_systemd.py'
Dec 13 03:57:31 compute-0 sudo[193062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:31 compute-0 python3.9[193064]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:31 compute-0 sudo[193062]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:32 compute-0 ceph-mon[75071]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:32 compute-0 sudo[193217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txjfpxnuzlsesjorzmlmqampqmifncen ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598252.0528514-409-255122000361790/AnsiballZ_systemd.py'
Dec 13 03:57:32 compute-0 sudo[193217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:32 compute-0 python3.9[193219]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:32 compute-0 sudo[193217]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:33 compute-0 sudo[193372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-figttwlnxgmrfhxyownzacupopowsqfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598252.8256278-409-243744718928122/AnsiballZ_systemd.py'
Dec 13 03:57:33 compute-0 sudo[193372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:33 compute-0 python3.9[193374]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:33 compute-0 sudo[193372]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:33 compute-0 sudo[193527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azfiwufdtxhpognkgavhzkuibraxjnyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598253.6315606-409-144621525293358/AnsiballZ_systemd.py'
Dec 13 03:57:33 compute-0 sudo[193527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:34 compute-0 ceph-mon[75071]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:34 compute-0 python3.9[193529]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:34 compute-0 sudo[193527]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:34 compute-0 sudo[193682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rngoadokvktcwuiebxpwulhwoxxiqvws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598254.4553065-409-246584723729519/AnsiballZ_systemd.py'
Dec 13 03:57:34 compute-0 sudo[193682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:35 compute-0 python3.9[193684]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:57:35.069 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:57:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:57:35.070 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:57:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:57:35.070 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:57:35 compute-0 sudo[193682]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:35 compute-0 sudo[193837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cacvybopsqrskyqeffnsmdknbcghqfbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598255.2613926-409-191684253268517/AnsiballZ_systemd.py'
Dec 13 03:57:35 compute-0 sudo[193837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:35 compute-0 python3.9[193839]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:35 compute-0 sudo[193837]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:36 compute-0 ceph-mon[75071]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:36 compute-0 sudo[193992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfvsgvnobcisgfcmcnunhvpinvlxowk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598256.0775194-409-87230004869171/AnsiballZ_systemd.py'
Dec 13 03:57:36 compute-0 sudo[193992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:36 compute-0 python3.9[193994]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:36 compute-0 sudo[193992]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:37 compute-0 sudo[194158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcxftxydaibmuxgvzcjdfvfgewwsceg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598256.937541-409-153150709446513/AnsiballZ_systemd.py'
Dec 13 03:57:37 compute-0 sudo[194158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:37 compute-0 podman[194121]: 2025-12-13 03:57:37.277095951 +0000 UTC m=+0.067977950 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:57:37 compute-0 python3.9[194165]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec 13 03:57:37 compute-0 sudo[194158]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:38 compute-0 ceph-mon[75071]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:38 compute-0 sudo[194321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qusnfwkidlelalqqxbxvlaucvgeqczda ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598257.9995203-511-175550301676984/AnsiballZ_file.py'
Dec 13 03:57:38 compute-0 sudo[194321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:38 compute-0 python3.9[194323]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:57:38 compute-0 sudo[194321]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:38 compute-0 sudo[194473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ephtfaloyhqxysbitezjtqpicakrlkln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598258.6337986-511-141488391212622/AnsiballZ_file.py'
Dec 13 03:57:38 compute-0 sudo[194473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:39 compute-0 python3.9[194475]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:57:39 compute-0 sudo[194473]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:39 compute-0 sudo[194625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjrfykvbdjzzplucvyfdfrnquzikpsoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598259.2307272-511-85800155329040/AnsiballZ_file.py'
Dec 13 03:57:39 compute-0 sudo[194625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:39 compute-0 ceph-mon[75071]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:39 compute-0 python3.9[194627]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:57:39 compute-0 sudo[194625]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:40 compute-0 sudo[194777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygztijbarlxqrywljfrzzaweqapnntuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598259.8356953-511-128987962962432/AnsiballZ_file.py'
Dec 13 03:57:40 compute-0 sudo[194777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:40 compute-0 python3.9[194779]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:57:40 compute-0 sudo[194777]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:57:40
Dec 13 03:57:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:57:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:57:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'vms', 'default.rgw.log', '.rgw.root', 'volumes']
Dec 13 03:57:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:57:40 compute-0 sudo[194929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoeycghdzkvqxvddmqpllyehsvpzczun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598260.4673004-511-219733492534944/AnsiballZ_file.py'
Dec 13 03:57:40 compute-0 sudo[194929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:40 compute-0 python3.9[194931]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:57:40 compute-0 sudo[194929]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:41 compute-0 sudo[195081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-squzrbflnxfnibrixmcjwjdkonwezqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598261.0700014-511-168294628620476/AnsiballZ_file.py'
Dec 13 03:57:41 compute-0 sudo[195081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:41 compute-0 python3.9[195083]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:57:41 compute-0 sudo[195081]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:42 compute-0 ceph-mon[75071]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:42 compute-0 sudo[195233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvwpouavbqeqbornvedikrnklgtpbmvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598261.744181-554-125269978680887/AnsiballZ_stat.py'
Dec 13 03:57:42 compute-0 sudo[195233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:57:42 compute-0 python3.9[195235]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:42 compute-0 sudo[195233]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:57:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:42 compute-0 sudo[195358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxkxjcunptnhngrtuhrfnmrdkxbtrizl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598261.744181-554-125269978680887/AnsiballZ_copy.py'
Dec 13 03:57:42 compute-0 sudo[195358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:43 compute-0 python3.9[195360]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598261.744181-554-125269978680887/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:43 compute-0 sudo[195358]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:43 compute-0 sudo[195510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbwtlazgnymoegcziaxbqkwkprtbdoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598263.3262262-554-90564403340364/AnsiballZ_stat.py'
Dec 13 03:57:43 compute-0 sudo[195510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:43 compute-0 python3.9[195512]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:43 compute-0 sudo[195510]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:44 compute-0 ceph-mon[75071]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:44 compute-0 sudo[195635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqzmudkhewvbpppbzwoznfxeyfnzpqut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598263.3262262-554-90564403340364/AnsiballZ_copy.py'
Dec 13 03:57:44 compute-0 sudo[195635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:44 compute-0 python3.9[195637]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598263.3262262-554-90564403340364/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:44 compute-0 sudo[195635]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:44 compute-0 sudo[195787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-deiamacrhjanffadfsyvktxfngdqoriy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598264.5479295-554-176993008532094/AnsiballZ_stat.py'
Dec 13 03:57:44 compute-0 sudo[195787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:45 compute-0 python3.9[195789]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:45 compute-0 sudo[195787]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:45 compute-0 sudo[195912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nprktrzrowlyazuufrbzvlydcsvqbugj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598264.5479295-554-176993008532094/AnsiballZ_copy.py'
Dec 13 03:57:45 compute-0 sudo[195912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:45 compute-0 python3.9[195914]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598264.5479295-554-176993008532094/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:45 compute-0 sudo[195912]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:46 compute-0 ceph-mon[75071]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:46 compute-0 sudo[196064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhvfvdeupgbrhwroiczjmrdpoadoohvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598265.8507304-554-190632189962004/AnsiballZ_stat.py'
Dec 13 03:57:46 compute-0 sudo[196064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:46 compute-0 python3.9[196066]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:46 compute-0 sudo[196064]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:46 compute-0 sudo[196189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgtsxilpalfdattqtdxgmbmajkvnbzvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598265.8507304-554-190632189962004/AnsiballZ_copy.py'
Dec 13 03:57:46 compute-0 sudo[196189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:47 compute-0 python3.9[196191]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598265.8507304-554-190632189962004/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:47 compute-0 sudo[196189]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:47 compute-0 sudo[196341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcuuoznfppousqbxtmthlpvbgzusshva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598267.199002-554-239433731114097/AnsiballZ_stat.py'
Dec 13 03:57:47 compute-0 sudo[196341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:47 compute-0 python3.9[196343]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:47 compute-0 sudo[196341]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:48 compute-0 ceph-mon[75071]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:48 compute-0 sudo[196466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsrfzgbpxdnixwmpvskbbapjqnhmetre ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598267.199002-554-239433731114097/AnsiballZ_copy.py'
Dec 13 03:57:48 compute-0 sudo[196466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:48 compute-0 python3.9[196468]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598267.199002-554-239433731114097/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:48 compute-0 sudo[196466]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:48 compute-0 sudo[196618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xigouhnxathcmdfqgwqpwlltmutdnfoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598268.5209756-554-90927321852630/AnsiballZ_stat.py'
Dec 13 03:57:48 compute-0 sudo[196618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:49 compute-0 python3.9[196620]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:49 compute-0 sudo[196618]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:49 compute-0 sudo[196743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjvyklbantspitdwxsuctyiufrgpeupk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598268.5209756-554-90927321852630/AnsiballZ_copy.py'
Dec 13 03:57:49 compute-0 sudo[196743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:49 compute-0 python3.9[196745]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598268.5209756-554-90927321852630/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:49 compute-0 sudo[196743]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:50 compute-0 ceph-mon[75071]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.069088) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598270069156, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2040, "num_deletes": 251, "total_data_size": 3607794, "memory_usage": 3647376, "flush_reason": "Manual Compaction"}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598270100500, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3531617, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9783, "largest_seqno": 11822, "table_properties": {"data_size": 3522278, "index_size": 5961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17783, "raw_average_key_size": 19, "raw_value_size": 3503843, "raw_average_value_size": 3833, "num_data_blocks": 271, "num_entries": 914, "num_filter_entries": 914, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765598032, "oldest_key_time": 1765598032, "file_creation_time": 1765598270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 31478 microseconds, and 9499 cpu microseconds.
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.100567) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3531617 bytes OK
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.100592) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.102391) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.102405) EVENT_LOG_v1 {"time_micros": 1765598270102401, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.102422) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3599287, prev total WAL file size 3599287, number of live WAL files 2.
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.103387) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3448KB)], [26(6318KB)]
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598270103439, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 10002145, "oldest_snapshot_seqno": -1}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3747 keys, 8358448 bytes, temperature: kUnknown
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598270173993, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8358448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8329388, "index_size": 18582, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9413, "raw_key_size": 90041, "raw_average_key_size": 24, "raw_value_size": 8257787, "raw_average_value_size": 2203, "num_data_blocks": 805, "num_entries": 3747, "num_filter_entries": 3747, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765598270, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.174307) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8358448 bytes
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.175903) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.4 rd, 118.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 6.2 +0.0 blob) out(8.0 +0.0 blob), read-write-amplify(5.2) write-amplify(2.4) OK, records in: 4261, records dropped: 514 output_compression: NoCompression
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.175926) EVENT_LOG_v1 {"time_micros": 1765598270175914, "job": 10, "event": "compaction_finished", "compaction_time_micros": 70720, "compaction_time_cpu_micros": 22562, "output_level": 6, "num_output_files": 1, "total_output_size": 8358448, "num_input_records": 4261, "num_output_records": 3747, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598270176912, "job": 10, "event": "table_file_deletion", "file_number": 28}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598270178391, "job": 10, "event": "table_file_deletion", "file_number": 26}
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.103274) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.178470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.178476) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.178478) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.178480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:50 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-03:57:50.178482) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 03:57:50 compute-0 sudo[196895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgzdiidzuugnguwlgmvomieogoqouygr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598269.827283-554-77046649662495/AnsiballZ_stat.py'
Dec 13 03:57:50 compute-0 sudo[196895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:50 compute-0 python3.9[196897]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:50 compute-0 sudo[196895]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:50 compute-0 sudo[197018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kocfenkifkkabspvlygoispakdfrvmye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598269.827283-554-77046649662495/AnsiballZ_copy.py'
Dec 13 03:57:50 compute-0 sudo[197018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:50 compute-0 python3.9[197020]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598269.827283-554-77046649662495/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:51 compute-0 sudo[197018]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:51 compute-0 sudo[197170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krrkbvudvlusqdoapohdqlvzzlyywgxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598271.1273894-554-103143992387275/AnsiballZ_stat.py'
Dec 13 03:57:51 compute-0 sudo[197170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:51 compute-0 python3.9[197172]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:57:51 compute-0 sudo[197170]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:51 compute-0 sudo[197240]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:57:51 compute-0 sudo[197240]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:51 compute-0 sudo[197240]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:51 compute-0 sudo[197283]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:57:51 compute-0 sudo[197283]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:51 compute-0 sudo[197345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ciwqriokoedxyekmuygijrtkxqttkqjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598271.1273894-554-103143992387275/AnsiballZ_copy.py'
Dec 13 03:57:51 compute-0 sudo[197345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:52 compute-0 ceph-mon[75071]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:52 compute-0 python3.9[197347]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765598271.1273894-554-103143992387275/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:57:52 compute-0 sudo[197345]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:57:52 compute-0 sudo[197283]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:57:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:57:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:57:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:57:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:57:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:57:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:57:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:57:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:57:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:57:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:57:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:57:52 compute-0 sudo[197481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:57:52 compute-0 sudo[197481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:52 compute-0 sudo[197481]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:52 compute-0 sudo[197573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqbubfabafjlqrprnczfkvheshvhipak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598272.3579195-667-56433263167488/AnsiballZ_command.py'
Dec 13 03:57:52 compute-0 sudo[197573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:52 compute-0 sudo[197535]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:57:52 compute-0 sudo[197535]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:52 compute-0 python3.9[197578]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec 13 03:57:52 compute-0 sudo[197573]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:52 compute-0 podman[197595]: 2025-12-13 03:57:52.892749879 +0000 UTC m=+0.040290361 container create 4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_feistel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:57:52 compute-0 systemd[1]: Started libpod-conmon-4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332.scope.
Dec 13 03:57:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:52 compute-0 podman[197595]: 2025-12-13 03:57:52.875127462 +0000 UTC m=+0.022667964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:57:52 compute-0 podman[197595]: 2025-12-13 03:57:52.988706015 +0000 UTC m=+0.136246507 container init 4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_feistel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 03:57:52 compute-0 podman[197595]: 2025-12-13 03:57:52.996511526 +0000 UTC m=+0.144051998 container start 4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_feistel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 03:57:53 compute-0 podman[197595]: 2025-12-13 03:57:52.999996401 +0000 UTC m=+0.147536913 container attach 4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_feistel, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:57:53 compute-0 jovial_feistel[197635]: 167 167
Dec 13 03:57:53 compute-0 systemd[1]: libpod-4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332.scope: Deactivated successfully.
Dec 13 03:57:53 compute-0 podman[197595]: 2025-12-13 03:57:53.003387382 +0000 UTC m=+0.150927874 container died 4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_feistel, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-89b6818b133e3fc7c966bb8337bb0ae280be867182d44bc104f2df320a12682f-merged.mount: Deactivated successfully.
Dec 13 03:57:53 compute-0 podman[197595]: 2025-12-13 03:57:53.060470036 +0000 UTC m=+0.208010518 container remove 4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 03:57:53 compute-0 systemd[1]: libpod-conmon-4e7db5a6d5e00afa03a8f35ac9adf5df325388f2194ab005b3277ec2854f2332.scope: Deactivated successfully.
Dec 13 03:57:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:57:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:57:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:57:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:57:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:57:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:57:53 compute-0 podman[197734]: 2025-12-13 03:57:53.21475065 +0000 UTC m=+0.040055655 container create eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:53 compute-0 systemd[1]: Started libpod-conmon-eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5.scope.
Dec 13 03:57:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d7c2df6b9844e1baaed1407128de5db5fa666182c76a143d1324960cb1f843/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d7c2df6b9844e1baaed1407128de5db5fa666182c76a143d1324960cb1f843/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d7c2df6b9844e1baaed1407128de5db5fa666182c76a143d1324960cb1f843/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d7c2df6b9844e1baaed1407128de5db5fa666182c76a143d1324960cb1f843/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01d7c2df6b9844e1baaed1407128de5db5fa666182c76a143d1324960cb1f843/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:53 compute-0 podman[197734]: 2025-12-13 03:57:53.198561313 +0000 UTC m=+0.023866338 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:53 compute-0 podman[197734]: 2025-12-13 03:57:53.298253819 +0000 UTC m=+0.123558854 container init eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_brown, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:53 compute-0 podman[197734]: 2025-12-13 03:57:53.306835561 +0000 UTC m=+0.132140566 container start eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:57:53 compute-0 podman[197734]: 2025-12-13 03:57:53.31085811 +0000 UTC m=+0.136163135 container attach eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_brown, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:57:53 compute-0 sudo[197804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cplxwrtpaauqrngxenmswacnrixsbhaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598273.022225-676-86531147336861/AnsiballZ_file.py'
Dec 13 03:57:53 compute-0 sudo[197804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:53 compute-0 python3.9[197807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:53 compute-0 sudo[197804]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:53 compute-0 pensive_brown[197774]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:57:53 compute-0 pensive_brown[197774]: --> All data devices are unavailable
Dec 13 03:57:53 compute-0 systemd[1]: libpod-eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5.scope: Deactivated successfully.
Dec 13 03:57:53 compute-0 podman[197734]: 2025-12-13 03:57:53.821611247 +0000 UTC m=+0.646916262 container died eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_brown, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:57:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-01d7c2df6b9844e1baaed1407128de5db5fa666182c76a143d1324960cb1f843-merged.mount: Deactivated successfully.
Dec 13 03:57:53 compute-0 podman[197734]: 2025-12-13 03:57:53.865609858 +0000 UTC m=+0.690914863 container remove eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_brown, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:57:53 compute-0 systemd[1]: libpod-conmon-eb8f7dc6a3fe65981dc775ef530a211df1ac53985eb3fe08a38efb6b83b649b5.scope: Deactivated successfully.
Dec 13 03:57:53 compute-0 sudo[197535]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:53 compute-0 sudo[197984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdmqzohzcfrvehluijsygngtmcbzvwby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598273.6712112-676-264673953031603/AnsiballZ_file.py'
Dec 13 03:57:53 compute-0 sudo[197984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:53 compute-0 sudo[197985]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:57:53 compute-0 sudo[197985]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:53 compute-0 sudo[197985]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:54 compute-0 sudo[198012]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:57:54 compute-0 sudo[198012]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:54 compute-0 ceph-mon[75071]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:54 compute-0 python3.9[197994]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:54 compute-0 sudo[197984]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:54 compute-0 podman[198073]: 2025-12-13 03:57:54.303161024 +0000 UTC m=+0.042810879 container create 82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shtern, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:54 compute-0 systemd[1]: Started libpod-conmon-82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8.scope.
Dec 13 03:57:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:57:54 compute-0 podman[198073]: 2025-12-13 03:57:54.28492979 +0000 UTC m=+0.024579675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:54 compute-0 podman[198073]: 2025-12-13 03:57:54.392481081 +0000 UTC m=+0.132130966 container init 82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:54 compute-0 podman[198073]: 2025-12-13 03:57:54.401962447 +0000 UTC m=+0.141612292 container start 82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shtern, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:57:54 compute-0 podman[198073]: 2025-12-13 03:57:54.405497042 +0000 UTC m=+0.145146917 container attach 82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shtern, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:57:54 compute-0 infallible_shtern[198137]: 167 167
Dec 13 03:57:54 compute-0 systemd[1]: libpod-82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8.scope: Deactivated successfully.
Dec 13 03:57:54 compute-0 podman[198073]: 2025-12-13 03:57:54.408091492 +0000 UTC m=+0.147741357 container died 82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shtern, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 03:57:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ec825f36fb463658c8a3511cee8b00eae8b7547bfbfeb47811f584a64142c8c-merged.mount: Deactivated successfully.
Dec 13 03:57:54 compute-0 podman[198073]: 2025-12-13 03:57:54.44756637 +0000 UTC m=+0.187216225 container remove 82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 03:57:54 compute-0 systemd[1]: libpod-conmon-82c07675c782ab98d8fe313935ff97ee7e00b6283eb042267030c401215984d8.scope: Deactivated successfully.
Dec 13 03:57:54 compute-0 sudo[198235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifxwxhzzishifrgubghoaxhvrhelfdnd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598274.302974-676-266345154195114/AnsiballZ_file.py'
Dec 13 03:57:54 compute-0 sudo[198235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:54 compute-0 podman[198242]: 2025-12-13 03:57:54.619743488 +0000 UTC m=+0.044158835 container create 4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:57:54 compute-0 systemd[1]: Started libpod-conmon-4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2.scope.
Dec 13 03:57:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:57:54 compute-0 podman[198242]: 2025-12-13 03:57:54.601635788 +0000 UTC m=+0.026051155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d2396c1e1893f1d2371bb0a7b534ba670f477da61fd004536ff509eea5bf4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d2396c1e1893f1d2371bb0a7b534ba670f477da61fd004536ff509eea5bf4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d2396c1e1893f1d2371bb0a7b534ba670f477da61fd004536ff509eea5bf4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42d2396c1e1893f1d2371bb0a7b534ba670f477da61fd004536ff509eea5bf4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:54 compute-0 podman[198242]: 2025-12-13 03:57:54.725164061 +0000 UTC m=+0.149579418 container init 4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 03:57:54 compute-0 podman[198242]: 2025-12-13 03:57:54.73179077 +0000 UTC m=+0.156206117 container start 4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:57:54 compute-0 podman[198242]: 2025-12-13 03:57:54.741138522 +0000 UTC m=+0.165553939 container attach 4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 03:57:54 compute-0 python3.9[198241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:54 compute-0 sudo[198235]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:57:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]: {
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:     "0": [
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:         {
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "devices": [
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "/dev/loop3"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             ],
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_name": "ceph_lv0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_size": "21470642176",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "name": "ceph_lv0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "tags": {
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cluster_name": "ceph",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.crush_device_class": "",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.encrypted": "0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.objectstore": "bluestore",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osd_id": "0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.type": "block",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.vdo": "0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.with_tpm": "0"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             },
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "type": "block",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "vg_name": "ceph_vg0"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:         }
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:     ],
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:     "1": [
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:         {
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "devices": [
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "/dev/loop4"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             ],
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_name": "ceph_lv1",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_size": "21470642176",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "name": "ceph_lv1",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "tags": {
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cluster_name": "ceph",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.crush_device_class": "",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.encrypted": "0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.objectstore": "bluestore",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osd_id": "1",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.type": "block",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.vdo": "0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.with_tpm": "0"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             },
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "type": "block",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "vg_name": "ceph_vg1"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:         }
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:     ],
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:     "2": [
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:         {
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "devices": [
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "/dev/loop5"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             ],
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_name": "ceph_lv2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_size": "21470642176",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "name": "ceph_lv2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "tags": {
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.cluster_name": "ceph",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.crush_device_class": "",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.encrypted": "0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.objectstore": "bluestore",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osd_id": "2",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.type": "block",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.vdo": "0",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:                 "ceph.with_tpm": "0"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             },
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "type": "block",
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:             "vg_name": "ceph_vg2"
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:         }
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]:     ]
Dec 13 03:57:55 compute-0 affectionate_ardinghelli[198258]: }
Dec 13 03:57:55 compute-0 systemd[1]: libpod-4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2.scope: Deactivated successfully.
Dec 13 03:57:55 compute-0 podman[198242]: 2025-12-13 03:57:55.063847892 +0000 UTC m=+0.488263239 container died 4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-42d2396c1e1893f1d2371bb0a7b534ba670f477da61fd004536ff509eea5bf4c-merged.mount: Deactivated successfully.
Dec 13 03:57:55 compute-0 podman[198242]: 2025-12-13 03:57:55.112311084 +0000 UTC m=+0.536726431 container remove 4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_ardinghelli, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 03:57:55 compute-0 systemd[1]: libpod-conmon-4b5d47986165c192c0b89d9bd519b9afe997cbdaa7d114416c84895f6e5051e2.scope: Deactivated successfully.
Dec 13 03:57:55 compute-0 sudo[198012]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:55 compute-0 sudo[198407]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:57:55 compute-0 sudo[198452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rabklzxudyhimwvuhjrwyasjomuyplyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598274.9395938-676-247884448578229/AnsiballZ_file.py'
Dec 13 03:57:55 compute-0 sudo[198452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:55 compute-0 sudo[198407]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:55 compute-0 sudo[198407]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:55 compute-0 sudo[198457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:57:55 compute-0 sudo[198457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:55 compute-0 python3.9[198455]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:55 compute-0 sudo[198452]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:55 compute-0 podman[198518]: 2025-12-13 03:57:55.586494711 +0000 UTC m=+0.037763462 container create 3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 03:57:55 compute-0 systemd[1]: Started libpod-conmon-3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225.scope.
Dec 13 03:57:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:57:55 compute-0 podman[198518]: 2025-12-13 03:57:55.65814716 +0000 UTC m=+0.109415941 container init 3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 03:57:55 compute-0 podman[198518]: 2025-12-13 03:57:55.665073657 +0000 UTC m=+0.116342408 container start 3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:57:55 compute-0 podman[198518]: 2025-12-13 03:57:55.57095387 +0000 UTC m=+0.022222641 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:55 compute-0 podman[198518]: 2025-12-13 03:57:55.66851505 +0000 UTC m=+0.119783821 container attach 3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 03:57:55 compute-0 upbeat_haslett[198558]: 167 167
Dec 13 03:57:55 compute-0 systemd[1]: libpod-3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225.scope: Deactivated successfully.
Dec 13 03:57:55 compute-0 podman[198518]: 2025-12-13 03:57:55.672729314 +0000 UTC m=+0.123998075 container died 3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 03:57:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-afdae2426cafc0e340e76db71ce188a2c88de5f1d678174f01a14846be95219e-merged.mount: Deactivated successfully.
Dec 13 03:57:55 compute-0 podman[198518]: 2025-12-13 03:57:55.707367371 +0000 UTC m=+0.158636122 container remove 3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 03:57:55 compute-0 systemd[1]: libpod-conmon-3e3ea45e05914b4df7c21e642aabfcb9003530efa5eba6d71452c0066a795225.scope: Deactivated successfully.
Dec 13 03:57:55 compute-0 sudo[198695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkvnxqwtqnwoiehwnrcjnzmeypgkfmrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598275.5996614-676-130734616091301/AnsiballZ_file.py'
Dec 13 03:57:55 compute-0 sudo[198695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:55 compute-0 podman[198666]: 2025-12-13 03:57:55.88144136 +0000 UTC m=+0.049661634 container create a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 03:57:55 compute-0 systemd[1]: Started libpod-conmon-a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706.scope.
Dec 13 03:57:55 compute-0 podman[198666]: 2025-12-13 03:57:55.855987891 +0000 UTC m=+0.024208215 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:57:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15634d5bba6d1f409fddc0e5b9363dc35d71870b91037109d637b2f3c199b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15634d5bba6d1f409fddc0e5b9363dc35d71870b91037109d637b2f3c199b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15634d5bba6d1f409fddc0e5b9363dc35d71870b91037109d637b2f3c199b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a15634d5bba6d1f409fddc0e5b9363dc35d71870b91037109d637b2f3c199b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:57:55 compute-0 podman[198666]: 2025-12-13 03:57:55.983406759 +0000 UTC m=+0.151627043 container init a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:57:55 compute-0 podman[198666]: 2025-12-13 03:57:55.991450526 +0000 UTC m=+0.159670800 container start a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:57:55 compute-0 podman[198666]: 2025-12-13 03:57:55.995485075 +0000 UTC m=+0.163705349 container attach a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:57:56 compute-0 python3.9[198699]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:56 compute-0 ceph-mon[75071]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:56 compute-0 sudo[198695]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:56 compute-0 sudo[198904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yljfzucbwzxtqbjchqwaoikkwgecruhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598276.2290606-676-265775966863118/AnsiballZ_file.py'
Dec 13 03:57:56 compute-0 sudo[198904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:56 compute-0 lvm[198932]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:57:56 compute-0 lvm[198933]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:57:56 compute-0 lvm[198933]: VG ceph_vg1 finished
Dec 13 03:57:56 compute-0 lvm[198932]: VG ceph_vg0 finished
Dec 13 03:57:56 compute-0 lvm[198935]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:57:56 compute-0 lvm[198935]: VG ceph_vg2 finished
Dec 13 03:57:56 compute-0 python3.9[198911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:56 compute-0 sudo[198904]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:56 compute-0 pedantic_turing[198702]: {}
Dec 13 03:57:56 compute-0 systemd[1]: libpod-a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706.scope: Deactivated successfully.
Dec 13 03:57:56 compute-0 systemd[1]: libpod-a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706.scope: Consumed 1.365s CPU time.
Dec 13 03:57:56 compute-0 podman[198666]: 2025-12-13 03:57:56.844936494 +0000 UTC m=+1.013156768 container died a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:57:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-7a15634d5bba6d1f409fddc0e5b9363dc35d71870b91037109d637b2f3c199b5-merged.mount: Deactivated successfully.
Dec 13 03:57:56 compute-0 podman[198666]: 2025-12-13 03:57:56.893510329 +0000 UTC m=+1.061730603 container remove a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 03:57:56 compute-0 systemd[1]: libpod-conmon-a93ffd97a27429c26e8fa0122e3e06440c236fc1ba4ec1fd7397dd0df910b706.scope: Deactivated successfully.
Dec 13 03:57:56 compute-0 sudo[198457]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:57:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:57:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:57:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:57:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:57 compute-0 sudo[199007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:57:57 compute-0 sudo[199007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:57:57 compute-0 sudo[199007]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:57 compute-0 sudo[199126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otozzyxxmrzdxbbdvswqvdkztsoyluan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598276.9367356-676-143704062623478/AnsiballZ_file.py'
Dec 13 03:57:57 compute-0 sudo[199126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:57 compute-0 python3.9[199128]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:57 compute-0 sudo[199126]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:57 compute-0 sudo[199278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqjtjvpkcvhvbtqpanylukgdkvmwrjku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598277.6126878-676-270741948042672/AnsiballZ_file.py'
Dec 13 03:57:57 compute-0 sudo[199278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:57:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:57:57 compute-0 ceph-mon[75071]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:58 compute-0 python3.9[199280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:58 compute-0 sudo[199278]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:58 compute-0 sudo[199430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnuymbobgzyptzavmqyhgxuxuwfpttkw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598278.2317834-676-170687108340800/AnsiballZ_file.py'
Dec 13 03:57:58 compute-0 sudo[199430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:58 compute-0 python3.9[199432]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:58 compute-0 sudo[199430]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:57:59 compute-0 sudo[199582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yblkwgpxmssmnxescunmxfugfybkyqqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598278.855552-676-134306125496102/AnsiballZ_file.py'
Dec 13 03:57:59 compute-0 sudo[199582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:57:59 compute-0 python3.9[199584]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:57:59 compute-0 sudo[199582]: pam_unix(sudo:session): session closed for user root
Dec 13 03:57:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:00 compute-0 sudo[199734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psbqsxngiurfliwqsmussvcvmpbcvhef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598279.705965-676-161110737959626/AnsiballZ_file.py'
Dec 13 03:58:00 compute-0 sudo[199734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:00 compute-0 python3.9[199736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:00 compute-0 sudo[199734]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:00 compute-0 ceph-mon[75071]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:00 compute-0 sudo[199886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgsvnxmncermkpkgnkttpojxufnvonpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598280.378718-676-174300266173594/AnsiballZ_file.py'
Dec 13 03:58:00 compute-0 sudo[199886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:00 compute-0 python3.9[199888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:00 compute-0 sudo[199886]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:01 compute-0 sudo[200052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hscgupauhwmvbgcnpgncpzeqltvqkwmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598280.9950924-676-171586240825518/AnsiballZ_file.py'
Dec 13 03:58:01 compute-0 sudo[200052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:01 compute-0 podman[200012]: 2025-12-13 03:58:01.302628965 +0000 UTC m=+0.085574036 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 03:58:01 compute-0 python3.9[200060]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:01 compute-0 sudo[200052]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:01 compute-0 sudo[200218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfqhjwalhwpnrropjplotebelxyrahe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598281.6077914-676-127314080714615/AnsiballZ_file.py'
Dec 13 03:58:01 compute-0 sudo[200218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:01 compute-0 ceph-mon[75071]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:02 compute-0 python3.9[200220]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:02 compute-0 sudo[200218]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:03 compute-0 sudo[200370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwuzenfhywhlhdbdwrccqetfubrcefym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598282.2790134-775-11873613265086/AnsiballZ_stat.py'
Dec 13 03:58:03 compute-0 sudo[200370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:03 compute-0 python3.9[200372]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:03 compute-0 sudo[200370]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:03 compute-0 sudo[200493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kaskzgksoftdoxffndrswnezvzofejol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598282.2790134-775-11873613265086/AnsiballZ_copy.py'
Dec 13 03:58:03 compute-0 sudo[200493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:03 compute-0 python3.9[200495]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598282.2790134-775-11873613265086/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:03 compute-0 sudo[200493]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:04 compute-0 ceph-mon[75071]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:04 compute-0 sudo[200645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slwgxaayldfgqsssiyaqeydpaxqelfxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598283.9889445-775-138074850017572/AnsiballZ_stat.py'
Dec 13 03:58:04 compute-0 sudo[200645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:04 compute-0 python3.9[200647]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:04 compute-0 sudo[200645]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:04 compute-0 sudo[200768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izqsfqfexfysmqtctodstvhkumxtgkfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598283.9889445-775-138074850017572/AnsiballZ_copy.py'
Dec 13 03:58:04 compute-0 sudo[200768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:05 compute-0 python3.9[200770]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598283.9889445-775-138074850017572/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:05 compute-0 sudo[200768]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:05 compute-0 sudo[200920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgslqcyndptyylvkukebzgnwehqsjktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598285.2621744-775-227110193164783/AnsiballZ_stat.py'
Dec 13 03:58:05 compute-0 sudo[200920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:05 compute-0 python3.9[200922]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:05 compute-0 sudo[200920]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:06 compute-0 ceph-mon[75071]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:06 compute-0 sudo[201043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnypmcrsceirpczaayocirxnrcmsikpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598285.2621744-775-227110193164783/AnsiballZ_copy.py'
Dec 13 03:58:06 compute-0 sudo[201043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:06 compute-0 python3.9[201045]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598285.2621744-775-227110193164783/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:06 compute-0 sudo[201043]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:06 compute-0 sudo[201195]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrtnawzfbssvfpkygmgknnyidfoycyea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598286.441368-775-192138828050195/AnsiballZ_stat.py'
Dec 13 03:58:06 compute-0 sudo[201195]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:06 compute-0 python3.9[201197]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:06 compute-0 sudo[201195]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:07 compute-0 sudo[201318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crinxdqmqmyfwfwlndwnuljmksbezvai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598286.441368-775-192138828050195/AnsiballZ_copy.py'
Dec 13 03:58:07 compute-0 sudo[201318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:07 compute-0 podman[201320]: 2025-12-13 03:58:07.384111732 +0000 UTC m=+0.051098453 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 03:58:07 compute-0 python3.9[201321]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598286.441368-775-192138828050195/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:07 compute-0 sudo[201318]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:07 compute-0 sudo[201490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnrhjrrnbrjaqvlgkjkhbwnxkigarfgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598287.6840057-775-199395936613482/AnsiballZ_stat.py'
Dec 13 03:58:07 compute-0 sudo[201490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:08 compute-0 ceph-mon[75071]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:08 compute-0 python3.9[201492]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:08 compute-0 sudo[201490]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:08 compute-0 sudo[201613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avfhiylgmwenvpdyihugiyagmtwlcbrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598287.6840057-775-199395936613482/AnsiballZ_copy.py'
Dec 13 03:58:08 compute-0 sudo[201613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:08 compute-0 python3.9[201615]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598287.6840057-775-199395936613482/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:08 compute-0 sudo[201613]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:09 compute-0 sudo[201765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnryagzeuhcavdmrznbliqjgprwrkanf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598288.8690386-775-127358058631527/AnsiballZ_stat.py'
Dec 13 03:58:09 compute-0 sudo[201765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:09 compute-0 python3.9[201767]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:09 compute-0 sudo[201765]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:09 compute-0 sudo[201888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezylqbgcintgpzevkriiwmiujromhqdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598288.8690386-775-127358058631527/AnsiballZ_copy.py'
Dec 13 03:58:09 compute-0 sudo[201888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:10 compute-0 python3.9[201890]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598288.8690386-775-127358058631527/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:10 compute-0 ceph-mon[75071]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:10 compute-0 sudo[201888]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:10 compute-0 sudo[202040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvkgudhkrpxxtisovetfxshlzzseyfoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598290.2310781-775-179520837643468/AnsiballZ_stat.py'
Dec 13 03:58:10 compute-0 sudo[202040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:10 compute-0 python3.9[202042]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:10 compute-0 sudo[202040]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:11 compute-0 sudo[202163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viiwiydozwelwzarsudxllqptieteklf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598290.2310781-775-179520837643468/AnsiballZ_copy.py'
Dec 13 03:58:11 compute-0 sudo[202163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:11 compute-0 python3.9[202165]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598290.2310781-775-179520837643468/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:11 compute-0 sudo[202163]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:11 compute-0 sudo[202315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fswigaddhklkzzuauqrxqtuvlymvkvne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598291.593852-775-217326105737709/AnsiballZ_stat.py'
Dec 13 03:58:11 compute-0 sudo[202315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:12 compute-0 ceph-mon[75071]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:12 compute-0 python3.9[202317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:12 compute-0 sudo[202315]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:12 compute-0 sudo[202438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucrxqbbkrrhuzrvrtcinkytzagrfnljh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598291.593852-775-217326105737709/AnsiballZ_copy.py'
Dec 13 03:58:12 compute-0 sudo[202438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:12 compute-0 python3.9[202440]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598291.593852-775-217326105737709/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:12 compute-0 sudo[202438]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:13 compute-0 sudo[202590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xghpknbqepdcjbjkldygvmxvkmsrpubm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598292.8270435-775-147768499990304/AnsiballZ_stat.py'
Dec 13 03:58:13 compute-0 sudo[202590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:13 compute-0 python3.9[202592]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:13 compute-0 sudo[202590]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:13 compute-0 sudo[202713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfnmpbbwplwigpkkuvgkeljozkisphav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598292.8270435-775-147768499990304/AnsiballZ_copy.py'
Dec 13 03:58:13 compute-0 sudo[202713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:13 compute-0 python3.9[202715]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598292.8270435-775-147768499990304/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:13 compute-0 sudo[202713]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:14 compute-0 sudo[202865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyteaeeufixpgsadqixthgbnvhjkxwhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598294.0386043-775-217369615340078/AnsiballZ_stat.py'
Dec 13 03:58:14 compute-0 sudo[202865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:14 compute-0 python3.9[202867]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:14 compute-0 sudo[202865]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:14 compute-0 ceph-mon[75071]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:15 compute-0 sudo[202988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqjixxsvqsintbmcwbdljpqzhyrisyux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598294.0386043-775-217369615340078/AnsiballZ_copy.py'
Dec 13 03:58:15 compute-0 sudo[202988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:15 compute-0 python3.9[202990]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598294.0386043-775-217369615340078/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:15 compute-0 sudo[202988]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:15 compute-0 sudo[203140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmsykbqurdvmakurehouplegpwdtjgrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598295.3716724-775-86944285628196/AnsiballZ_stat.py'
Dec 13 03:58:15 compute-0 sudo[203140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:15 compute-0 python3.9[203142]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:15 compute-0 sudo[203140]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:15 compute-0 ceph-mon[75071]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:16 compute-0 sudo[203263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urcembgwjgcbmtbeimdjmtzibikwsnvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598295.3716724-775-86944285628196/AnsiballZ_copy.py'
Dec 13 03:58:16 compute-0 sudo[203263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:16 compute-0 python3.9[203265]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598295.3716724-775-86944285628196/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:16 compute-0 sudo[203263]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:16 compute-0 sudo[203415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umvcnpmukaokmtyeqpnvjkrnltlddrto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598296.5279295-775-187885397377619/AnsiballZ_stat.py'
Dec 13 03:58:16 compute-0 sudo[203415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:16 compute-0 python3.9[203417]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:16 compute-0 sudo[203415]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:17 compute-0 sudo[203538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igrzszxugifhfdcchpvqnprrydnsvbgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598296.5279295-775-187885397377619/AnsiballZ_copy.py'
Dec 13 03:58:17 compute-0 sudo[203538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:17 compute-0 python3.9[203540]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598296.5279295-775-187885397377619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:17 compute-0 sudo[203538]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:17 compute-0 sudo[203690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atehrrowluiauqxvmpqmhhdpxzaiuxvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598297.6587873-775-116684200167897/AnsiballZ_stat.py'
Dec 13 03:58:17 compute-0 sudo[203690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:18 compute-0 ceph-mon[75071]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:18 compute-0 python3.9[203692]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:18 compute-0 sudo[203690]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:18 compute-0 sudo[203813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyvuiwhqwyvfovlmsapltcknzpsbqzpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598297.6587873-775-116684200167897/AnsiballZ_copy.py'
Dec 13 03:58:18 compute-0 sudo[203813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:18 compute-0 python3.9[203815]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598297.6587873-775-116684200167897/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:18 compute-0 sudo[203813]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:19 compute-0 sudo[203965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdkmzvqxvqkkpyckdkvnqnoyjrzlkkiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598298.8241599-775-20886809424948/AnsiballZ_stat.py'
Dec 13 03:58:19 compute-0 sudo[203965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:19 compute-0 python3.9[203967]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:19 compute-0 sudo[203965]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:19 compute-0 sudo[204088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loflubzmxjqmfwpcnuquhotlsdyizzyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598298.8241599-775-20886809424948/AnsiballZ_copy.py'
Dec 13 03:58:19 compute-0 sudo[204088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:19 compute-0 python3.9[204090]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598298.8241599-775-20886809424948/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:19 compute-0 sudo[204088]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:20 compute-0 ceph-mon[75071]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:20 compute-0 python3.9[204240]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:58:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:21 compute-0 sudo[204393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blmgnvlhxukmjhgxagepfgrsrbxmhmtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598300.7525733-981-263803499392761/AnsiballZ_seboolean.py'
Dec 13 03:58:21 compute-0 sudo[204393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:21 compute-0 python3.9[204395]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Dec 13 03:58:22 compute-0 ceph-mon[75071]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:23 compute-0 sudo[204393]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:23 compute-0 sudo[204549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhikejwpazuoqwnrrliuxzyjiofothfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598303.2413564-989-102209843288355/AnsiballZ_copy.py'
Dec 13 03:58:23 compute-0 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec 13 03:58:23 compute-0 sudo[204549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:23 compute-0 python3.9[204551]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:23 compute-0 sudo[204549]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:23 compute-0 ceph-mon[75071]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:24 compute-0 sudo[204701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vptgdryvyflmkchuzhhvwgromuovqtgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598303.829283-989-177030977150858/AnsiballZ_copy.py'
Dec 13 03:58:24 compute-0 sudo[204701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:24 compute-0 python3.9[204703]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:24 compute-0 sudo[204701]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:24 compute-0 auditd[702]: Audit daemon rotating log files
Dec 13 03:58:24 compute-0 sudo[204853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orztjdtcdtepnoczheoxleiajnnqbylu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598304.457271-989-157244714651110/AnsiballZ_copy.py'
Dec 13 03:58:24 compute-0 sudo[204853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:24 compute-0 python3.9[204855]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:24 compute-0 sudo[204853]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:25 compute-0 sudo[205005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsjxzannodrrzbmrippjddtjyhmpeunb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598305.082156-989-174668675993185/AnsiballZ_copy.py'
Dec 13 03:58:25 compute-0 sudo[205005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:25 compute-0 python3.9[205007]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:25 compute-0 sudo[205005]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:25 compute-0 sudo[205157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acupswkinirttsvhnmltwdcmzbxgxvso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598305.6868768-989-275010594035020/AnsiballZ_copy.py'
Dec 13 03:58:25 compute-0 sudo[205157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:26 compute-0 ceph-mon[75071]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:26 compute-0 python3.9[205159]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:26 compute-0 sudo[205157]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:26 compute-0 sudo[205309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjvglnpqkgcfmndlreduwelmpcbxxogp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598306.3543792-1025-144566273858140/AnsiballZ_copy.py'
Dec 13 03:58:26 compute-0 sudo[205309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:26 compute-0 python3.9[205311]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:26 compute-0 sudo[205309]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:27 compute-0 sudo[205461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zeiubdswdskqvkufzwcjztuobyzgjosw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598306.9356456-1025-146425428880877/AnsiballZ_copy.py'
Dec 13 03:58:27 compute-0 sudo[205461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:27 compute-0 python3.9[205463]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:27 compute-0 sudo[205461]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:28 compute-0 sudo[205613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvdrytjqfjofrtqmpwtgepsaedmfwwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598307.536911-1025-168121190098730/AnsiballZ_copy.py'
Dec 13 03:58:28 compute-0 sudo[205613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:28 compute-0 ceph-mon[75071]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:28 compute-0 python3.9[205615]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:28 compute-0 sudo[205613]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:28 compute-0 sudo[205765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjlftpdnurldgntglkqhhqcrufeglsdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598308.3571968-1025-5640261095224/AnsiballZ_copy.py'
Dec 13 03:58:28 compute-0 sudo[205765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:28 compute-0 python3.9[205767]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:28 compute-0 sudo[205765]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:29 compute-0 sudo[205917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kludidswafllebimmauwofklnnusvica ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598308.9600878-1025-180676996095975/AnsiballZ_copy.py'
Dec 13 03:58:29 compute-0 sudo[205917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:29 compute-0 python3.9[205919]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:29 compute-0 sudo[205917]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:29 compute-0 sudo[206069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ockssfpgzduhakuczrjjapznrxgyvgua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598309.582844-1061-162461033263487/AnsiballZ_systemd.py'
Dec 13 03:58:29 compute-0 sudo[206069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:30 compute-0 ceph-mon[75071]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:30 compute-0 python3.9[206071]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:58:30 compute-0 systemd[1]: Reloading.
Dec 13 03:58:30 compute-0 systemd-rc-local-generator[206098]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:58:30 compute-0 systemd-sysv-generator[206102]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:58:30 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Dec 13 03:58:30 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Dec 13 03:58:30 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Dec 13 03:58:30 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec 13 03:58:30 compute-0 systemd[1]: Starting libvirt logging daemon...
Dec 13 03:58:30 compute-0 systemd[1]: Started libvirt logging daemon.
Dec 13 03:58:30 compute-0 sudo[206069]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:31 compute-0 sudo[206261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdxodylhygqakpnfdgzrmfnkbyryiovg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598310.8242424-1061-18650805109345/AnsiballZ_systemd.py'
Dec 13 03:58:31 compute-0 sudo[206261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:31 compute-0 python3.9[206263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:58:31 compute-0 systemd[1]: Reloading.
Dec 13 03:58:31 compute-0 systemd-sysv-generator[206314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:58:31 compute-0 systemd-rc-local-generator[206310]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:58:31 compute-0 podman[206265]: 2025-12-13 03:58:31.569890178 +0000 UTC m=+0.128129223 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 03:58:31 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Dec 13 03:58:31 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec 13 03:58:31 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec 13 03:58:31 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec 13 03:58:31 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec 13 03:58:31 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec 13 03:58:31 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 03:58:31 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 13 03:58:31 compute-0 sudo[206261]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:32 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec 13 03:58:32 compute-0 ceph-mon[75071]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:32 compute-0 sudo[206505]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmbppctbyvxzrmauzizwkdwkbxlbvabg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598312.010217-1061-25500558179113/AnsiballZ_systemd.py'
Dec 13 03:58:32 compute-0 sudo[206505]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:32 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec 13 03:58:32 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec 13 03:58:32 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec 13 03:58:32 compute-0 python3.9[206507]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:58:32 compute-0 systemd[1]: Reloading.
Dec 13 03:58:32 compute-0 systemd-sysv-generator[206545]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:58:32 compute-0 systemd-rc-local-generator[206539]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:58:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:33 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec 13 03:58:33 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec 13 03:58:33 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec 13 03:58:33 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec 13 03:58:33 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec 13 03:58:33 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec 13 03:58:33 compute-0 sudo[206505]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:33 compute-0 setroubleshoot[206402]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 306ed9ea-dc54-4f07-9f32-06346e753efb
Dec 13 03:58:33 compute-0 setroubleshoot[206402]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Dec 13 03:58:33 compute-0 sudo[206726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sylfguolmigxjjhcdmhvnvljcigfeofq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598313.2713737-1061-91758599333405/AnsiballZ_systemd.py'
Dec 13 03:58:33 compute-0 sudo[206726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:33 compute-0 python3.9[206728]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:58:33 compute-0 systemd[1]: Reloading.
Dec 13 03:58:33 compute-0 systemd-rc-local-generator[206753]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:58:33 compute-0 systemd-sysv-generator[206758]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:58:34 compute-0 ceph-mon[75071]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:34 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Dec 13 03:58:34 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Dec 13 03:58:34 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 03:58:34 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec 13 03:58:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec 13 03:58:34 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec 13 03:58:34 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec 13 03:58:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec 13 03:58:34 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec 13 03:58:34 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec 13 03:58:34 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 03:58:34 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 13 03:58:34 compute-0 sudo[206726]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:34 compute-0 sudo[206940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atvlaslnqfieczfwnpycyqywycskeaxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598314.390065-1061-22797305570199/AnsiballZ_systemd.py'
Dec 13 03:58:34 compute-0 sudo[206940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:34 compute-0 python3.9[206942]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:58:34 compute-0 systemd[1]: Reloading.
Dec 13 03:58:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:35 compute-0 systemd-rc-local-generator[206967]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:58:35 compute-0 systemd-sysv-generator[206970]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:58:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:58:35.070 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:58:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:58:35.071 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:58:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:58:35.072 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:58:35 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Dec 13 03:58:35 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Dec 13 03:58:35 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Dec 13 03:58:35 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec 13 03:58:35 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec 13 03:58:35 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec 13 03:58:35 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 13 03:58:35 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 13 03:58:35 compute-0 sudo[206940]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:35 compute-0 sudo[207152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpeaptitwiqkpdvskiklegljkfpvjlhf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598315.6426096-1098-2030489857621/AnsiballZ_file.py'
Dec 13 03:58:35 compute-0 sudo[207152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:36 compute-0 ceph-mon[75071]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:36 compute-0 python3.9[207154]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:36 compute-0 sudo[207152]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:36 compute-0 sudo[207304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmmswcmznxjdctflhruzcpumhgoczfzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598316.3108718-1106-113854435039192/AnsiballZ_find.py'
Dec 13 03:58:36 compute-0 sudo[207304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:36 compute-0 python3.9[207306]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 03:58:36 compute-0 sudo[207304]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:37 compute-0 sudo[207456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eoiaymbhscpeedgymmcfikqcwpboooxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598316.9417186-1114-220409366186624/AnsiballZ_command.py'
Dec 13 03:58:37 compute-0 sudo[207456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:37 compute-0 python3.9[207458]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:58:37 compute-0 sudo[207456]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:37 compute-0 podman[207580]: 2025-12-13 03:58:37.94709422 +0000 UTC m=+0.087037496 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 03:58:38 compute-0 python3.9[207628]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 03:58:38 compute-0 ceph-mon[75071]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:38 compute-0 python3.9[207781]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:39 compute-0 python3.9[207902]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598318.3717442-1133-32089711486081/.source.xml follow=False _original_basename=secret.xml.j2 checksum=9cf04356510dc2a1b6b9d820f211b4d556842941 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:39 compute-0 sudo[208052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvrmnaguyjjaznpicnrmhxbgncstjixg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598319.5210397-1148-196534022390512/AnsiballZ_command.py'
Dec 13 03:58:39 compute-0 sudo[208052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:39 compute-0 python3.9[208054]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 437a9f04-06b7-56e3-8a4b-f52a1199dd32
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:58:40 compute-0 polkitd[43382]: Registered Authentication Agent for unix-process:208056:305871 (system bus name :1.2574 [pkttyagent --process 208056 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 13 03:58:40 compute-0 polkitd[43382]: Unregistered Authentication Agent for unix-process:208056:305871 (system bus name :1.2574, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 13 03:58:40 compute-0 polkitd[43382]: Registered Authentication Agent for unix-process:208055:305870 (system bus name :1.2575 [pkttyagent --process 208055 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 13 03:58:40 compute-0 polkitd[43382]: Unregistered Authentication Agent for unix-process:208055:305870 (system bus name :1.2575, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 13 03:58:40 compute-0 sudo[208052]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:40 compute-0 ceph-mon[75071]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:58:40
Dec 13 03:58:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:58:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:58:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['volumes', 'default.rgw.meta', 'images', 'default.rgw.log', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec 13 03:58:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:58:40 compute-0 python3.9[208216]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:41 compute-0 sudo[208366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-indksbrwdexesgovovktbfelxzwlqpop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598320.9500873-1164-27727842239450/AnsiballZ_command.py'
Dec 13 03:58:41 compute-0 sudo[208366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:41 compute-0 sudo[208366]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:41 compute-0 sudo[208519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytkmttilmjvxabdxtthkxrebixxrjduw ; FSID=437a9f04-06b7-56e3-8a4b-f52a1199dd32 KEY=AQCT4DxpAAAAABAAxBrRSbggkwGSJCw4erm++Q== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598321.5786538-1172-171962458005853/AnsiballZ_command.py'
Dec 13 03:58:41 compute-0 sudo[208519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:42 compute-0 polkitd[43382]: Registered Authentication Agent for unix-process:208522:306079 (system bus name :1.2578 [pkttyagent --process 208522 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Dec 13 03:58:42 compute-0 polkitd[43382]: Unregistered Authentication Agent for unix-process:208522:306079 (system bus name :1.2578, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Dec 13 03:58:42 compute-0 sudo[208519]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:42 compute-0 ceph-mon[75071]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:58:42 compute-0 sudo[208677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sisdnpvpvomrfozrprwqxrwibnwipyel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598322.3084977-1180-63378114707906/AnsiballZ_copy.py'
Dec 13 03:58:42 compute-0 sudo[208677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:42 compute-0 python3.9[208679]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:42 compute-0 sudo[208677]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:43 compute-0 sudo[208829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lffhrnlmyaspyfthukmtfzqjcsfcolke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598322.994142-1188-149744616892562/AnsiballZ_stat.py'
Dec 13 03:58:43 compute-0 sudo[208829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:43 compute-0 python3.9[208831]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:43 compute-0 sudo[208829]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:43 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec 13 03:58:43 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.007s CPU time.
Dec 13 03:58:43 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec 13 03:58:43 compute-0 sudo[208952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emfjjkqbtxvsduoasqyulwleimjmkpjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598322.994142-1188-149744616892562/AnsiballZ_copy.py'
Dec 13 03:58:43 compute-0 sudo[208952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:44 compute-0 python3.9[208954]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598322.994142-1188-149744616892562/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:44 compute-0 sudo[208952]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:44 compute-0 ceph-mon[75071]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:44 compute-0 sudo[209104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcrtdjazrsphafmfbhocyvqzdxoccowz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598324.3125658-1204-79323417436778/AnsiballZ_file.py'
Dec 13 03:58:44 compute-0 sudo[209104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:44 compute-0 python3.9[209106]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:44 compute-0 sudo[209104]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:45 compute-0 sudo[209256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwjkyenmuxmxobycyvtlodfnkicodfyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598324.9700913-1212-159577935695698/AnsiballZ_stat.py'
Dec 13 03:58:45 compute-0 sudo[209256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:45 compute-0 python3.9[209258]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:45 compute-0 sudo[209256]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:45 compute-0 sudo[209334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzohrtyxmjamnkcveqigcodqbgidwdhp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598324.9700913-1212-159577935695698/AnsiballZ_file.py'
Dec 13 03:58:45 compute-0 sudo[209334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:45 compute-0 python3.9[209336]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:45 compute-0 sudo[209334]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:46 compute-0 ceph-mon[75071]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:46 compute-0 sudo[209486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zufufkczxockxlznsdeidiqkcwewxvmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598326.1149166-1224-15950809985250/AnsiballZ_stat.py'
Dec 13 03:58:46 compute-0 sudo[209486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:46 compute-0 python3.9[209488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:46 compute-0 sudo[209486]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:46 compute-0 sudo[209564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxjgrimjpxwvtiggmctbdxivyyukwzpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598326.1149166-1224-15950809985250/AnsiballZ_file.py'
Dec 13 03:58:46 compute-0 sudo[209564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:47 compute-0 python3.9[209566]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.m42763t7 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:47 compute-0 sudo[209564]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:47 compute-0 sudo[209716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqwpyljqlemazbssznaeyhqkrzbgymol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598327.1932871-1236-154645458453589/AnsiballZ_stat.py'
Dec 13 03:58:47 compute-0 sudo[209716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:47 compute-0 python3.9[209718]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:47 compute-0 sudo[209716]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:47 compute-0 sudo[209794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpenrmmkjpwhyozowlveygckcmqodfwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598327.1932871-1236-154645458453589/AnsiballZ_file.py'
Dec 13 03:58:47 compute-0 sudo[209794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:48 compute-0 python3.9[209796]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:48 compute-0 sudo[209794]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:48 compute-0 ceph-mon[75071]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:48 compute-0 sudo[209946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgbfsqvczrebrbjukokufzyxoqvapnvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598328.2897296-1249-1885139005293/AnsiballZ_command.py'
Dec 13 03:58:48 compute-0 sudo[209946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:48 compute-0 python3.9[209948]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:58:48 compute-0 sudo[209946]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:49 compute-0 sudo[210099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkllpxfxbipioycwuososqenhwdchadz ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765598328.9299276-1257-155051786168736/AnsiballZ_edpm_nftables_from_files.py'
Dec 13 03:58:49 compute-0 sudo[210099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:49 compute-0 python3[210101]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec 13 03:58:49 compute-0 sudo[210099]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:50 compute-0 sudo[210251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edjecxwmjoqlhkkduauyyghnshbfomxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598329.7569897-1265-276904451647545/AnsiballZ_stat.py'
Dec 13 03:58:50 compute-0 sudo[210251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:50 compute-0 ceph-mon[75071]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:50 compute-0 python3.9[210253]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:50 compute-0 sudo[210251]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:50 compute-0 sudo[210329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icnvbyzdtawzbtqkkrvfuswaectpxqce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598329.7569897-1265-276904451647545/AnsiballZ_file.py'
Dec 13 03:58:50 compute-0 sudo[210329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:50 compute-0 python3.9[210331]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:50 compute-0 sudo[210329]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:51 compute-0 sudo[210481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwakgnbwoioxzurxbwnbtjpcddshymuw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598330.9177146-1277-223395173813548/AnsiballZ_stat.py'
Dec 13 03:58:51 compute-0 sudo[210481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:51 compute-0 python3.9[210483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:51 compute-0 sudo[210481]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:51 compute-0 sudo[210559]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndyombbsaezzjivudegirseibuaxuium ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598330.9177146-1277-223395173813548/AnsiballZ_file.py'
Dec 13 03:58:51 compute-0 sudo[210559]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:51 compute-0 python3.9[210561]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:51 compute-0 sudo[210559]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:52 compute-0 ceph-mon[75071]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:58:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:58:52 compute-0 sudo[210711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwpczxnkutbbpeitgkmhcxfetwdqubsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598331.986365-1289-90504291158055/AnsiballZ_stat.py'
Dec 13 03:58:52 compute-0 sudo[210711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:52 compute-0 python3.9[210713]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:52 compute-0 sudo[210711]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:52 compute-0 sudo[210789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-retglixfczbunjycicjwnhbarwoioxoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598331.986365-1289-90504291158055/AnsiballZ_file.py'
Dec 13 03:58:52 compute-0 sudo[210789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:52 compute-0 python3.9[210791]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:52 compute-0 sudo[210789]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:53 compute-0 sudo[210941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehwvamkskyhdddbqnrrloclzahxzljbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598333.1043124-1301-48202847191246/AnsiballZ_stat.py'
Dec 13 03:58:53 compute-0 sudo[210941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:53 compute-0 python3.9[210943]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:53 compute-0 sudo[210941]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:53 compute-0 sudo[211019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leqjcmfmkxaxpblmigrhcrtavaohtlkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598333.1043124-1301-48202847191246/AnsiballZ_file.py'
Dec 13 03:58:53 compute-0 sudo[211019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:54 compute-0 python3.9[211021]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:54 compute-0 sudo[211019]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:54 compute-0 ceph-mon[75071]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:54 compute-0 sudo[211171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbjmlcxbsrxtmhrzjfokqjzaqkotnctu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598334.392651-1313-43791014963224/AnsiballZ_stat.py'
Dec 13 03:58:54 compute-0 sudo[211171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:55 compute-0 python3.9[211173]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:58:55 compute-0 sudo[211171]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:55 compute-0 sudo[211296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abzavyebhzekgxgthgchlacirnrasfug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598334.392651-1313-43791014963224/AnsiballZ_copy.py'
Dec 13 03:58:55 compute-0 sudo[211296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:55 compute-0 python3.9[211298]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765598334.392651-1313-43791014963224/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:55 compute-0 sudo[211296]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:56 compute-0 sudo[211448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtgkddxazdrvoftdtzureshexbhmbxuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598335.8093467-1328-205275025527600/AnsiballZ_file.py'
Dec 13 03:58:56 compute-0 sudo[211448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:56 compute-0 ceph-mon[75071]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:56 compute-0 python3.9[211450]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:56 compute-0 sudo[211448]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:56 compute-0 sudo[211600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxlflscnhemraqummasitetbuyolgopk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598336.552253-1336-125273002056353/AnsiballZ_command.py'
Dec 13 03:58:56 compute-0 sudo[211600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:57 compute-0 python3.9[211602]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:58:57 compute-0 sudo[211600]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:57 compute-0 sudo[211606]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:58:57 compute-0 sudo[211606]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:58:57 compute-0 sudo[211606]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:57 compute-0 sudo[211637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 03:58:57 compute-0 sudo[211637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:58:57 compute-0 sudo[211829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gduiydpoypbrhechsyotjkcmewfuzfux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598337.2392616-1344-86511761162497/AnsiballZ_blockinfile.py'
Dec 13 03:58:57 compute-0 sudo[211829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:57 compute-0 sudo[211637]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:58:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:58:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 03:58:57 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:58:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 03:58:57 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:58:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 03:58:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:58:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 03:58:57 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:58:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 03:58:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:58:57 compute-0 sudo[211840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:58:57 compute-0 sudo[211840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:58:57 compute-0 sudo[211840]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:57 compute-0 python3.9[211839]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:58:57 compute-0 sudo[211865]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 03:58:57 compute-0 sudo[211865]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:58:57 compute-0 sudo[211829]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:58 compute-0 podman[211948]: 2025-12-13 03:58:58.137980632 +0000 UTC m=+0.045022784 container create 808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_aryabhata, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:58:58 compute-0 systemd[1]: Started libpod-conmon-808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125.scope.
Dec 13 03:58:58 compute-0 podman[211948]: 2025-12-13 03:58:58.115642785 +0000 UTC m=+0.022684957 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:58:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:58:58 compute-0 ceph-mon[75071]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:58:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 03:58:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:58:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 03:58:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 03:58:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 03:58:58 compute-0 podman[211948]: 2025-12-13 03:58:58.231642847 +0000 UTC m=+0.138685029 container init 808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_aryabhata, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 03:58:58 compute-0 podman[211948]: 2025-12-13 03:58:58.239399587 +0000 UTC m=+0.146441739 container start 808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:58:58 compute-0 podman[211948]: 2025-12-13 03:58:58.241943457 +0000 UTC m=+0.148985619 container attach 808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:58:58 compute-0 flamboyant_aryabhata[212002]: 167 167
Dec 13 03:58:58 compute-0 systemd[1]: libpod-808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125.scope: Deactivated successfully.
Dec 13 03:58:58 compute-0 podman[211948]: 2025-12-13 03:58:58.246673846 +0000 UTC m=+0.153715988 container died 808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 03:58:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-733e3e23caec1d30dbedb77713fce1de8dad0aac93f358b48eb7231af27fda83-merged.mount: Deactivated successfully.
Dec 13 03:58:58 compute-0 podman[211948]: 2025-12-13 03:58:58.289515189 +0000 UTC m=+0.196557341 container remove 808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 03:58:58 compute-0 systemd[1]: libpod-conmon-808a4d2d016bf816d0349260883b06ea10c09f7be8587f87ba74b4c1bf9b4125.scope: Deactivated successfully.
Dec 13 03:58:58 compute-0 sudo[212087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grpjnkhugazzwlbfqxnejzlkixdkzrfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598338.0897748-1353-263586787175298/AnsiballZ_command.py'
Dec 13 03:58:58 compute-0 sudo[212087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:58 compute-0 podman[212095]: 2025-12-13 03:58:58.453366731 +0000 UTC m=+0.044678655 container create 03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_poincare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 03:58:58 compute-0 systemd[1]: Started libpod-conmon-03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829.scope.
Dec 13 03:58:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4f0a5011f2ea0087acb34b7430c986c24a41ac84f601058dc6d614a78a49fa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4f0a5011f2ea0087acb34b7430c986c24a41ac84f601058dc6d614a78a49fa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4f0a5011f2ea0087acb34b7430c986c24a41ac84f601058dc6d614a78a49fa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4f0a5011f2ea0087acb34b7430c986c24a41ac84f601058dc6d614a78a49fa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:58:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e4f0a5011f2ea0087acb34b7430c986c24a41ac84f601058dc6d614a78a49fa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 03:58:58 compute-0 podman[212095]: 2025-12-13 03:58:58.435821145 +0000 UTC m=+0.027133089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:58:58 compute-0 podman[212095]: 2025-12-13 03:58:58.541513007 +0000 UTC m=+0.132824951 container init 03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_poincare, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:58:58 compute-0 python3.9[212089]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:58:58 compute-0 podman[212095]: 2025-12-13 03:58:58.549323569 +0000 UTC m=+0.140635493 container start 03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_poincare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 03:58:58 compute-0 podman[212095]: 2025-12-13 03:58:58.554717456 +0000 UTC m=+0.146029390 container attach 03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_poincare, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 03:58:58 compute-0 sudo[212087]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:58:59 compute-0 naughty_poincare[212112]: --> passed data devices: 0 physical, 3 LVM
Dec 13 03:58:59 compute-0 naughty_poincare[212112]: --> All data devices are unavailable
Dec 13 03:58:59 compute-0 systemd[1]: libpod-03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829.scope: Deactivated successfully.
Dec 13 03:58:59 compute-0 podman[212095]: 2025-12-13 03:58:59.179865102 +0000 UTC m=+0.771177016 container died 03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_poincare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5e4f0a5011f2ea0087acb34b7430c986c24a41ac84f601058dc6d614a78a49fa-merged.mount: Deactivated successfully.
Dec 13 03:58:59 compute-0 podman[212095]: 2025-12-13 03:58:59.223142878 +0000 UTC m=+0.814454802 container remove 03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 03:58:59 compute-0 systemd[1]: libpod-conmon-03ac8fe54605a425a530092b900ae6b68a54df935c43797a4197d5efdeda8829.scope: Deactivated successfully.
Dec 13 03:58:59 compute-0 sudo[212293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oocdstwexjzyijcdvaxijbzrkyvffmld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598338.8395677-1361-208424538904164/AnsiballZ_stat.py'
Dec 13 03:58:59 compute-0 sudo[212293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:58:59 compute-0 sudo[211865]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:59 compute-0 sudo[212296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:58:59 compute-0 sudo[212296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:58:59 compute-0 sudo[212296]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:59 compute-0 sudo[212321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 03:58:59 compute-0 sudo[212321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:58:59 compute-0 python3.9[212295]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:58:59 compute-0 sudo[212293]: pam_unix(sudo:session): session closed for user root
Dec 13 03:58:59 compute-0 podman[212414]: 2025-12-13 03:58:59.702694978 +0000 UTC m=+0.051231433 container create a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 03:58:59 compute-0 systemd[1]: Started libpod-conmon-a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73.scope.
Dec 13 03:58:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:58:59 compute-0 podman[212414]: 2025-12-13 03:58:59.675339065 +0000 UTC m=+0.023875550 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:58:59 compute-0 podman[212414]: 2025-12-13 03:58:59.776749091 +0000 UTC m=+0.125285546 container init a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 03:58:59 compute-0 podman[212414]: 2025-12-13 03:58:59.783935406 +0000 UTC m=+0.132471861 container start a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:58:59 compute-0 festive_northcutt[212475]: 167 167
Dec 13 03:58:59 compute-0 systemd[1]: libpod-a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73.scope: Deactivated successfully.
Dec 13 03:58:59 compute-0 podman[212414]: 2025-12-13 03:58:59.789643411 +0000 UTC m=+0.138179876 container attach a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 03:58:59 compute-0 podman[212414]: 2025-12-13 03:58:59.790160155 +0000 UTC m=+0.138696620 container died a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 03:58:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-45c94c9f2cc5b2e7950a985db31131bf4ed8c0148cfa18e3c5ae82d71f43c807-merged.mount: Deactivated successfully.
Dec 13 03:58:59 compute-0 podman[212414]: 2025-12-13 03:58:59.831754695 +0000 UTC m=+0.180291150 container remove a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:58:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:58:59 compute-0 systemd[1]: libpod-conmon-a7184fad266ff0e544b7a9f8061ea970199fc454159777cd952cbfa3785c9e73.scope: Deactivated successfully.
Dec 13 03:58:59 compute-0 sudo[212544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doxuoujezosszwhbtvyqpclciugmdpcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598339.6087945-1369-73394859891821/AnsiballZ_command.py'
Dec 13 03:58:59 compute-0 sudo[212544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:00 compute-0 podman[212552]: 2025-12-13 03:58:59.985589605 +0000 UTC m=+0.025346450 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:00 compute-0 python3.9[212546]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:59:00 compute-0 podman[212552]: 2025-12-13 03:59:00.103343695 +0000 UTC m=+0.143100530 container create 5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:59:00 compute-0 systemd[1]: Started libpod-conmon-5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68.scope.
Dec 13 03:59:00 compute-0 sudo[212544]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f57386bb55dd69804bd79e0b773f275dca6428bc1e6ef6a21fd635f7e18efe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f57386bb55dd69804bd79e0b773f275dca6428bc1e6ef6a21fd635f7e18efe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f57386bb55dd69804bd79e0b773f275dca6428bc1e6ef6a21fd635f7e18efe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f57386bb55dd69804bd79e0b773f275dca6428bc1e6ef6a21fd635f7e18efe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:00 compute-0 podman[212552]: 2025-12-13 03:59:00.204585795 +0000 UTC m=+0.244342660 container init 5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wozniak, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 03:59:00 compute-0 podman[212552]: 2025-12-13 03:59:00.215665107 +0000 UTC m=+0.255421932 container start 5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wozniak, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:59:00 compute-0 podman[212552]: 2025-12-13 03:59:00.218441092 +0000 UTC m=+0.258197967 container attach 5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wozniak, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:59:00 compute-0 ceph-mon[75071]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]: {
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:     "0": [
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:         {
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "devices": [
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "/dev/loop3"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             ],
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_name": "ceph_lv0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_size": "21470642176",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "name": "ceph_lv0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "tags": {
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cluster_name": "ceph",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.crush_device_class": "",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.encrypted": "0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.objectstore": "bluestore",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osd_id": "0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.type": "block",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.vdo": "0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.with_tpm": "0"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             },
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "type": "block",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "vg_name": "ceph_vg0"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:         }
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:     ],
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:     "1": [
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:         {
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "devices": [
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "/dev/loop4"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             ],
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_name": "ceph_lv1",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_size": "21470642176",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "name": "ceph_lv1",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "tags": {
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cluster_name": "ceph",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.crush_device_class": "",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.encrypted": "0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.objectstore": "bluestore",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osd_id": "1",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.type": "block",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.vdo": "0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.with_tpm": "0"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             },
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "type": "block",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "vg_name": "ceph_vg1"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:         }
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:     ],
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:     "2": [
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:         {
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "devices": [
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "/dev/loop5"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             ],
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_name": "ceph_lv2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_size": "21470642176",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "name": "ceph_lv2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "tags": {
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.cluster_name": "ceph",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.crush_device_class": "",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.encrypted": "0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.objectstore": "bluestore",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osd_id": "2",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.type": "block",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.vdo": "0",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:                 "ceph.with_tpm": "0"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             },
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "type": "block",
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:             "vg_name": "ceph_vg2"
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:         }
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]:     ]
Dec 13 03:59:00 compute-0 wizardly_wozniak[212571]: }
Dec 13 03:59:00 compute-0 systemd[1]: libpod-5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68.scope: Deactivated successfully.
Dec 13 03:59:00 compute-0 podman[212552]: 2025-12-13 03:59:00.518937988 +0000 UTC m=+0.558694833 container died 5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 03:59:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f57386bb55dd69804bd79e0b773f275dca6428bc1e6ef6a21fd635f7e18efe-merged.mount: Deactivated successfully.
Dec 13 03:59:00 compute-0 podman[212552]: 2025-12-13 03:59:00.567667591 +0000 UTC m=+0.607424416 container remove 5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_wozniak, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 03:59:00 compute-0 systemd[1]: libpod-conmon-5762ede92049ae754a9480c2cd4dc786ee0ce66bce90898eccb02f30f4675a68.scope: Deactivated successfully.
Dec 13 03:59:00 compute-0 sudo[212321]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:00 compute-0 sudo[212741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iveiotfpxkdhibqeitceepklismuhtei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598340.3386893-1377-128840220870062/AnsiballZ_file.py'
Dec 13 03:59:00 compute-0 sudo[212741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:00 compute-0 sudo[212743]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 03:59:00 compute-0 sudo[212743]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:59:00 compute-0 sudo[212743]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:00 compute-0 sudo[212769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 03:59:00 compute-0 sudo[212769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:59:00 compute-0 python3.9[212744]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:00 compute-0 sudo[212741]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:01 compute-0 podman[212830]: 2025-12-13 03:59:01.025541744 +0000 UTC m=+0.040856272 container create c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 03:59:01 compute-0 systemd[1]: Started libpod-conmon-c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557.scope.
Dec 13 03:59:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:59:01 compute-0 podman[212830]: 2025-12-13 03:59:01.097430366 +0000 UTC m=+0.112744924 container init c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_williamson, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 03:59:01 compute-0 podman[212830]: 2025-12-13 03:59:01.007369379 +0000 UTC m=+0.022683927 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:01 compute-0 podman[212830]: 2025-12-13 03:59:01.104096698 +0000 UTC m=+0.119411226 container start c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 03:59:01 compute-0 podman[212830]: 2025-12-13 03:59:01.106588705 +0000 UTC m=+0.121903253 container attach c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_williamson, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 03:59:01 compute-0 confident_williamson[212888]: 167 167
Dec 13 03:59:01 compute-0 systemd[1]: libpod-c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557.scope: Deactivated successfully.
Dec 13 03:59:01 compute-0 conmon[212888]: conmon c70dab214d679f8c66b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557.scope/container/memory.events
Dec 13 03:59:01 compute-0 podman[212830]: 2025-12-13 03:59:01.109730931 +0000 UTC m=+0.125045469 container died c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 03:59:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-30f9d1dc6e0bca2cfc03b82891d86e004fbf0fac0651419b78ce758f6d167e00-merged.mount: Deactivated successfully.
Dec 13 03:59:01 compute-0 podman[212830]: 2025-12-13 03:59:01.145296557 +0000 UTC m=+0.160611085 container remove c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_williamson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 03:59:01 compute-0 systemd[1]: libpod-conmon-c70dab214d679f8c66b639213c0eb680e61aeee381ed70a8c0d9f35ae2cde557.scope: Deactivated successfully.
Dec 13 03:59:01 compute-0 sudo[213007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jtxkaptpxuojopfgmxsmhqgujeendxig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598341.0222564-1385-78838407091769/AnsiballZ_stat.py'
Dec 13 03:59:01 compute-0 sudo[213007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:01 compute-0 podman[212975]: 2025-12-13 03:59:01.299145887 +0000 UTC m=+0.038737733 container create fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 03:59:01 compute-0 systemd[1]: Started libpod-conmon-fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81.scope.
Dec 13 03:59:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 03:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3829c61db77008aa13510a3de9eb31c778202d98d52912cb7ad185ef42b9fd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3829c61db77008aa13510a3de9eb31c778202d98d52912cb7ad185ef42b9fd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3829c61db77008aa13510a3de9eb31c778202d98d52912cb7ad185ef42b9fd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3829c61db77008aa13510a3de9eb31c778202d98d52912cb7ad185ef42b9fd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 03:59:01 compute-0 podman[212975]: 2025-12-13 03:59:01.283662037 +0000 UTC m=+0.023253893 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 03:59:01 compute-0 podman[212975]: 2025-12-13 03:59:01.383384836 +0000 UTC m=+0.122976702 container init fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 03:59:01 compute-0 podman[212975]: 2025-12-13 03:59:01.391450265 +0000 UTC m=+0.131042111 container start fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 03:59:01 compute-0 podman[212975]: 2025-12-13 03:59:01.39456181 +0000 UTC m=+0.134153656 container attach fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 03:59:01 compute-0 python3.9[213011]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:01 compute-0 sudo[213007]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:01 compute-0 sudo[213166]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbitaldfomipgcqwdfqujsywcvqrbjru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598341.0222564-1385-78838407091769/AnsiballZ_copy.py'
Dec 13 03:59:01 compute-0 sudo[213166]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:01 compute-0 podman[213173]: 2025-12-13 03:59:01.916122272 +0000 UTC m=+0.093560903 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible)
Dec 13 03:59:02 compute-0 python3.9[213176]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598341.0222564-1385-78838407091769/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:02 compute-0 sudo[213166]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:02 compute-0 lvm[213253]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 03:59:02 compute-0 lvm[213256]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 03:59:02 compute-0 lvm[213256]: VG ceph_vg1 finished
Dec 13 03:59:02 compute-0 lvm[213253]: VG ceph_vg0 finished
Dec 13 03:59:02 compute-0 lvm[213269]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 03:59:02 compute-0 lvm[213269]: VG ceph_vg2 finished
Dec 13 03:59:02 compute-0 priceless_lehmann[213015]: {}
Dec 13 03:59:02 compute-0 ceph-mon[75071]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:02 compute-0 systemd[1]: libpod-fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81.scope: Deactivated successfully.
Dec 13 03:59:02 compute-0 podman[212975]: 2025-12-13 03:59:02.238489322 +0000 UTC m=+0.978081168 container died fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 03:59:02 compute-0 systemd[1]: libpod-fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81.scope: Consumed 1.337s CPU time.
Dec 13 03:59:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-e3829c61db77008aa13510a3de9eb31c778202d98d52912cb7ad185ef42b9fd5-merged.mount: Deactivated successfully.
Dec 13 03:59:02 compute-0 podman[212975]: 2025-12-13 03:59:02.286506787 +0000 UTC m=+1.026098633 container remove fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 03:59:02 compute-0 systemd[1]: libpod-conmon-fa2178d1a2737ffc5f422e845e7c91f32a862f2ebf4dec4afcc7b0ec8d621f81.scope: Deactivated successfully.
Dec 13 03:59:02 compute-0 sudo[212769]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 03:59:02 compute-0 sudo[213409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghuhricdkijrxpnlptcjxggxwbccaoar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598342.2045352-1400-192674703091480/AnsiballZ_stat.py'
Dec 13 03:59:02 compute-0 sudo[213409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:59:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 03:59:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:59:02 compute-0 python3.9[213411]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:02 compute-0 sudo[213412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 03:59:02 compute-0 sudo[213412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 03:59:02 compute-0 sudo[213412]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:02 compute-0 sudo[213409]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:02 compute-0 sudo[213557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhgquusfcngiqkhwbhvdhgjkfsrfmbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598342.2045352-1400-192674703091480/AnsiballZ_copy.py'
Dec 13 03:59:03 compute-0 sudo[213557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:03 compute-0 python3.9[213559]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598342.2045352-1400-192674703091480/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:03 compute-0 sudo[213557]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:59:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 03:59:03 compute-0 ceph-mon[75071]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:03 compute-0 sudo[213709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfzkvbycehrewwojiyapfnahhnkydyld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598343.3730757-1415-204078528981038/AnsiballZ_stat.py'
Dec 13 03:59:03 compute-0 sudo[213709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:03 compute-0 python3.9[213711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:03 compute-0 sudo[213709]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:04 compute-0 sudo[213832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkaxciymrhotcbkftlmwjezgqoahhlrp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598343.3730757-1415-204078528981038/AnsiballZ_copy.py'
Dec 13 03:59:04 compute-0 sudo[213832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:04 compute-0 python3.9[213834]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598343.3730757-1415-204078528981038/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:04 compute-0 sudo[213832]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:04 compute-0 sudo[213984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukljkuzyzyrtvngtokryephjqcvgzcln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598344.5346146-1430-26560066593758/AnsiballZ_systemd.py'
Dec 13 03:59:04 compute-0 sudo[213984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:05 compute-0 python3.9[213986]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:59:05 compute-0 systemd[1]: Reloading.
Dec 13 03:59:05 compute-0 systemd-sysv-generator[214014]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:59:05 compute-0 systemd-rc-local-generator[214009]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:59:05 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Dec 13 03:59:05 compute-0 sudo[213984]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:05 compute-0 sudo[214175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypavbioemqielbipcnmdysbnmdxydpfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598345.6649873-1438-255574606709270/AnsiballZ_systemd.py'
Dec 13 03:59:05 compute-0 sudo[214175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:06 compute-0 ceph-mon[75071]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:06 compute-0 python3.9[214177]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec 13 03:59:06 compute-0 systemd[1]: Reloading.
Dec 13 03:59:06 compute-0 systemd-sysv-generator[214206]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:59:06 compute-0 systemd-rc-local-generator[214199]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:59:06 compute-0 systemd[1]: Reloading.
Dec 13 03:59:06 compute-0 systemd-rc-local-generator[214234]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:59:06 compute-0 systemd-sysv-generator[214240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:59:06 compute-0 sudo[214175]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:07 compute-0 sshd-session[155517]: Connection closed by 192.168.122.30 port 52608
Dec 13 03:59:07 compute-0 sshd-session[155505]: pam_unix(sshd:session): session closed for user zuul
Dec 13 03:59:07 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Dec 13 03:59:07 compute-0 systemd[1]: session-49.scope: Consumed 3min 34.409s CPU time.
Dec 13 03:59:07 compute-0 systemd-logind[796]: Session 49 logged out. Waiting for processes to exit.
Dec 13 03:59:07 compute-0 systemd-logind[796]: Removed session 49.
Dec 13 03:59:08 compute-0 ceph-mon[75071]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:08 compute-0 podman[214273]: 2025-12-13 03:59:08.940820319 +0000 UTC m=+0.080344785 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 03:59:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:10 compute-0 ceph-mon[75071]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:12 compute-0 ceph-mon[75071]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:12 compute-0 sshd-session[214293]: Accepted publickey for zuul from 192.168.122.30 port 50234 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 03:59:12 compute-0 systemd-logind[796]: New session 50 of user zuul.
Dec 13 03:59:12 compute-0 systemd[1]: Started Session 50 of User zuul.
Dec 13 03:59:12 compute-0 sshd-session[214293]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 03:59:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:13 compute-0 python3.9[214446]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 03:59:14 compute-0 ceph-mon[75071]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:14 compute-0 python3.9[214600]: ansible-ansible.builtin.service_facts Invoked
Dec 13 03:59:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:15 compute-0 network[214617]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:59:15 compute-0 network[214618]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:59:15 compute-0 network[214619]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:59:16 compute-0 ceph-mon[75071]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:18 compute-0 ceph-mon[75071]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:18 compute-0 sudo[214889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acjdbjmcaposwsqnmktehgusuasahtzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598358.3375225-47-122710012303434/AnsiballZ_setup.py'
Dec 13 03:59:18 compute-0 sudo[214889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:18 compute-0 python3.9[214891]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec 13 03:59:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:19 compute-0 sudo[214889]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:19 compute-0 sudo[214973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvbfviiptgshpkimsuasjcmogwuwgkfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598358.3375225-47-122710012303434/AnsiballZ_dnf.py'
Dec 13 03:59:19 compute-0 sudo[214973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:19 compute-0 python3.9[214975]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 03:59:20 compute-0 ceph-mon[75071]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:22 compute-0 ceph-mon[75071]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:24 compute-0 ceph-mon[75071]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:25 compute-0 sudo[214973]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:26 compute-0 sudo[215126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydhzbyuhyggiofujpmynbpfojhtgkvio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598365.7455132-59-213885321896837/AnsiballZ_stat.py'
Dec 13 03:59:26 compute-0 sudo[215126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:26 compute-0 ceph-mon[75071]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:26 compute-0 python3.9[215128]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:59:26 compute-0 sudo[215126]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:26 compute-0 sudo[215278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcuxmlfyknzcpegjyjhevdetsxqdxyzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598366.5722804-69-50804046752748/AnsiballZ_command.py'
Dec 13 03:59:26 compute-0 sudo[215278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:27 compute-0 python3.9[215280]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:59:27 compute-0 sudo[215278]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:27 compute-0 sudo[215431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfwonsnlfwkpwzxayztjxgsbjjbjabtu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598367.4930394-79-4650194337643/AnsiballZ_stat.py'
Dec 13 03:59:27 compute-0 sudo[215431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:28 compute-0 python3.9[215433]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:59:28 compute-0 sudo[215431]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:28 compute-0 ceph-mon[75071]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:28 compute-0 sudo[215583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvrghwokktdlipbwzitkihollubyrca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598368.199119-87-137454177069692/AnsiballZ_command.py'
Dec 13 03:59:28 compute-0 sudo[215583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:28 compute-0 python3.9[215585]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:59:28 compute-0 sudo[215583]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:29 compute-0 sudo[215736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahwhpjcppobmhrxfdndhtfrcfipwgjvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598368.8918645-95-26931817813380/AnsiballZ_stat.py'
Dec 13 03:59:29 compute-0 sudo[215736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:29 compute-0 python3.9[215738]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:29 compute-0 sudo[215736]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:29 compute-0 sudo[215859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpvqweztxeojllfeyknyljkjszfigdpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598368.8918645-95-26931817813380/AnsiballZ_copy.py'
Dec 13 03:59:29 compute-0 sudo[215859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:30 compute-0 python3.9[215861]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598368.8918645-95-26931817813380/.source.iscsi _original_basename=.tyzf9g4s follow=False checksum=78bc1f6b4291b42ec5a01f67e18fb063cf3118cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:30 compute-0 sudo[215859]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:30 compute-0 ceph-mon[75071]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:30 compute-0 sudo[216011]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwlhupjrrykdvbtbkmzdodxmvdeevcqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598370.2966034-110-93691578110109/AnsiballZ_file.py'
Dec 13 03:59:30 compute-0 sudo[216011]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:30 compute-0 python3.9[216013]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:30 compute-0 sudo[216011]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:31 compute-0 sudo[216163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uupbcrmlaxmyvddqjuqurwonmlucnkvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598371.1530595-118-199108798563475/AnsiballZ_lineinfile.py'
Dec 13 03:59:31 compute-0 sudo[216163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:31 compute-0 python3.9[216165]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:31 compute-0 sudo[216163]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:32 compute-0 ceph-mon[75071]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:32 compute-0 sudo[216328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsxpmkozlczrfnjdngqmwoxxjzwxynjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598372.028338-127-57173108448908/AnsiballZ_systemd_service.py'
Dec 13 03:59:32 compute-0 sudo[216328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:32 compute-0 podman[216289]: 2025-12-13 03:59:32.71472437 +0000 UTC m=+0.148992046 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 03:59:32 compute-0 python3.9[216334]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:59:32 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec 13 03:59:33 compute-0 sudo[216328]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:33 compute-0 sudo[216497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngebkkcaomcntpqxoyicauraliooqkcd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598373.1805654-135-125941747974396/AnsiballZ_systemd_service.py'
Dec 13 03:59:33 compute-0 sudo[216497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:33 compute-0 python3.9[216499]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:59:33 compute-0 systemd[1]: Reloading.
Dec 13 03:59:33 compute-0 systemd-rc-local-generator[216527]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:59:33 compute-0 systemd-sysv-generator[216531]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:59:34 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 03:59:34 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 13 03:59:34 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 03:59:34 compute-0 systemd[1]: Started Open-iSCSI.
Dec 13 03:59:34 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Dec 13 03:59:34 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Dec 13 03:59:34 compute-0 sudo[216497]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:34 compute-0 ceph-mon[75071]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:34 compute-0 sudo[216697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnplejkpdazuzvehpdxgaryomjsflwou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598374.6152318-146-250996310245893/AnsiballZ_service_facts.py'
Dec 13 03:59:34 compute-0 sudo[216697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:35 compute-0 python3.9[216699]: ansible-ansible.builtin.service_facts Invoked
Dec 13 03:59:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:59:35.071 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 03:59:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:59:35.073 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 03:59:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 03:59:35.073 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 03:59:35 compute-0 network[216716]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 03:59:35 compute-0 network[216717]: 'network-scripts' will be removed from distribution in near future.
Dec 13 03:59:35 compute-0 network[216718]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 03:59:36 compute-0 ceph-mon[75071]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:37 compute-0 ceph-mon[75071]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:38 compute-0 sudo[216697]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:38 compute-0 sudo[216988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbtimgrdiesshykbxuqhkjvofhmfifyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598378.4756942-156-259473601017581/AnsiballZ_file.py'
Dec 13 03:59:38 compute-0 sudo[216988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:38 compute-0 python3.9[216990]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 03:59:38 compute-0 sudo[216988]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:39 compute-0 sudo[217150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twrnshcrexbtpfvkyyijklvlmnbztqhs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598379.1255345-164-13927876559953/AnsiballZ_modprobe.py'
Dec 13 03:59:39 compute-0 sudo[217150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:39 compute-0 podman[217114]: 2025-12-13 03:59:39.580665741 +0000 UTC m=+0.082377962 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 03:59:39 compute-0 python3.9[217156]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec 13 03:59:39 compute-0 sudo[217150]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:40 compute-0 ceph-mon[75071]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:40 compute-0 sudo[217315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbqubttrzwflnjwurxcywxzzxsrzosgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598379.946917-172-277768527500096/AnsiballZ_stat.py'
Dec 13 03:59:40 compute-0 sudo[217315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:40 compute-0 python3.9[217317]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:40 compute-0 sudo[217315]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_03:59:40
Dec 13 03:59:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 03:59:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 03:59:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'volumes', 'images', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.rgw.root', '.mgr']
Dec 13 03:59:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 03:59:40 compute-0 sudo[217438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wijlebxbvfvpxopolnwcsuxmpqqtdmnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598379.946917-172-277768527500096/AnsiballZ_copy.py'
Dec 13 03:59:40 compute-0 sudo[217438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:40 compute-0 python3.9[217440]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598379.946917-172-277768527500096/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:40 compute-0 sudo[217438]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:41 compute-0 sudo[217590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnwipdrrftpytlwehzopbxwutuwfcixk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598381.2211134-188-27827846613843/AnsiballZ_lineinfile.py'
Dec 13 03:59:41 compute-0 sudo[217590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:41 compute-0 python3.9[217592]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:41 compute-0 sudo[217590]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:42 compute-0 ceph-mon[75071]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 03:59:42 compute-0 sudo[217742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcoyeffkmlinlegwnstbszifbamnpcsk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598381.835544-196-212223964766346/AnsiballZ_systemd.py'
Dec 13 03:59:42 compute-0 sudo[217742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:59:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 03:59:42 compute-0 python3.9[217744]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 03:59:42 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 03:59:42 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 13 03:59:42 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 13 03:59:42 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 13 03:59:42 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 13 03:59:42 compute-0 sudo[217742]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:43 compute-0 sudo[217898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkqpnvptcetsnugacmisrkyrieghycki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598382.9882114-204-22259517980741/AnsiballZ_file.py'
Dec 13 03:59:43 compute-0 sudo[217898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:43 compute-0 python3.9[217900]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:59:43 compute-0 sudo[217898]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:43 compute-0 sudo[218050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ulsuhdyeninwovgnkvqaoxnjpjrxvrfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598383.637864-213-189958989031186/AnsiballZ_stat.py'
Dec 13 03:59:43 compute-0 sudo[218050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:44 compute-0 python3.9[218052]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:59:44 compute-0 sudo[218050]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:44 compute-0 ceph-mon[75071]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:44 compute-0 sudo[218202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vysdygehxwgudbyufnzudvdlzkeeukpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598384.3011389-222-143901199163882/AnsiballZ_stat.py'
Dec 13 03:59:44 compute-0 sudo[218202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:44 compute-0 python3.9[218204]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:59:44 compute-0 sudo[218202]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:45 compute-0 sudo[218354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjzzcmtsqiardbycozhhyvwujxdapyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598384.9465702-230-75547911715698/AnsiballZ_stat.py'
Dec 13 03:59:45 compute-0 sudo[218354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:45 compute-0 python3.9[218356]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:45 compute-0 sudo[218354]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:45 compute-0 sudo[218477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luvcyaqfakrbcxzsknlbqambmsqcmuxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598384.9465702-230-75547911715698/AnsiballZ_copy.py'
Dec 13 03:59:45 compute-0 sudo[218477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:45 compute-0 python3.9[218479]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598384.9465702-230-75547911715698/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:45 compute-0 sudo[218477]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:46 compute-0 ceph-mon[75071]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:46 compute-0 sudo[218629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzqlyylnaurswgtrkhbqubtkykxzehmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598386.0614643-245-223941735398318/AnsiballZ_command.py'
Dec 13 03:59:46 compute-0 sudo[218629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:46 compute-0 python3.9[218631]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 03:59:46 compute-0 sudo[218629]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:47 compute-0 sudo[218782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujdtqashlcyhbpdxkeyqwimcfsrccnit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598386.754292-253-159032592509153/AnsiballZ_lineinfile.py'
Dec 13 03:59:47 compute-0 sudo[218782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:47 compute-0 python3.9[218784]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:47 compute-0 sudo[218782]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:47 compute-0 sudo[218934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvmloimngzwheoigybainzcazoovhgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598387.4272838-261-127107132893553/AnsiballZ_replace.py'
Dec 13 03:59:47 compute-0 sudo[218934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:48 compute-0 python3.9[218936]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:48 compute-0 ceph-mon[75071]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:48 compute-0 sudo[218934]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:48 compute-0 sudo[219086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvqrpyrjtiecdcsftshbhctiinvixxru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598388.257905-269-97369078241958/AnsiballZ_replace.py'
Dec 13 03:59:48 compute-0 sudo[219086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:48 compute-0 python3.9[219088]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:48 compute-0 sudo[219086]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:49 compute-0 sudo[219238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uswadbjijjmwowottnesboqkxbokokkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598388.8874514-278-240591094866908/AnsiballZ_lineinfile.py'
Dec 13 03:59:49 compute-0 sudo[219238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:49 compute-0 python3.9[219240]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:49 compute-0 sudo[219238]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:49 compute-0 sudo[219390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lemzmrmsxjymhfpsqdopphbduysegaij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598389.5048354-278-49684478077160/AnsiballZ_lineinfile.py'
Dec 13 03:59:49 compute-0 sudo[219390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:49 compute-0 python3.9[219392]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:49 compute-0 sudo[219390]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:50 compute-0 ceph-mon[75071]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:50 compute-0 sudo[219542]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fijhxhcorqwgzztgqxlutuwkdzzyirrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598390.047834-278-151679680636554/AnsiballZ_lineinfile.py'
Dec 13 03:59:50 compute-0 sudo[219542]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:50 compute-0 python3.9[219544]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:50 compute-0 sudo[219542]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:50 compute-0 sudo[219694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvzgkxgmfyobyrhfukxmutihsjmurkyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598390.6515927-278-242847404040880/AnsiballZ_lineinfile.py'
Dec 13 03:59:50 compute-0 sudo[219694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:51 compute-0 python3.9[219696]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:51 compute-0 sudo[219694]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:51 compute-0 sudo[219846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkmbgrjkrlhsdeswckjalphpquoflxqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598391.3128464-307-20254570094550/AnsiballZ_stat.py'
Dec 13 03:59:51 compute-0 sudo[219846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:51 compute-0 python3.9[219848]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 03:59:51 compute-0 sudo[219846]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:52 compute-0 ceph-mon[75071]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 03:59:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 03:59:52 compute-0 sudo[220000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjxzyuzizwdnsrengabdwknnvckozylh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598391.9866915-315-277939567182154/AnsiballZ_file.py'
Dec 13 03:59:52 compute-0 sudo[220000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:52 compute-0 python3.9[220002]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:52 compute-0 sudo[220000]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:52 compute-0 sudo[220152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wndhyjorcfrtqbnrltsmupyfldfvswdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598392.7174816-324-241923648114318/AnsiballZ_file.py'
Dec 13 03:59:52 compute-0 sudo[220152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:53 compute-0 python3.9[220154]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:59:53 compute-0 sudo[220152]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:53 compute-0 sudo[220304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cmorthnvtttyiwzmyxuxpiccxcoinjot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598393.3559127-332-264002718463385/AnsiballZ_stat.py'
Dec 13 03:59:53 compute-0 sudo[220304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:53 compute-0 python3.9[220306]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:53 compute-0 sudo[220304]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:54 compute-0 sudo[220382]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztfqfthvfgbjnqzzcuviqpbyldfytznq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598393.3559127-332-264002718463385/AnsiballZ_file.py'
Dec 13 03:59:54 compute-0 sudo[220382]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:54 compute-0 ceph-mon[75071]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:54 compute-0 python3.9[220384]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:59:54 compute-0 sudo[220382]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:54 compute-0 sudo[220534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qahlhphiuixwgfuchodfroyykyghtcnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598394.3993828-332-245673386804808/AnsiballZ_stat.py'
Dec 13 03:59:54 compute-0 sudo[220534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:54 compute-0 python3.9[220536]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 03:59:54 compute-0 sudo[220534]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:55 compute-0 sudo[220612]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqgbunaddermnkmnebrjmzcpvkjhpbww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598394.3993828-332-245673386804808/AnsiballZ_file.py'
Dec 13 03:59:55 compute-0 sudo[220612]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:55 compute-0 python3.9[220614]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 03:59:55 compute-0 sudo[220612]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:55 compute-0 sudo[220764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdpjwuvqbqxzldqtvoichxbtmbzfgoem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598395.4944372-355-117212395293661/AnsiballZ_file.py'
Dec 13 03:59:55 compute-0 sudo[220764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:55 compute-0 python3.9[220766]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:56 compute-0 sudo[220764]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:56 compute-0 ceph-mon[75071]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:56 compute-0 sudo[220916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycngrqvrrzrungyrrialxowwkzjjadcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598396.1968923-363-265940065528635/AnsiballZ_stat.py'
Dec 13 03:59:56 compute-0 sudo[220916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:56 compute-0 python3.9[220918]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:56 compute-0 sudo[220916]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:56 compute-0 sudo[220994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmjfrmovsooiqadzsppeuytdubulkaql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598396.1968923-363-265940065528635/AnsiballZ_file.py'
Dec 13 03:59:56 compute-0 sudo[220994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:57 compute-0 python3.9[220996]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:57 compute-0 sudo[220994]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:57 compute-0 sudo[221146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzvjalnblkizxrpjsvityqdbouwazqon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598397.2785387-375-221469788148338/AnsiballZ_stat.py'
Dec 13 03:59:57 compute-0 sudo[221146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:57 compute-0 python3.9[221148]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 03:59:57 compute-0 sudo[221146]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:57 compute-0 sudo[221224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gykltvmupuwodeyypedbhznwdudsfikx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598397.2785387-375-221469788148338/AnsiballZ_file.py'
Dec 13 03:59:57 compute-0 sudo[221224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:58 compute-0 ceph-mon[75071]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:58 compute-0 python3.9[221226]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 03:59:58 compute-0 sudo[221224]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:58 compute-0 sudo[221376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyrqxxjukuqrbhivavysvlohybncqwmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598398.3606348-387-227927185790124/AnsiballZ_systemd.py'
Dec 13 03:59:58 compute-0 sudo[221376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:58 compute-0 python3.9[221378]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 03:59:59 compute-0 systemd[1]: Reloading.
Dec 13 03:59:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 03:59:59 compute-0 systemd-rc-local-generator[221402]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 03:59:59 compute-0 systemd-sysv-generator[221407]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 03:59:59 compute-0 sudo[221376]: pam_unix(sudo:session): session closed for user root
Dec 13 03:59:59 compute-0 sudo[221566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrnsukqyrxdnadsyulldrnfprwfncpuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598399.5504184-395-272413279465488/AnsiballZ_stat.py'
Dec 13 03:59:59 compute-0 sudo[221566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 03:59:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:00 compute-0 python3.9[221568]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:00:00 compute-0 sudo[221566]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:00 compute-0 ceph-mon[75071]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:00 compute-0 sudo[221644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aadrrmzirgvztuxxuieawybwrgqwegyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598399.5504184-395-272413279465488/AnsiballZ_file.py'
Dec 13 04:00:00 compute-0 sudo[221644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:00 compute-0 python3.9[221646]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:00 compute-0 sudo[221644]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:01 compute-0 sudo[221796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqvlrasmhtvofwvkgjkjcvefiyfcritp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598400.9880471-407-215607255503092/AnsiballZ_stat.py'
Dec 13 04:00:01 compute-0 sudo[221796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:01 compute-0 python3.9[221798]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:00:01 compute-0 sudo[221796]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:01 compute-0 ceph-mon[75071]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:01 compute-0 sudo[221874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnjqpcenrgqpjnxynjayavfsurpntyfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598400.9880471-407-215607255503092/AnsiballZ_file.py'
Dec 13 04:00:01 compute-0 sudo[221874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:01 compute-0 python3.9[221876]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:01 compute-0 sudo[221874]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:02 compute-0 sudo[222026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggjmaxehwfpksydporqhzbjaxhokywmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598402.0190048-419-184802280115183/AnsiballZ_systemd.py'
Dec 13 04:00:02 compute-0 sudo[222026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:02 compute-0 python3.9[222028]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:02 compute-0 systemd[1]: Reloading.
Dec 13 04:00:02 compute-0 systemd-rc-local-generator[222062]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:02 compute-0 systemd-sysv-generator[222067]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:02 compute-0 sudo[222031]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:00:02 compute-0 sudo[222031]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:02 compute-0 sudo[222031]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:03 compute-0 systemd[1]: Starting Create netns directory...
Dec 13 04:00:03 compute-0 sudo[222098]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:00:03 compute-0 sudo[222098]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:03 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec 13 04:00:03 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec 13 04:00:03 compute-0 systemd[1]: Finished Create netns directory.
Dec 13 04:00:03 compute-0 podman[222090]: 2025-12-13 04:00:03.046198983 +0000 UTC m=+0.086823684 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:00:03 compute-0 sudo[222026]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:04 compute-0 sudo[222098]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:00:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:00:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:00:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:00:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:00:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:00:04 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:00:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:00:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:00:04 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:00:04 compute-0 sudo[222300]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:00:04 compute-0 sudo[222300]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:04 compute-0 sudo[222349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlanivvxqmbjwmirmsichqcmwnfpnqgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598403.9043508-429-169043049025844/AnsiballZ_file.py'
Dec 13 04:00:04 compute-0 sudo[222349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:04 compute-0 sudo[222300]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:04 compute-0 sudo[222354]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:00:04 compute-0 sudo[222354]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:04 compute-0 python3.9[222353]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:00:04 compute-0 sudo[222349]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:04 compute-0 podman[222394]: 2025-12-13 04:00:04.489374057 +0000 UTC m=+0.042989750 container create efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:00:04 compute-0 systemd[1]: Started libpod-conmon-efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af.scope.
Dec 13 04:00:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:04 compute-0 podman[222394]: 2025-12-13 04:00:04.469078205 +0000 UTC m=+0.022693908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:04 compute-0 podman[222394]: 2025-12-13 04:00:04.569476668 +0000 UTC m=+0.123092371 container init efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:00:04 compute-0 podman[222394]: 2025-12-13 04:00:04.580300071 +0000 UTC m=+0.133915754 container start efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:00:04 compute-0 podman[222394]: 2025-12-13 04:00:04.583864089 +0000 UTC m=+0.137479792 container attach efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:00:04 compute-0 bold_bhaskara[222430]: 167 167
Dec 13 04:00:04 compute-0 systemd[1]: libpod-efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af.scope: Deactivated successfully.
Dec 13 04:00:04 compute-0 podman[222394]: 2025-12-13 04:00:04.588859414 +0000 UTC m=+0.142475097 container died efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-1b7d1ba5e88d71a86ed4d1b9e063475b9610dc0af418f2701cfb2552fd84d1a9-merged.mount: Deactivated successfully.
Dec 13 04:00:04 compute-0 podman[222394]: 2025-12-13 04:00:04.648610771 +0000 UTC m=+0.202226454 container remove efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:00:04 compute-0 systemd[1]: libpod-conmon-efa2a89cbaca1369e68863f5cf472722f6420f036a4393ed47c3693c16e7f2af.scope: Deactivated successfully.
Dec 13 04:00:04 compute-0 podman[222552]: 2025-12-13 04:00:04.799147728 +0000 UTC m=+0.039672451 container create 1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mahavira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:00:04 compute-0 sudo[222592]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujeutxutameqteyaoajwbnzimeuabbzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598404.5619078-437-37689307869180/AnsiballZ_stat.py'
Dec 13 04:00:04 compute-0 sudo[222592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:04 compute-0 systemd[1]: Started libpod-conmon-1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26.scope.
Dec 13 04:00:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d41d3821ac56b6cefff300c049a625a847cc8795c1ae00ecc30c185abe00839/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d41d3821ac56b6cefff300c049a625a847cc8795c1ae00ecc30c185abe00839/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:04 compute-0 podman[222552]: 2025-12-13 04:00:04.781472766 +0000 UTC m=+0.021997519 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d41d3821ac56b6cefff300c049a625a847cc8795c1ae00ecc30c185abe00839/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d41d3821ac56b6cefff300c049a625a847cc8795c1ae00ecc30c185abe00839/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d41d3821ac56b6cefff300c049a625a847cc8795c1ae00ecc30c185abe00839/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:04 compute-0 podman[222552]: 2025-12-13 04:00:04.899987922 +0000 UTC m=+0.140512685 container init 1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mahavira, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:00:04 compute-0 podman[222552]: 2025-12-13 04:00:04.90763912 +0000 UTC m=+0.148163853 container start 1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mahavira, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:00:04 compute-0 podman[222552]: 2025-12-13 04:00:04.912675947 +0000 UTC m=+0.153200680 container attach 1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:00:04 compute-0 python3.9[222594]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:00:05 compute-0 sudo[222592]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:05 compute-0 sudo[222735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtndbzdsfmplnelzxxfrkjjyjgvogkyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598404.5619078-437-37689307869180/AnsiballZ_copy.py'
Dec 13 04:00:05 compute-0 sudo[222735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:05 compute-0 lucid_mahavira[222598]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:00:05 compute-0 lucid_mahavira[222598]: --> All data devices are unavailable
Dec 13 04:00:05 compute-0 systemd[1]: libpod-1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26.scope: Deactivated successfully.
Dec 13 04:00:05 compute-0 podman[222552]: 2025-12-13 04:00:05.437413397 +0000 UTC m=+0.677938130 container died 1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mahavira, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:00:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d41d3821ac56b6cefff300c049a625a847cc8795c1ae00ecc30c185abe00839-merged.mount: Deactivated successfully.
Dec 13 04:00:05 compute-0 podman[222552]: 2025-12-13 04:00:05.479850643 +0000 UTC m=+0.720375376 container remove 1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_mahavira, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:00:05 compute-0 systemd[1]: libpod-conmon-1f8c7d224c9f4fc43cddf6daba7e2d2ebb740947a30293de1bd4b7f0cea66a26.scope: Deactivated successfully.
Dec 13 04:00:05 compute-0 sudo[222354]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:05 compute-0 python3.9[222737]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598404.5619078-437-37689307869180/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:00:05 compute-0 sudo[222735]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:05 compute-0 sudo[222754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:00:05 compute-0 sudo[222754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:05 compute-0 sudo[222754]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:05 compute-0 sudo[222791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:00:05 compute-0 sudo[222791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:05 compute-0 podman[222839]: 2025-12-13 04:00:05.968539872 +0000 UTC m=+0.049191130 container create d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:06 compute-0 systemd[1]: Started libpod-conmon-d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185.scope.
Dec 13 04:00:06 compute-0 podman[222839]: 2025-12-13 04:00:05.949058922 +0000 UTC m=+0.029710200 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:06 compute-0 podman[222839]: 2025-12-13 04:00:06.067385561 +0000 UTC m=+0.148036819 container init d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilbur, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:06 compute-0 podman[222839]: 2025-12-13 04:00:06.080842138 +0000 UTC m=+0.161493406 container start d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilbur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:00:06 compute-0 podman[222839]: 2025-12-13 04:00:06.084821027 +0000 UTC m=+0.165472285 container attach d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilbur, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 04:00:06 compute-0 vigorous_wilbur[222896]: 167 167
Dec 13 04:00:06 compute-0 systemd[1]: libpod-d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185.scope: Deactivated successfully.
Dec 13 04:00:06 compute-0 podman[222839]: 2025-12-13 04:00:06.087699305 +0000 UTC m=+0.168350573 container died d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilbur, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d1c57008689dcfc83970375f63730e563d86ac17e631be96389980eacf4ee43-merged.mount: Deactivated successfully.
Dec 13 04:00:06 compute-0 podman[222839]: 2025-12-13 04:00:06.129354518 +0000 UTC m=+0.210005796 container remove d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_wilbur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:00:06 compute-0 systemd[1]: libpod-conmon-d0039825b7c4fff7eb59b6f660280af3e550dcc06ab19d5e1bf539e901b64185.scope: Deactivated successfully.
Dec 13 04:00:06 compute-0 ceph-mon[75071]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:06 compute-0 podman[222978]: 2025-12-13 04:00:06.313856809 +0000 UTC m=+0.049065736 container create 5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Dec 13 04:00:06 compute-0 sudo[223018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwbbsgtvvowgbbrnzvjjtdfyqqilqovo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598405.9733245-454-213683016936723/AnsiballZ_file.py'
Dec 13 04:00:06 compute-0 sudo[223018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:06 compute-0 systemd[1]: Started libpod-conmon-5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4.scope.
Dec 13 04:00:06 compute-0 podman[222978]: 2025-12-13 04:00:06.293227328 +0000 UTC m=+0.028436255 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7818161b4ba2ea7565d051d1eb8e266c7a556bcbea733f670cdd992d91917d26/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7818161b4ba2ea7565d051d1eb8e266c7a556bcbea733f670cdd992d91917d26/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7818161b4ba2ea7565d051d1eb8e266c7a556bcbea733f670cdd992d91917d26/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7818161b4ba2ea7565d051d1eb8e266c7a556bcbea733f670cdd992d91917d26/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:06 compute-0 podman[222978]: 2025-12-13 04:00:06.427382999 +0000 UTC m=+0.162591956 container init 5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_ellis, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:00:06 compute-0 podman[222978]: 2025-12-13 04:00:06.436118887 +0000 UTC m=+0.171327814 container start 5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_ellis, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:00:06 compute-0 podman[222978]: 2025-12-13 04:00:06.440564167 +0000 UTC m=+0.175773154 container attach 5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:00:06 compute-0 python3.9[223020]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:00:06 compute-0 sudo[223018]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:06 compute-0 romantic_ellis[223023]: {
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:     "0": [
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:         {
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "devices": [
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "/dev/loop3"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             ],
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_name": "ceph_lv0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_size": "21470642176",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "name": "ceph_lv0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "tags": {
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cluster_name": "ceph",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.crush_device_class": "",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.encrypted": "0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.objectstore": "bluestore",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osd_id": "0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.type": "block",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.vdo": "0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.with_tpm": "0"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             },
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "type": "block",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "vg_name": "ceph_vg0"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:         }
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:     ],
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:     "1": [
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:         {
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "devices": [
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "/dev/loop4"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             ],
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_name": "ceph_lv1",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_size": "21470642176",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "name": "ceph_lv1",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "tags": {
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cluster_name": "ceph",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.crush_device_class": "",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.encrypted": "0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.objectstore": "bluestore",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osd_id": "1",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.type": "block",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.vdo": "0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.with_tpm": "0"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             },
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "type": "block",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "vg_name": "ceph_vg1"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:         }
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:     ],
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:     "2": [
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:         {
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "devices": [
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "/dev/loop5"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             ],
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_name": "ceph_lv2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_size": "21470642176",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "name": "ceph_lv2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "tags": {
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.cluster_name": "ceph",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.crush_device_class": "",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.encrypted": "0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.objectstore": "bluestore",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osd_id": "2",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.type": "block",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.vdo": "0",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:                 "ceph.with_tpm": "0"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             },
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "type": "block",
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:             "vg_name": "ceph_vg2"
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:         }
Dec 13 04:00:06 compute-0 romantic_ellis[223023]:     ]
Dec 13 04:00:06 compute-0 romantic_ellis[223023]: }
Dec 13 04:00:06 compute-0 systemd[1]: libpod-5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4.scope: Deactivated successfully.
Dec 13 04:00:06 compute-0 podman[222978]: 2025-12-13 04:00:06.803741971 +0000 UTC m=+0.538950898 container died 5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:00:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-7818161b4ba2ea7565d051d1eb8e266c7a556bcbea733f670cdd992d91917d26-merged.mount: Deactivated successfully.
Dec 13 04:00:06 compute-0 podman[222978]: 2025-12-13 04:00:06.855609703 +0000 UTC m=+0.590818630 container remove 5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:00:06 compute-0 systemd[1]: libpod-conmon-5a6cbe4783fd0a53e7aac8fc9d2342b9d0aeb2d4edf5e135cf19b61f6162edb4.scope: Deactivated successfully.
Dec 13 04:00:06 compute-0 sudo[222791]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:06 compute-0 sudo[223142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:00:07 compute-0 sudo[223142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:07 compute-0 sudo[223142]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:07 compute-0 sudo[223237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pplzmnwprfwximqwdgwwqtlqangxihlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598406.7872195-462-99041359673025/AnsiballZ_stat.py'
Dec 13 04:00:07 compute-0 sudo[223237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:07 compute-0 sudo[223197]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:00:07 compute-0 sudo[223197]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:07 compute-0 python3.9[223242]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:00:07 compute-0 sudo[223237]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:07 compute-0 podman[223256]: 2025-12-13 04:00:07.344933779 +0000 UTC m=+0.039251579 container create 59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:00:07 compute-0 systemd[1]: Started libpod-conmon-59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2.scope.
Dec 13 04:00:07 compute-0 podman[223256]: 2025-12-13 04:00:07.327465864 +0000 UTC m=+0.021783684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:07 compute-0 podman[223256]: 2025-12-13 04:00:07.436004248 +0000 UTC m=+0.130322068 container init 59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dhawan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:00:07 compute-0 podman[223256]: 2025-12-13 04:00:07.443880152 +0000 UTC m=+0.138197952 container start 59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dhawan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:00:07 compute-0 exciting_dhawan[223312]: 167 167
Dec 13 04:00:07 compute-0 systemd[1]: libpod-59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2.scope: Deactivated successfully.
Dec 13 04:00:07 compute-0 podman[223256]: 2025-12-13 04:00:07.448452586 +0000 UTC m=+0.142770406 container attach 59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dhawan, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:00:07 compute-0 podman[223256]: 2025-12-13 04:00:07.449513356 +0000 UTC m=+0.143831156 container died 59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dhawan, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:00:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-42a704cfc70e3df119a659355e0cec6521049257ef6d290a8d9623b5885a1c5f-merged.mount: Deactivated successfully.
Dec 13 04:00:07 compute-0 podman[223256]: 2025-12-13 04:00:07.490992954 +0000 UTC m=+0.185310754 container remove 59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:00:07 compute-0 systemd[1]: libpod-conmon-59ec7f7ad9e01cbbe23669183a2974f43c9e99f3658ab72d7b50db3ac776c2f2.scope: Deactivated successfully.
Dec 13 04:00:07 compute-0 sudo[223410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upwntxkkvjwyhaalchrftchugeiyvlns ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598406.7872195-462-99041359673025/AnsiballZ_copy.py'
Dec 13 04:00:07 compute-0 sudo[223410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:07 compute-0 podman[223418]: 2025-12-13 04:00:07.670708695 +0000 UTC m=+0.048329387 container create c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:00:07 compute-0 systemd[1]: Started libpod-conmon-c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f.scope.
Dec 13 04:00:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538e343d4544a33c71a06c4e1c9783029402419a51720919a237b7d562ae6a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538e343d4544a33c71a06c4e1c9783029402419a51720919a237b7d562ae6a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538e343d4544a33c71a06c4e1c9783029402419a51720919a237b7d562ae6a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0538e343d4544a33c71a06c4e1c9783029402419a51720919a237b7d562ae6a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:07 compute-0 podman[223418]: 2025-12-13 04:00:07.649851367 +0000 UTC m=+0.027472049 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:00:07 compute-0 podman[223418]: 2025-12-13 04:00:07.760677494 +0000 UTC m=+0.138298196 container init c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williamson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:00:07 compute-0 podman[223418]: 2025-12-13 04:00:07.768942999 +0000 UTC m=+0.146563671 container start c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:00:07 compute-0 podman[223418]: 2025-12-13 04:00:07.772515476 +0000 UTC m=+0.150136208 container attach c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:00:07 compute-0 python3.9[223415]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598406.7872195-462-99041359673025/.source.json _original_basename=.l63tj4qd follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:07 compute-0 sudo[223410]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:08 compute-0 ceph-mon[75071]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:08 compute-0 sudo[223634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaxcqsrcxytwjxlzggvmveousvvfalni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598407.977595-477-137351059275345/AnsiballZ_file.py'
Dec 13 04:00:08 compute-0 sudo[223634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:08 compute-0 python3.9[223640]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:08 compute-0 lvm[223665]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:00:08 compute-0 lvm[223665]: VG ceph_vg1 finished
Dec 13 04:00:08 compute-0 lvm[223664]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:00:08 compute-0 lvm[223664]: VG ceph_vg0 finished
Dec 13 04:00:08 compute-0 sudo[223634]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:08 compute-0 lvm[223667]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:00:08 compute-0 lvm[223667]: VG ceph_vg2 finished
Dec 13 04:00:08 compute-0 condescending_williamson[223434]: {}
Dec 13 04:00:08 compute-0 systemd[1]: libpod-c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f.scope: Deactivated successfully.
Dec 13 04:00:08 compute-0 systemd[1]: libpod-c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f.scope: Consumed 1.301s CPU time.
Dec 13 04:00:08 compute-0 podman[223418]: 2025-12-13 04:00:08.600087577 +0000 UTC m=+0.977708259 container died c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:00:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-0538e343d4544a33c71a06c4e1c9783029402419a51720919a237b7d562ae6a1-merged.mount: Deactivated successfully.
Dec 13 04:00:08 compute-0 podman[223418]: 2025-12-13 04:00:08.647026865 +0000 UTC m=+1.024647547 container remove c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:00:08 compute-0 systemd[1]: libpod-conmon-c691c3b7cb01691ab971d1557511884833852b129e89e58e578a9a546e83856f.scope: Deactivated successfully.
Dec 13 04:00:08 compute-0 sudo[223197]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:00:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:00:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:00:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:00:08 compute-0 sudo[223757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:00:08 compute-0 sudo[223757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:00:08 compute-0 sudo[223757]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:08 compute-0 sudo[223855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tchiugtpjttnuednutrsrqkmbiarerqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598408.6443655-485-120837163334459/AnsiballZ_stat.py'
Dec 13 04:00:08 compute-0 sudo[223855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:09 compute-0 sudo[223855]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:09 compute-0 sudo[223978]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdvkikwrmbeidtrtcratchbfdzgowutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598408.6443655-485-120837163334459/AnsiballZ_copy.py'
Dec 13 04:00:09 compute-0 sudo[223978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:00:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:00:09 compute-0 ceph-mon[75071]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:09 compute-0 sudo[223978]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:09 compute-0 podman[224005]: 2025-12-13 04:00:09.935744476 +0000 UTC m=+0.071849505 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:00:10 compute-0 sudo[224150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtaesxlybyuucxnyotzzyxxglzuyqvzz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598410.0292795-502-213245681190347/AnsiballZ_container_config_data.py'
Dec 13 04:00:10 compute-0 sudo[224150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:10 compute-0 python3.9[224152]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec 13 04:00:10 compute-0 sudo[224150]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:11 compute-0 sudo[224302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgjdfaiioijlbjrgrwxsporxneuuvsbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598411.009218-511-227492379816176/AnsiballZ_container_config_hash.py'
Dec 13 04:00:11 compute-0 sudo[224302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:11 compute-0 python3.9[224304]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 04:00:11 compute-0 sudo[224302]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:12 compute-0 ceph-mon[75071]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:12 compute-0 sudo[224454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erfjbujpfnwpjixmmmxulgqcreebeghr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598411.9217122-520-211757739434150/AnsiballZ_podman_container_info.py'
Dec 13 04:00:12 compute-0 sudo[224454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:12 compute-0 python3.9[224456]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec 13 04:00:12 compute-0 sudo[224454]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:13 compute-0 sudo[224632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qksznfqjslibhtislqzkfkyehabpbjgh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765598413.4299448-533-204555770026562/AnsiballZ_edpm_container_manage.py'
Dec 13 04:00:13 compute-0 sudo[224632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:14 compute-0 ceph-mon[75071]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:14 compute-0 python3[224634]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 04:00:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:15 compute-0 podman[224647]: 2025-12-13 04:00:15.48408121 +0000 UTC m=+1.179483809 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 13 04:00:15 compute-0 podman[224705]: 2025-12-13 04:00:15.602987885 +0000 UTC m=+0.041296035 container create b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:00:15 compute-0 podman[224705]: 2025-12-13 04:00:15.581693396 +0000 UTC m=+0.020001566 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 13 04:00:15 compute-0 python3[224634]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec 13 04:00:15 compute-0 sudo[224632]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:16 compute-0 sudo[224893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htrifplskyymjgxotxbmtcbfxgscsgif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598415.8806086-541-234271373454042/AnsiballZ_stat.py'
Dec 13 04:00:16 compute-0 sudo[224893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:16 compute-0 ceph-mon[75071]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:16 compute-0 python3.9[224895]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:00:16 compute-0 sudo[224893]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:16 compute-0 sudo[225047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyvhiyjjwrwhwjxhkxbinrpibmajiqhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598416.5410795-550-165071568717082/AnsiballZ_file.py'
Dec 13 04:00:16 compute-0 sudo[225047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:16 compute-0 python3.9[225049]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:16 compute-0 sudo[225047]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:17 compute-0 sudo[225123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmvmxglqmtipwznxlltfsqemuphyodvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598416.5410795-550-165071568717082/AnsiballZ_stat.py'
Dec 13 04:00:17 compute-0 sudo[225123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:17 compute-0 python3.9[225125]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:00:17 compute-0 sudo[225123]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:17 compute-0 sudo[225274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovkjrweyxaltrezjbjmzcceqlztkedlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598417.4182613-550-88595747991741/AnsiballZ_copy.py'
Dec 13 04:00:17 compute-0 sudo[225274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:18 compute-0 python3.9[225276]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765598417.4182613-550-88595747991741/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:18 compute-0 sudo[225274]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:18 compute-0 ceph-mon[75071]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:18 compute-0 sudo[225350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bidfnfvhftxcoxtutlotmseypcixbggg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598417.4182613-550-88595747991741/AnsiballZ_systemd.py'
Dec 13 04:00:18 compute-0 sudo[225350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:18 compute-0 python3.9[225352]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 04:00:18 compute-0 systemd[1]: Reloading.
Dec 13 04:00:18 compute-0 systemd-rc-local-generator[225375]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:18 compute-0 systemd-sysv-generator[225382]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:18 compute-0 sudo[225350]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:19 compute-0 sudo[225461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfkuhdsoujnjkirgryqrbllmphiuweht ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598417.4182613-550-88595747991741/AnsiballZ_systemd.py'
Dec 13 04:00:19 compute-0 sudo[225461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:19 compute-0 python3.9[225463]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:19 compute-0 systemd[1]: Reloading.
Dec 13 04:00:19 compute-0 systemd-rc-local-generator[225492]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:19 compute-0 systemd-sysv-generator[225495]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:19 compute-0 systemd[1]: Starting multipathd container...
Dec 13 04:00:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7551a04b12a491be1393baec75293f272fe74004da9ce5dea0959a4ecd7b3a3d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7551a04b12a491be1393baec75293f272fe74004da9ce5dea0959a4ecd7b3a3d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:19 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562.
Dec 13 04:00:19 compute-0 podman[225503]: 2025-12-13 04:00:19.899952375 +0000 UTC m=+0.111613929 container init b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:00:19 compute-0 multipathd[225518]: + sudo -E kolla_set_configs
Dec 13 04:00:19 compute-0 podman[225503]: 2025-12-13 04:00:19.923347441 +0000 UTC m=+0.135008975 container start b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:19 compute-0 podman[225503]: multipathd
Dec 13 04:00:19 compute-0 sudo[225524]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 13 04:00:19 compute-0 sudo[225524]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 04:00:19 compute-0 systemd[1]: Started multipathd container.
Dec 13 04:00:19 compute-0 sudo[225524]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 04:00:19 compute-0 sudo[225461]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:19 compute-0 multipathd[225518]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 04:00:19 compute-0 multipathd[225518]: INFO:__main__:Validating config file
Dec 13 04:00:19 compute-0 multipathd[225518]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 04:00:19 compute-0 multipathd[225518]: INFO:__main__:Writing out command to execute
Dec 13 04:00:19 compute-0 sudo[225524]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:19 compute-0 multipathd[225518]: ++ cat /run_command
Dec 13 04:00:19 compute-0 multipathd[225518]: + CMD='/usr/sbin/multipathd -d'
Dec 13 04:00:19 compute-0 multipathd[225518]: + ARGS=
Dec 13 04:00:19 compute-0 multipathd[225518]: + sudo kolla_copy_cacerts
Dec 13 04:00:19 compute-0 podman[225525]: 2025-12-13 04:00:19.995772693 +0000 UTC m=+0.061830894 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:00:19 compute-0 sudo[225548]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 13 04:00:20 compute-0 sudo[225548]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 04:00:20 compute-0 sudo[225548]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 04:00:20 compute-0 systemd[1]: b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562-3cbf85a714b31284.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:00:20 compute-0 systemd[1]: b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562-3cbf85a714b31284.service: Failed with result 'exit-code'.
Dec 13 04:00:20 compute-0 sudo[225548]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:20 compute-0 multipathd[225518]: + [[ ! -n '' ]]
Dec 13 04:00:20 compute-0 multipathd[225518]: + . kolla_extend_start
Dec 13 04:00:20 compute-0 multipathd[225518]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 04:00:20 compute-0 multipathd[225518]: Running command: '/usr/sbin/multipathd -d'
Dec 13 04:00:20 compute-0 multipathd[225518]: + umask 0022
Dec 13 04:00:20 compute-0 multipathd[225518]: + exec /usr/sbin/multipathd -d
Dec 13 04:00:20 compute-0 multipathd[225518]: 3158.754618 | --------start up--------
Dec 13 04:00:20 compute-0 multipathd[225518]: 3158.754656 | read /etc/multipath.conf
Dec 13 04:00:20 compute-0 multipathd[225518]: 3158.759510 | path checkers start up
Dec 13 04:00:20 compute-0 ceph-mon[75071]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:20 compute-0 python3.9[225706]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:00:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:21 compute-0 sudo[225858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdjdtnolyugtquvbnduggytckwouhiiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598420.8167512-586-51778048773515/AnsiballZ_command.py'
Dec 13 04:00:21 compute-0 sudo[225858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:21 compute-0 python3.9[225860]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:00:21 compute-0 sudo[225858]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:21 compute-0 sudo[226023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unjczlduhjmaiekqjygvnvdwllzlshco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598421.5323904-594-27510372961807/AnsiballZ_systemd.py'
Dec 13 04:00:21 compute-0 sudo[226023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:22 compute-0 ceph-mon[75071]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:22 compute-0 python3.9[226025]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 04:00:22 compute-0 systemd[1]: Stopping multipathd container...
Dec 13 04:00:22 compute-0 multipathd[225518]: 3161.025289 | exit (signal)
Dec 13 04:00:22 compute-0 multipathd[225518]: 3161.026322 | --------shut down-------
Dec 13 04:00:22 compute-0 systemd[1]: libpod-b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562.scope: Deactivated successfully.
Dec 13 04:00:22 compute-0 podman[226029]: 2025-12-13 04:00:22.321068005 +0000 UTC m=+0.074095407 container died b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:00:22 compute-0 systemd[1]: b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562-3cbf85a714b31284.timer: Deactivated successfully.
Dec 13 04:00:22 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562.
Dec 13 04:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562-userdata-shm.mount: Deactivated successfully.
Dec 13 04:00:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-7551a04b12a491be1393baec75293f272fe74004da9ce5dea0959a4ecd7b3a3d-merged.mount: Deactivated successfully.
Dec 13 04:00:22 compute-0 podman[226029]: 2025-12-13 04:00:22.545450801 +0000 UTC m=+0.298478213 container cleanup b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS)
Dec 13 04:00:22 compute-0 podman[226029]: multipathd
Dec 13 04:00:22 compute-0 podman[226060]: multipathd
Dec 13 04:00:22 compute-0 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec 13 04:00:22 compute-0 systemd[1]: Stopped multipathd container.
Dec 13 04:00:22 compute-0 systemd[1]: Starting multipathd container...
Dec 13 04:00:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7551a04b12a491be1393baec75293f272fe74004da9ce5dea0959a4ecd7b3a3d/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7551a04b12a491be1393baec75293f272fe74004da9ce5dea0959a4ecd7b3a3d/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 04:00:22 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562.
Dec 13 04:00:22 compute-0 podman[226074]: 2025-12-13 04:00:22.729313035 +0000 UTC m=+0.104786942 container init b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:00:22 compute-0 multipathd[226089]: + sudo -E kolla_set_configs
Dec 13 04:00:22 compute-0 sudo[226095]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Dec 13 04:00:22 compute-0 sudo[226095]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 04:00:22 compute-0 sudo[226095]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 04:00:22 compute-0 podman[226074]: 2025-12-13 04:00:22.764501253 +0000 UTC m=+0.139975130 container start b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:00:22 compute-0 podman[226074]: multipathd
Dec 13 04:00:22 compute-0 systemd[1]: Started multipathd container.
Dec 13 04:00:22 compute-0 sudo[226023]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:22 compute-0 multipathd[226089]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 04:00:22 compute-0 multipathd[226089]: INFO:__main__:Validating config file
Dec 13 04:00:22 compute-0 multipathd[226089]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 04:00:22 compute-0 multipathd[226089]: INFO:__main__:Writing out command to execute
Dec 13 04:00:22 compute-0 sudo[226095]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:22 compute-0 multipathd[226089]: ++ cat /run_command
Dec 13 04:00:22 compute-0 podman[226096]: 2025-12-13 04:00:22.838304392 +0000 UTC m=+0.063283144 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 04:00:22 compute-0 multipathd[226089]: + CMD='/usr/sbin/multipathd -d'
Dec 13 04:00:22 compute-0 multipathd[226089]: + ARGS=
Dec 13 04:00:22 compute-0 multipathd[226089]: + sudo kolla_copy_cacerts
Dec 13 04:00:22 compute-0 systemd[1]: b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562-5b6e260626288ffa.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 04:00:22 compute-0 systemd[1]: b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562-5b6e260626288ffa.service: Failed with result 'exit-code'.
Dec 13 04:00:22 compute-0 sudo[226126]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Dec 13 04:00:22 compute-0 sudo[226126]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Dec 13 04:00:22 compute-0 sudo[226126]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Dec 13 04:00:22 compute-0 sudo[226126]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:22 compute-0 multipathd[226089]: + [[ ! -n '' ]]
Dec 13 04:00:22 compute-0 multipathd[226089]: + . kolla_extend_start
Dec 13 04:00:22 compute-0 multipathd[226089]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec 13 04:00:22 compute-0 multipathd[226089]: Running command: '/usr/sbin/multipathd -d'
Dec 13 04:00:22 compute-0 multipathd[226089]: + umask 0022
Dec 13 04:00:22 compute-0 multipathd[226089]: + exec /usr/sbin/multipathd -d
Dec 13 04:00:22 compute-0 multipathd[226089]: 3161.612653 | --------start up--------
Dec 13 04:00:22 compute-0 multipathd[226089]: 3161.612683 | read /etc/multipath.conf
Dec 13 04:00:22 compute-0 multipathd[226089]: 3161.619250 | path checkers start up
Dec 13 04:00:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:23 compute-0 sudo[226278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqhqgwdokkymsbvualvpulvhhqymwqsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598422.9823613-602-9776775886020/AnsiballZ_file.py'
Dec 13 04:00:23 compute-0 sudo[226278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:23 compute-0 python3.9[226280]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:23 compute-0 sudo[226278]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:24 compute-0 sudo[226430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbhsiamanwsxoesqdqnfjkrqljdljayh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598423.809027-614-177277552357670/AnsiballZ_file.py'
Dec 13 04:00:24 compute-0 sudo[226430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:24 compute-0 ceph-mon[75071]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:24 compute-0 python3.9[226432]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec 13 04:00:24 compute-0 sudo[226430]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:24 compute-0 sudo[226582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwthivjzpgvocbkgewkljoeehedcccmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598424.450359-622-169817298166974/AnsiballZ_modprobe.py'
Dec 13 04:00:24 compute-0 sudo[226582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:24 compute-0 python3.9[226584]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec 13 04:00:24 compute-0 kernel: Key type psk registered
Dec 13 04:00:24 compute-0 sudo[226582]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:25 compute-0 sudo[226745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abdyqekcktphboaplkzxvoldqgmcmpnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598425.0650876-630-70369294596332/AnsiballZ_stat.py'
Dec 13 04:00:25 compute-0 sudo[226745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:25 compute-0 python3.9[226747]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:00:25 compute-0 sudo[226745]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:25 compute-0 sudo[226868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yatodgevsphcdnoqsjtoebagmvmxwrtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598425.0650876-630-70369294596332/AnsiballZ_copy.py'
Dec 13 04:00:25 compute-0 sudo[226868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:26 compute-0 python3.9[226870]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765598425.0650876-630-70369294596332/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:26 compute-0 sudo[226868]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:26 compute-0 ceph-mon[75071]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:26 compute-0 sudo[227020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwwbxwidkjppgcbcmeokkqzdcxxcesfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598426.2972205-646-95203008750584/AnsiballZ_lineinfile.py'
Dec 13 04:00:26 compute-0 sudo[227020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:26 compute-0 python3.9[227022]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:26 compute-0 sudo[227020]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:27 compute-0 sudo[227172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chmykqapgscishwengexryywrirqnoac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598426.9358885-654-47967028550569/AnsiballZ_systemd.py'
Dec 13 04:00:27 compute-0 sudo[227172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:27 compute-0 python3.9[227174]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 04:00:27 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 04:00:27 compute-0 systemd[1]: Stopped Load Kernel Modules.
Dec 13 04:00:27 compute-0 systemd[1]: Stopping Load Kernel Modules...
Dec 13 04:00:27 compute-0 systemd[1]: Starting Load Kernel Modules...
Dec 13 04:00:27 compute-0 systemd[1]: Finished Load Kernel Modules.
Dec 13 04:00:27 compute-0 sudo[227172]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:28 compute-0 sudo[227328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpfhbehfdkzkawxfctcpgbxehgjruxbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598427.8308778-662-39577234984286/AnsiballZ_dnf.py'
Dec 13 04:00:28 compute-0 sudo[227328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:28 compute-0 ceph-mon[75071]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:28 compute-0 python3.9[227330]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec 13 04:00:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:30 compute-0 ceph-mon[75071]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:30 compute-0 systemd[1]: Reloading.
Dec 13 04:00:30 compute-0 systemd-sysv-generator[227365]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:30 compute-0 systemd-rc-local-generator[227358]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:31 compute-0 systemd[1]: Reloading.
Dec 13 04:00:31 compute-0 systemd-sysv-generator[227400]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:31 compute-0 systemd-rc-local-generator[227397]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:31 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 04:00:31 compute-0 systemd-logind[796]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec 13 04:00:31 compute-0 lvm[227444]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:00:31 compute-0 lvm[227444]: VG ceph_vg1 finished
Dec 13 04:00:31 compute-0 lvm[227445]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:00:31 compute-0 lvm[227445]: VG ceph_vg2 finished
Dec 13 04:00:31 compute-0 lvm[227446]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:00:31 compute-0 lvm[227446]: VG ceph_vg0 finished
Dec 13 04:00:31 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec 13 04:00:31 compute-0 systemd[1]: Starting man-db-cache-update.service...
Dec 13 04:00:31 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec 13 04:00:31 compute-0 systemd[1]: Reloading.
Dec 13 04:00:31 compute-0 systemd-rc-local-generator[227500]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:31 compute-0 systemd-sysv-generator[227506]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:32 compute-0 ceph-mon[75071]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Dec 13 04:00:32 compute-0 sudo[227328]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:33 compute-0 sudo[228800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybdazzqwwdihchnwvuanjbjsegjisvyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598432.8269641-670-60637795337012/AnsiballZ_systemd_service.py'
Dec 13 04:00:33 compute-0 sudo[228800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:33 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec 13 04:00:33 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec 13 04:00:33 compute-0 systemd[1]: Finished man-db-cache-update.service.
Dec 13 04:00:33 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.376s CPU time.
Dec 13 04:00:33 compute-0 systemd[1]: run-r00fe0aed6dc54b0eb06ce4c2d9e8e5ef.service: Deactivated successfully.
Dec 13 04:00:33 compute-0 podman[228803]: 2025-12-13 04:00:33.179617453 +0000 UTC m=+0.096945689 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:00:33 compute-0 python3.9[228802]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 04:00:33 compute-0 systemd[1]: Stopping Open-iSCSI...
Dec 13 04:00:33 compute-0 iscsid[216538]: iscsid shutting down.
Dec 13 04:00:33 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 04:00:33 compute-0 systemd[1]: Stopped Open-iSCSI.
Dec 13 04:00:33 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec 13 04:00:33 compute-0 systemd[1]: Starting Open-iSCSI...
Dec 13 04:00:33 compute-0 systemd[1]: Started Open-iSCSI.
Dec 13 04:00:33 compute-0 sudo[228800]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:34 compute-0 python3.9[228985]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec 13 04:00:34 compute-0 ceph-mon[75071]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:34 compute-0 sudo[229139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tboxmolnarjjawicpchaamjusdtkutke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598434.6072776-688-24388534643623/AnsiballZ_file.py'
Dec 13 04:00:34 compute-0 sudo[229139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:35 compute-0 python3.9[229141]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:35 compute-0 sudo[229139]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:00:35.073 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:00:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:00:35.074 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:00:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:00:35.075 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:00:35 compute-0 sudo[229291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pllszvxgtwoieeuhlcodtjoniwyigfjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598435.3904138-699-178124575430855/AnsiballZ_systemd_service.py'
Dec 13 04:00:35 compute-0 sudo[229291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:35 compute-0 python3.9[229293]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 04:00:35 compute-0 systemd[1]: Reloading.
Dec 13 04:00:36 compute-0 systemd-rc-local-generator[229319]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:36 compute-0 systemd-sysv-generator[229322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:36 compute-0 ceph-mon[75071]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:36 compute-0 sudo[229291]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:36 compute-0 python3.9[229478]: ansible-ansible.builtin.service_facts Invoked
Dec 13 04:00:36 compute-0 network[229495]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec 13 04:00:36 compute-0 network[229496]: 'network-scripts' will be removed from distribution in near future.
Dec 13 04:00:36 compute-0 network[229497]: It is advised to switch to 'NetworkManager' instead for network management.
Dec 13 04:00:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:38 compute-0 ceph-mon[75071]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:40 compute-0 sudo[229782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndtfehfmfctmveobioccgblkrorrmthi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598439.9595745-718-216804530411982/AnsiballZ_systemd_service.py'
Dec 13 04:00:40 compute-0 sudo[229782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:40 compute-0 ceph-mon[75071]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:40 compute-0 podman[229744]: 2025-12-13 04:00:40.251113219 +0000 UTC m=+0.059275104 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:00:40 compute-0 python3.9[229790]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:00:40
Dec 13 04:00:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:00:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:00:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'volumes', 'vms', '.mgr', 'images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Dec 13 04:00:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:00:40 compute-0 sudo[229782]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:40 compute-0 sudo[229943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pinlfwwxnwqeulfeqdgiudamosscckmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598440.675893-718-133535084195712/AnsiballZ_systemd_service.py'
Dec 13 04:00:40 compute-0 sudo[229943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:41 compute-0 python3.9[229945]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:41 compute-0 sudo[229943]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:41 compute-0 sudo[230096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iybsljizijpfmoiszmquyaxgifcemthx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598441.4038713-718-57752892230714/AnsiballZ_systemd_service.py'
Dec 13 04:00:41 compute-0 sudo[230096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:41 compute-0 python3.9[230098]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:42 compute-0 sudo[230096]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:42 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec 13 04:00:42 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:00:42 compute-0 ceph-mon[75071]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:00:42 compute-0 sudo[230251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqcrnqqddzoqihsqkkgoilzebsqtzoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598442.2071536-718-255609384659601/AnsiballZ_systemd_service.py'
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:00:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:00:42 compute-0 sudo[230251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:42 compute-0 python3.9[230253]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:42 compute-0 sudo[230251]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:43 compute-0 sudo[230404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxhnvzbikjmwrsyzmmacaaacbquguvpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598443.088611-718-269808681846571/AnsiballZ_systemd_service.py'
Dec 13 04:00:43 compute-0 sudo[230404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:43 compute-0 python3.9[230406]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:43 compute-0 sudo[230404]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:44 compute-0 sudo[230557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jovrbyzauuoheqmtnkabgebzvfzhljcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598443.7971992-718-221343594389654/AnsiballZ_systemd_service.py'
Dec 13 04:00:44 compute-0 sudo[230557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:44 compute-0 ceph-mon[75071]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:44 compute-0 python3.9[230559]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:44 compute-0 sudo[230557]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:44 compute-0 sudo[230710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzpcswtbhygfqfbvpvaaaehrkyovhare ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598444.5258567-718-243453522785516/AnsiballZ_systemd_service.py'
Dec 13 04:00:44 compute-0 sudo[230710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:45 compute-0 python3.9[230712]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:45 compute-0 sudo[230710]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:45 compute-0 sudo[230863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vubuvdxsvvdgdhkskqwntiiajuloxnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598445.1780038-718-84308748562311/AnsiballZ_systemd_service.py'
Dec 13 04:00:45 compute-0 sudo[230863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:45 compute-0 ceph-mon[75071]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:45 compute-0 python3.9[230865]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:00:45 compute-0 sudo[230863]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:46 compute-0 sudo[231016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlaucpseuekwctafgbhrxmcguvsyfrxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598446.0144033-777-227153587654943/AnsiballZ_file.py'
Dec 13 04:00:46 compute-0 sudo[231016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:46 compute-0 python3.9[231018]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:46 compute-0 sudo[231016]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:46 compute-0 sudo[231168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtqzzckobxwvznxvjmxawcnqxbtmayyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598446.5570583-777-161223769839566/AnsiballZ_file.py'
Dec 13 04:00:46 compute-0 sudo[231168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:46 compute-0 python3.9[231170]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:47 compute-0 sudo[231168]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:47 compute-0 sudo[231320]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzelgflumozzfjpzzwhyzjigvuzkkgfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598447.1109478-777-34006590489521/AnsiballZ_file.py'
Dec 13 04:00:47 compute-0 sudo[231320]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:47 compute-0 python3.9[231322]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:47 compute-0 sudo[231320]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:48 compute-0 sudo[231472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibbzcuvotmmaowelphigcanzkxicvgvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598447.7894225-777-206658379517408/AnsiballZ_file.py'
Dec 13 04:00:48 compute-0 sudo[231472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:48 compute-0 ceph-mon[75071]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:00:48 compute-0 python3.9[231474]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:48 compute-0 sudo[231472]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:48 compute-0 sudo[231624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elmqezbhtiypswphtsxlbrcugtwtniac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598448.3485527-777-211870409311706/AnsiballZ_file.py'
Dec 13 04:00:48 compute-0 sudo[231624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:48 compute-0 python3.9[231626]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:48 compute-0 sudo[231624]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 04:00:49 compute-0 sudo[231776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldbnemugvhfdlvgnhidadwgibhqxuoun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598448.935896-777-248196204044982/AnsiballZ_file.py'
Dec 13 04:00:49 compute-0 sudo[231776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:49 compute-0 python3.9[231778]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:49 compute-0 sudo[231776]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:49 compute-0 sudo[231928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zldvnxdqtbxdltxdboihamndeqqtrvte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598449.544923-777-12185623806611/AnsiballZ_file.py'
Dec 13 04:00:49 compute-0 sudo[231928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:50 compute-0 python3.9[231930]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:50 compute-0 sudo[231928]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:50 compute-0 ceph-mon[75071]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 9 op/s
Dec 13 04:00:50 compute-0 sudo[232080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuxoqmkjazvbomaebzesspogrqoetzid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598450.1880908-777-153762896859130/AnsiballZ_file.py'
Dec 13 04:00:50 compute-0 sudo[232080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:50 compute-0 python3.9[232082]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:50 compute-0 sudo[232080]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:51 compute-0 sudo[232232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkwzawqpynbqetvmnevnvgntnlhjhvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598450.816472-834-215379351746711/AnsiballZ_file.py'
Dec 13 04:00:51 compute-0 sudo[232232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:51 compute-0 python3.9[232234]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:51 compute-0 sudo[232232]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:51 compute-0 sudo[232384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaymkoltmornyxqawffxyhqyldopzuxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598451.497882-834-163566521689857/AnsiballZ_file.py'
Dec 13 04:00:51 compute-0 sudo[232384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:52 compute-0 python3.9[232386]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:52 compute-0 sudo[232384]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:52 compute-0 ceph-mon[75071]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:00:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:00:52 compute-0 sudo[232536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dizbiknjhhqoeksyaeqkjpnvedktrpxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598452.2012892-834-75453766566202/AnsiballZ_file.py'
Dec 13 04:00:52 compute-0 sudo[232536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:52 compute-0 python3.9[232538]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:52 compute-0 sudo[232536]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:53 compute-0 sudo[232701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkgzchjhsshajpcqeoufimbhmcvbciyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598452.8200915-834-87051445258245/AnsiballZ_file.py'
Dec 13 04:00:53 compute-0 sudo[232701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:53 compute-0 podman[232662]: 2025-12-13 04:00:53.139320061 +0000 UTC m=+0.047394291 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 04:00:53 compute-0 python3.9[232710]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:53 compute-0 sudo[232701]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:53 compute-0 sudo[232861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uuhouvdlwgotuavtqkpvlmiosxbyqczu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598453.4662433-834-58071778592565/AnsiballZ_file.py'
Dec 13 04:00:53 compute-0 sudo[232861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:53 compute-0 python3.9[232863]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:53 compute-0 sudo[232861]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:54 compute-0 ceph-mon[75071]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:54 compute-0 sudo[233013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auqluqtatrjkxmbgsakppxszavousrch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598454.0638087-834-79515455426294/AnsiballZ_file.py'
Dec 13 04:00:54 compute-0 sudo[233013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:54 compute-0 python3.9[233015]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:54 compute-0 sudo[233013]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:54 compute-0 sudo[233165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxzcuvddvkmasinlbeanomwwdopgtwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598454.7060337-834-137064616630980/AnsiballZ_file.py'
Dec 13 04:00:54 compute-0 sudo[233165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:55 compute-0 python3.9[233167]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:55 compute-0 sudo[233165]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:55 compute-0 sudo[233317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rljzjxxcwrlcmwsxfsigcwtlxmlyadpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598455.2841475-834-24720557205106/AnsiballZ_file.py'
Dec 13 04:00:55 compute-0 sudo[233317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:55 compute-0 python3.9[233319]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:00:55 compute-0 sudo[233317]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:56 compute-0 ceph-mon[75071]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:56 compute-0 sudo[233469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhupnebvqlqanokylldsntjreaojfvwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598456.019298-892-173352730784171/AnsiballZ_command.py'
Dec 13 04:00:56 compute-0 sudo[233469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:56 compute-0 python3.9[233471]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:00:56 compute-0 sudo[233469]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:57 compute-0 python3.9[233623]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec 13 04:00:57 compute-0 sudo[233773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiprxbevwbonwbjfrqdhgtxjrrkyvtlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598457.5522969-910-101909654890572/AnsiballZ_systemd_service.py'
Dec 13 04:00:57 compute-0 sudo[233773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:58 compute-0 python3.9[233775]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 04:00:58 compute-0 systemd[1]: Reloading.
Dec 13 04:00:58 compute-0 ceph-mon[75071]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:58 compute-0 systemd-rc-local-generator[233800]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:00:58 compute-0 systemd-sysv-generator[233804]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:00:58 compute-0 sudo[233773]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:58 compute-0 sudo[233960]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aemzmkkyvwvobvvagendjiiaajbiydex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598458.6437476-918-153815522573515/AnsiballZ_command.py'
Dec 13 04:00:58 compute-0 sudo[233960]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:00:59 compute-0 python3.9[233962]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:00:59 compute-0 sudo[233960]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:59 compute-0 sudo[234113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-foibenlkolzdrevpkdhyzaazmnrzvyiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598459.2263668-918-14588571921666/AnsiballZ_command.py'
Dec 13 04:00:59 compute-0 sudo[234113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:00:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.862134) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598459862199, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1676, "num_deletes": 251, "total_data_size": 2845223, "memory_usage": 2899496, "flush_reason": "Manual Compaction"}
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598459872955, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1614227, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11823, "largest_seqno": 13498, "table_properties": {"data_size": 1608616, "index_size": 2751, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13938, "raw_average_key_size": 20, "raw_value_size": 1596382, "raw_average_value_size": 2303, "num_data_blocks": 127, "num_entries": 693, "num_filter_entries": 693, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765598271, "oldest_key_time": 1765598271, "file_creation_time": 1765598459, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 10863 microseconds, and 4825 cpu microseconds.
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.872996) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1614227 bytes OK
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.873018) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.874505) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.874518) EVENT_LOG_v1 {"time_micros": 1765598459874515, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.874533) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2838040, prev total WAL file size 2838040, number of live WAL files 2.
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.875622) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353033' seq:0, type:0; will stop at (end)
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1576KB)], [29(8162KB)]
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598459875673, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9972675, "oldest_snapshot_seqno": -1}
Dec 13 04:00:59 compute-0 python3.9[234115]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:00:59 compute-0 sudo[234113]: pam_unix(sudo:session): session closed for user root
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4015 keys, 7803076 bytes, temperature: kUnknown
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598459936652, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7803076, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7774305, "index_size": 17653, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95739, "raw_average_key_size": 23, "raw_value_size": 7699993, "raw_average_value_size": 1917, "num_data_blocks": 768, "num_entries": 4015, "num_filter_entries": 4015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765598459, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.936961) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7803076 bytes
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.938318) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.3 rd, 127.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 8.0 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(11.0) write-amplify(4.8) OK, records in: 4440, records dropped: 425 output_compression: NoCompression
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.938359) EVENT_LOG_v1 {"time_micros": 1765598459938344, "job": 12, "event": "compaction_finished", "compaction_time_micros": 61088, "compaction_time_cpu_micros": 21668, "output_level": 6, "num_output_files": 1, "total_output_size": 7803076, "num_input_records": 4440, "num_output_records": 4015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598459938738, "job": 12, "event": "table_file_deletion", "file_number": 31}
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598459940324, "job": 12, "event": "table_file_deletion", "file_number": 29}
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.875457) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.940382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.940389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.940392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.940394) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:00:59 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:00:59.940396) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:01:00 compute-0 ceph-mon[75071]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Dec 13 04:01:00 compute-0 sudo[234266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psxgxylphioavfoihbhrmfercwpnwuyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598460.0442579-918-226737806631661/AnsiballZ_command.py'
Dec 13 04:01:00 compute-0 sudo[234266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:00 compute-0 python3.9[234268]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:01:00 compute-0 sudo[234266]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:00 compute-0 sudo[234419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enllwhcxtpyjdecthuhpgbkbtztlbvme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598460.677867-918-171421923535573/AnsiballZ_command.py'
Dec 13 04:01:00 compute-0 sudo[234419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 04:01:01 compute-0 python3.9[234421]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:01:01 compute-0 sudo[234419]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:01 compute-0 sudo[234572]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tswnivoozmdrumqejuirhyeoysmffqot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598461.3614106-918-29616436375017/AnsiballZ_command.py'
Dec 13 04:01:01 compute-0 sudo[234572]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:01 compute-0 CROND[234576]: (root) CMD (run-parts /etc/cron.hourly)
Dec 13 04:01:01 compute-0 run-parts[234579]: (/etc/cron.hourly) starting 0anacron
Dec 13 04:01:01 compute-0 python3.9[234574]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:01:01 compute-0 anacron[234588]: Anacron started on 2025-12-13
Dec 13 04:01:01 compute-0 anacron[234588]: Will run job `cron.daily' in 35 min.
Dec 13 04:01:01 compute-0 anacron[234588]: Will run job `cron.weekly' in 55 min.
Dec 13 04:01:01 compute-0 anacron[234588]: Will run job `cron.monthly' in 75 min.
Dec 13 04:01:01 compute-0 anacron[234588]: Jobs will be executed sequentially
Dec 13 04:01:01 compute-0 run-parts[234590]: (/etc/cron.hourly) finished 0anacron
Dec 13 04:01:01 compute-0 CROND[234575]: (root) CMDEND (run-parts /etc/cron.hourly)
Dec 13 04:01:01 compute-0 sudo[234572]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:02 compute-0 ceph-mon[75071]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 0 B/s wr, 5 op/s
Dec 13 04:01:02 compute-0 sudo[234740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hspdqlhrrarfncduoqfkrgeppzlredsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598461.9732301-918-50882488911093/AnsiballZ_command.py'
Dec 13 04:01:02 compute-0 sudo[234740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:02 compute-0 python3.9[234742]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:01:02 compute-0 sudo[234740]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:02 compute-0 sudo[234893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwfkqqriqbmbemfkqdksdotcmmhvxldf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598462.5593414-918-59458140381060/AnsiballZ_command.py'
Dec 13 04:01:02 compute-0 sudo[234893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:02 compute-0 python3.9[234895]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:01:02 compute-0 sudo[234893]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:03 compute-0 sudo[235056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiuccqrachmutpoiybhipzdfltpekpvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598463.103477-918-144837212645953/AnsiballZ_command.py'
Dec 13 04:01:03 compute-0 sudo[235056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:03 compute-0 podman[235020]: 2025-12-13 04:01:03.440932773 +0000 UTC m=+0.101022070 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:01:03 compute-0 python3.9[235060]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec 13 04:01:03 compute-0 sudo[235056]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:04 compute-0 ceph-mon[75071]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:04 compute-0 sudo[235224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgxhwecgjbxhqaaiapxwvjmyochkfmyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598464.5128891-997-109565693883773/AnsiballZ_file.py'
Dec 13 04:01:04 compute-0 sudo[235224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:04 compute-0 python3.9[235226]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:04 compute-0 sudo[235224]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:05 compute-0 sudo[235376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptdkwufroledxlvqlnfhwxdummzlttkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598465.0801551-997-264530860586507/AnsiballZ_file.py'
Dec 13 04:01:05 compute-0 sudo[235376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:05 compute-0 python3.9[235378]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:05 compute-0 sudo[235376]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:05 compute-0 sudo[235528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcebmstxfbcygvivxxxtadkpqswtaacl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598465.6734238-997-162416901235830/AnsiballZ_file.py'
Dec 13 04:01:05 compute-0 sudo[235528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:06 compute-0 python3.9[235530]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:06 compute-0 sudo[235528]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:06 compute-0 ceph-mon[75071]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:06 compute-0 sudo[235680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpximgxkjaxbfflgxmjmuwpwxzqcvigt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598466.390674-1019-26669489738299/AnsiballZ_file.py'
Dec 13 04:01:06 compute-0 sudo[235680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:06 compute-0 python3.9[235682]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:06 compute-0 sudo[235680]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:07 compute-0 sudo[235832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipcrannzaqfmurabhlwaebrcqkmlgqsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598466.9996274-1019-55138253813953/AnsiballZ_file.py'
Dec 13 04:01:07 compute-0 sudo[235832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:07 compute-0 python3.9[235834]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:07 compute-0 sudo[235832]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:07 compute-0 sudo[235984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovtljwnyfpoavryoarbepnccbkyqxwfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598467.568223-1019-18421468978200/AnsiballZ_file.py'
Dec 13 04:01:07 compute-0 sudo[235984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:08 compute-0 python3.9[235986]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:08 compute-0 sudo[235984]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:08 compute-0 ceph-mon[75071]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:08 compute-0 sudo[236136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfplxxnwfozdqpnrezhixbmsziouskey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598468.1388845-1019-153170779619491/AnsiballZ_file.py'
Dec 13 04:01:08 compute-0 sudo[236136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:08 compute-0 python3.9[236138]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:08 compute-0 sudo[236136]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:08 compute-0 sudo[236238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:01:08 compute-0 sudo[236238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:08 compute-0 sudo[236238]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:08 compute-0 sudo[236280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:01:08 compute-0 sudo[236280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:08 compute-0 sudo[236338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aovmjmjvmaqjdifqhiadhwlxqsgyzamc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598468.7078872-1019-127346799033273/AnsiballZ_file.py'
Dec 13 04:01:08 compute-0 sudo[236338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:09 compute-0 python3.9[236340]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:09 compute-0 sudo[236338]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:09 compute-0 sudo[236280]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:01:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:01:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:01:09 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:01:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:01:09 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:01:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:01:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:01:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:01:09 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:01:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:01:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:01:09 compute-0 sudo[236493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:01:09 compute-0 sudo[236493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:09 compute-0 sudo[236493]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:09 compute-0 sudo[236543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvjxjlcdnvgnjlfhtivpaptxighwqhji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598469.2739425-1019-132483933565874/AnsiballZ_file.py'
Dec 13 04:01:09 compute-0 sudo[236543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:09 compute-0 sudo[236546]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:01:09 compute-0 sudo[236546]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:09 compute-0 python3.9[236548]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:09 compute-0 sudo[236543]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:09 compute-0 podman[236604]: 2025-12-13 04:01:09.805389188 +0000 UTC m=+0.038919300 container create c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dhawan, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:01:09 compute-0 systemd[1]: Started libpod-conmon-c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885.scope.
Dec 13 04:01:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:09 compute-0 podman[236604]: 2025-12-13 04:01:09.788015394 +0000 UTC m=+0.021545526 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:09 compute-0 podman[236604]: 2025-12-13 04:01:09.895705175 +0000 UTC m=+0.129235367 container init c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dhawan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:01:09 compute-0 podman[236604]: 2025-12-13 04:01:09.903182559 +0000 UTC m=+0.136712671 container start c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dhawan, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:01:09 compute-0 podman[236604]: 2025-12-13 04:01:09.906779677 +0000 UTC m=+0.140309789 container attach c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:01:09 compute-0 romantic_dhawan[236648]: 167 167
Dec 13 04:01:09 compute-0 systemd[1]: libpod-c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885.scope: Deactivated successfully.
Dec 13 04:01:09 compute-0 podman[236604]: 2025-12-13 04:01:09.909023478 +0000 UTC m=+0.142553590 container died c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:01:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-26c8743fce15e8e09049f1d44aed4199bab04732578af15ec2058eb710b7012e-merged.mount: Deactivated successfully.
Dec 13 04:01:09 compute-0 podman[236604]: 2025-12-13 04:01:09.946713453 +0000 UTC m=+0.180243555 container remove c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:01:09 compute-0 systemd[1]: libpod-conmon-c797b856aefb66225e1bdea1715cf4045ca1942b7f36170198007a4d94fe1885.scope: Deactivated successfully.
Dec 13 04:01:10 compute-0 sudo[236780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccfprsmmuvyddobqtbvllluwkdfxpty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598469.8384657-1019-234639848449300/AnsiballZ_file.py'
Dec 13 04:01:10 compute-0 sudo[236780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:10 compute-0 podman[236757]: 2025-12-13 04:01:10.102414361 +0000 UTC m=+0.047580876 container create 9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_varahamihira, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:01:10 compute-0 systemd[1]: Started libpod-conmon-9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e.scope.
Dec 13 04:01:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27d5ae3bd621509bb8e4273d5da6d251f226e115654ad16c150a961e8935a19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27d5ae3bd621509bb8e4273d5da6d251f226e115654ad16c150a961e8935a19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27d5ae3bd621509bb8e4273d5da6d251f226e115654ad16c150a961e8935a19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27d5ae3bd621509bb8e4273d5da6d251f226e115654ad16c150a961e8935a19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c27d5ae3bd621509bb8e4273d5da6d251f226e115654ad16c150a961e8935a19/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:10 compute-0 podman[236757]: 2025-12-13 04:01:10.167254946 +0000 UTC m=+0.112421481 container init 9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_varahamihira, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:01:10 compute-0 podman[236757]: 2025-12-13 04:01:10.085603404 +0000 UTC m=+0.030769939 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:10 compute-0 podman[236757]: 2025-12-13 04:01:10.179349404 +0000 UTC m=+0.124515909 container start 9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_varahamihira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:01:10 compute-0 podman[236757]: 2025-12-13 04:01:10.183485687 +0000 UTC m=+0.128652232 container attach 9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_varahamihira, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:01:10 compute-0 ceph-mon[75071]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:01:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:01:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:01:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:01:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:01:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:01:10 compute-0 python3.9[236789]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:10 compute-0 sudo[236780]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:10 compute-0 podman[236798]: 2025-12-13 04:01:10.356920896 +0000 UTC m=+0.050544566 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:01:10 compute-0 wonderful_varahamihira[236793]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:01:10 compute-0 wonderful_varahamihira[236793]: --> All data devices are unavailable
Dec 13 04:01:10 compute-0 systemd[1]: libpod-9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e.scope: Deactivated successfully.
Dec 13 04:01:10 compute-0 podman[236757]: 2025-12-13 04:01:10.642079367 +0000 UTC m=+0.587245902 container died 9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_varahamihira, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:01:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-c27d5ae3bd621509bb8e4273d5da6d251f226e115654ad16c150a961e8935a19-merged.mount: Deactivated successfully.
Dec 13 04:01:10 compute-0 podman[236757]: 2025-12-13 04:01:10.681849649 +0000 UTC m=+0.627016154 container remove 9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:01:10 compute-0 systemd[1]: libpod-conmon-9a39564496f86c0eb28bd10c28beef790f48f8a48fdddae6a0b933a3f635d13e.scope: Deactivated successfully.
Dec 13 04:01:10 compute-0 sudo[236546]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:10 compute-0 sudo[236868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:01:10 compute-0 sudo[236868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:10 compute-0 sudo[236868]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:10 compute-0 sudo[236893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:01:10 compute-0 sudo[236893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:11 compute-0 podman[236930]: 2025-12-13 04:01:11.10664981 +0000 UTC m=+0.043058332 container create 5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curran, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:01:11 compute-0 systemd[1]: Started libpod-conmon-5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e.scope.
Dec 13 04:01:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:11 compute-0 podman[236930]: 2025-12-13 04:01:11.086092321 +0000 UTC m=+0.022500823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:11 compute-0 podman[236930]: 2025-12-13 04:01:11.188502008 +0000 UTC m=+0.124910540 container init 5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:01:11 compute-0 podman[236930]: 2025-12-13 04:01:11.194770029 +0000 UTC m=+0.131178531 container start 5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curran, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:01:11 compute-0 podman[236930]: 2025-12-13 04:01:11.19811751 +0000 UTC m=+0.134526012 container attach 5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curran, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:01:11 compute-0 strange_curran[236946]: 167 167
Dec 13 04:01:11 compute-0 systemd[1]: libpod-5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e.scope: Deactivated successfully.
Dec 13 04:01:11 compute-0 podman[236930]: 2025-12-13 04:01:11.199720773 +0000 UTC m=+0.136129275 container died 5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curran, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-a4362667440836a3220f70c34dfdef136376dbd0fef58c3965def873ed382cea-merged.mount: Deactivated successfully.
Dec 13 04:01:11 compute-0 podman[236930]: 2025-12-13 04:01:11.235658831 +0000 UTC m=+0.172067333 container remove 5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_curran, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:01:11 compute-0 systemd[1]: libpod-conmon-5d326374159b2cc50e0967004ab6e929a717e98979a78fb334ce4eb784eccf9e.scope: Deactivated successfully.
Dec 13 04:01:11 compute-0 podman[236970]: 2025-12-13 04:01:11.367613012 +0000 UTC m=+0.033418220 container create 545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:01:11 compute-0 systemd[1]: Started libpod-conmon-545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096.scope.
Dec 13 04:01:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bfb43329a419ecf924bbd1f7d4826bbf8bfb7012114020591d05b661c14928/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bfb43329a419ecf924bbd1f7d4826bbf8bfb7012114020591d05b661c14928/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bfb43329a419ecf924bbd1f7d4826bbf8bfb7012114020591d05b661c14928/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6bfb43329a419ecf924bbd1f7d4826bbf8bfb7012114020591d05b661c14928/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:11 compute-0 podman[236970]: 2025-12-13 04:01:11.42633972 +0000 UTC m=+0.092144958 container init 545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lalande, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:01:11 compute-0 podman[236970]: 2025-12-13 04:01:11.432526328 +0000 UTC m=+0.098331536 container start 545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lalande, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:01:11 compute-0 podman[236970]: 2025-12-13 04:01:11.435551382 +0000 UTC m=+0.101356600 container attach 545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lalande, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:01:11 compute-0 podman[236970]: 2025-12-13 04:01:11.352716287 +0000 UTC m=+0.018521525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:11 compute-0 romantic_lalande[236987]: {
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:     "0": [
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:         {
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "devices": [
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "/dev/loop3"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             ],
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_name": "ceph_lv0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_size": "21470642176",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "name": "ceph_lv0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "tags": {
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cluster_name": "ceph",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.crush_device_class": "",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.encrypted": "0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.objectstore": "bluestore",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osd_id": "0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.type": "block",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.vdo": "0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.with_tpm": "0"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             },
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "type": "block",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "vg_name": "ceph_vg0"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:         }
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:     ],
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:     "1": [
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:         {
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "devices": [
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "/dev/loop4"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             ],
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_name": "ceph_lv1",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_size": "21470642176",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "name": "ceph_lv1",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "tags": {
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cluster_name": "ceph",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.crush_device_class": "",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.encrypted": "0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.objectstore": "bluestore",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osd_id": "1",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.type": "block",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.vdo": "0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.with_tpm": "0"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             },
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "type": "block",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "vg_name": "ceph_vg1"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:         }
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:     ],
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:     "2": [
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:         {
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "devices": [
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "/dev/loop5"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             ],
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_name": "ceph_lv2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_size": "21470642176",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "name": "ceph_lv2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "tags": {
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.cluster_name": "ceph",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.crush_device_class": "",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.encrypted": "0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.objectstore": "bluestore",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osd_id": "2",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.type": "block",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.vdo": "0",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:                 "ceph.with_tpm": "0"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             },
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "type": "block",
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:             "vg_name": "ceph_vg2"
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:         }
Dec 13 04:01:11 compute-0 romantic_lalande[236987]:     ]
Dec 13 04:01:11 compute-0 romantic_lalande[236987]: }
Dec 13 04:01:11 compute-0 systemd[1]: libpod-545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096.scope: Deactivated successfully.
Dec 13 04:01:11 compute-0 podman[236970]: 2025-12-13 04:01:11.727169668 +0000 UTC m=+0.392974896 container died 545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:01:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6bfb43329a419ecf924bbd1f7d4826bbf8bfb7012114020591d05b661c14928-merged.mount: Deactivated successfully.
Dec 13 04:01:11 compute-0 podman[236970]: 2025-12-13 04:01:11.771217186 +0000 UTC m=+0.437022394 container remove 545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 04:01:11 compute-0 systemd[1]: libpod-conmon-545a1f2d23409af221b97925050725b183377c52cecdcd51486c6116569a7096.scope: Deactivated successfully.
Dec 13 04:01:11 compute-0 sudo[236893]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:11 compute-0 sudo[237007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:01:11 compute-0 sudo[237007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:11 compute-0 sudo[237007]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:11 compute-0 sudo[237032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:01:11 compute-0 sudo[237032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:12 compute-0 podman[237069]: 2025-12-13 04:01:12.204481607 +0000 UTC m=+0.036073533 container create e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:01:12 compute-0 ceph-mon[75071]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:12 compute-0 systemd[1]: Started libpod-conmon-e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f.scope.
Dec 13 04:01:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:12 compute-0 podman[237069]: 2025-12-13 04:01:12.269589669 +0000 UTC m=+0.101181625 container init e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 04:01:12 compute-0 podman[237069]: 2025-12-13 04:01:12.275184811 +0000 UTC m=+0.106776737 container start e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True)
Dec 13 04:01:12 compute-0 nice_ishizaka[237086]: 167 167
Dec 13 04:01:12 compute-0 systemd[1]: libpod-e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f.scope: Deactivated successfully.
Dec 13 04:01:12 compute-0 podman[237069]: 2025-12-13 04:01:12.279456048 +0000 UTC m=+0.111047974 container attach e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:01:12 compute-0 conmon[237086]: conmon e99ebfd5f992f15220b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f.scope/container/memory.events
Dec 13 04:01:12 compute-0 podman[237069]: 2025-12-13 04:01:12.28102118 +0000 UTC m=+0.112613146 container died e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:01:12 compute-0 podman[237069]: 2025-12-13 04:01:12.188698397 +0000 UTC m=+0.020290343 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-76fdd1de8c549f803b35b9f5d1cd645358e3835b3fe1c883dc06316d0fc29465-merged.mount: Deactivated successfully.
Dec 13 04:01:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:12 compute-0 podman[237069]: 2025-12-13 04:01:12.327566987 +0000 UTC m=+0.159158913 container remove e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_ishizaka, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:01:12 compute-0 systemd[1]: libpod-conmon-e99ebfd5f992f15220b68a6b82d0c299ecf5437c9ecab737fc77e54e315c817f.scope: Deactivated successfully.
Dec 13 04:01:12 compute-0 podman[237108]: 2025-12-13 04:01:12.474077744 +0000 UTC m=+0.039159957 container create 569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:01:12 compute-0 systemd[1]: Started libpod-conmon-569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8.scope.
Dec 13 04:01:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f90b7ff9d92c6688905bed6ad8cac064fc23438836052af59b367ee6b108a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f90b7ff9d92c6688905bed6ad8cac064fc23438836052af59b367ee6b108a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f90b7ff9d92c6688905bed6ad8cac064fc23438836052af59b367ee6b108a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f90b7ff9d92c6688905bed6ad8cac064fc23438836052af59b367ee6b108a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:12 compute-0 podman[237108]: 2025-12-13 04:01:12.552127318 +0000 UTC m=+0.117209551 container init 569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hellman, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:01:12 compute-0 podman[237108]: 2025-12-13 04:01:12.457159343 +0000 UTC m=+0.022241566 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:01:12 compute-0 podman[237108]: 2025-12-13 04:01:12.559105058 +0000 UTC m=+0.124187261 container start 569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:01:12 compute-0 podman[237108]: 2025-12-13 04:01:12.562354427 +0000 UTC m=+0.127436640 container attach 569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hellman, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:01:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:13 compute-0 lvm[237202]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:01:13 compute-0 lvm[237202]: VG ceph_vg0 finished
Dec 13 04:01:13 compute-0 lvm[237203]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:01:13 compute-0 lvm[237203]: VG ceph_vg1 finished
Dec 13 04:01:13 compute-0 lvm[237205]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:01:13 compute-0 lvm[237205]: VG ceph_vg2 finished
Dec 13 04:01:13 compute-0 nervous_hellman[237124]: {}
Dec 13 04:01:13 compute-0 systemd[1]: libpod-569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8.scope: Deactivated successfully.
Dec 13 04:01:13 compute-0 systemd[1]: libpod-569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8.scope: Consumed 1.222s CPU time.
Dec 13 04:01:13 compute-0 podman[237208]: 2025-12-13 04:01:13.368502586 +0000 UTC m=+0.028674312 container died 569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:01:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-94f90b7ff9d92c6688905bed6ad8cac064fc23438836052af59b367ee6b108a3-merged.mount: Deactivated successfully.
Dec 13 04:01:13 compute-0 podman[237208]: 2025-12-13 04:01:13.511926418 +0000 UTC m=+0.172098134 container remove 569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_hellman, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:01:13 compute-0 systemd[1]: libpod-conmon-569d6ebfcd6a085de6dca86787669da0ef09ec97ba4ed9baf1c5be45d28cbdd8.scope: Deactivated successfully.
Dec 13 04:01:13 compute-0 sudo[237032]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:01:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:01:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:01:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:01:13 compute-0 sudo[237223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:01:13 compute-0 sudo[237223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:01:13 compute-0 sudo[237223]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:14 compute-0 ceph-mon[75071]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:01:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:01:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:15 compute-0 sudo[237373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojjwxcinifhkirweoypmprlcpkupfowv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598474.901261-1208-112914861087886/AnsiballZ_getent.py'
Dec 13 04:01:15 compute-0 sudo[237373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:15 compute-0 python3.9[237375]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec 13 04:01:15 compute-0 sudo[237373]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:16 compute-0 sudo[237526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acrxrtezogiwinhlpnzzxvimqxooudgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598475.692036-1216-112302754395305/AnsiballZ_group.py'
Dec 13 04:01:16 compute-0 sudo[237526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:16 compute-0 ceph-mon[75071]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:16 compute-0 python3.9[237528]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec 13 04:01:16 compute-0 groupadd[237529]: group added to /etc/group: name=nova, GID=42436
Dec 13 04:01:16 compute-0 groupadd[237529]: group added to /etc/gshadow: name=nova
Dec 13 04:01:16 compute-0 groupadd[237529]: new group: name=nova, GID=42436
Dec 13 04:01:16 compute-0 sudo[237526]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:17 compute-0 sudo[237684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jplhocqglfnrsumjiisztihxrackqxft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598476.5635304-1224-107654628029935/AnsiballZ_user.py'
Dec 13 04:01:17 compute-0 sudo[237684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:17 compute-0 python3.9[237686]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Dec 13 04:01:17 compute-0 useradd[237688]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Dec 13 04:01:17 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:01:17 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:01:17 compute-0 useradd[237688]: add 'nova' to group 'libvirt'
Dec 13 04:01:17 compute-0 useradd[237688]: add 'nova' to shadow group 'libvirt'
Dec 13 04:01:17 compute-0 sudo[237684]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:18 compute-0 sshd-session[237720]: Accepted publickey for zuul from 192.168.122.30 port 43786 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 04:01:18 compute-0 systemd-logind[796]: New session 51 of user zuul.
Dec 13 04:01:18 compute-0 systemd[1]: Started Session 51 of User zuul.
Dec 13 04:01:18 compute-0 sshd-session[237720]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 04:01:18 compute-0 ceph-mon[75071]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:18 compute-0 sshd-session[237723]: Received disconnect from 192.168.122.30 port 43786:11: disconnected by user
Dec 13 04:01:18 compute-0 sshd-session[237723]: Disconnected from user zuul 192.168.122.30 port 43786
Dec 13 04:01:18 compute-0 sshd-session[237720]: pam_unix(sshd:session): session closed for user zuul
Dec 13 04:01:18 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Dec 13 04:01:18 compute-0 systemd-logind[796]: Session 51 logged out. Waiting for processes to exit.
Dec 13 04:01:18 compute-0 systemd-logind[796]: Removed session 51.
Dec 13 04:01:18 compute-0 python3.9[237873]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:19 compute-0 python3.9[237994]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598478.503363-1249-27179519759612/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:20 compute-0 python3.9[238144]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:20 compute-0 ceph-mon[75071]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:20 compute-0 python3.9[238220]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:21 compute-0 python3.9[238370]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:21 compute-0 python3.9[238491]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598480.7284803-1249-3638798571108/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:21 compute-0 ceph-mon[75071]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:22 compute-0 python3.9[238641]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:23 compute-0 python3.9[238762]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598481.8653653-1249-48654750241335/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:23 compute-0 podman[238886]: 2025-12-13 04:01:23.549467504 +0000 UTC m=+0.065746961 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:01:23 compute-0 python3.9[238923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:24 compute-0 ceph-mon[75071]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:24 compute-0 python3.9[239053]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598483.2404375-1249-143916501126965/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:24 compute-0 python3.9[239203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:25 compute-0 python3.9[239324]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598484.3724031-1249-103082161194502/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:25 compute-0 sudo[239474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvuhrzxiopsxnnveunawbywcmajdhkfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598485.6311665-1332-173325645857661/AnsiballZ_file.py'
Dec 13 04:01:25 compute-0 sudo[239474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:26 compute-0 python3.9[239476]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:01:26 compute-0 sudo[239474]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:26 compute-0 ceph-mon[75071]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:26 compute-0 sudo[239626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpkxrbgshmwpebyvdajwkwiscclmovxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598486.299521-1340-84004892781013/AnsiballZ_copy.py'
Dec 13 04:01:26 compute-0 sudo[239626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:26 compute-0 python3.9[239628]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:01:26 compute-0 sudo[239626]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:27 compute-0 sudo[239778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwpsgmxwisxqihkobwzfmcefxjftwlct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598486.9235356-1348-168089570080270/AnsiballZ_stat.py'
Dec 13 04:01:27 compute-0 sudo[239778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:27 compute-0 python3.9[239780]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:01:27 compute-0 sudo[239778]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:27 compute-0 sudo[239930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uisvkhtcxvfltkmzrnejpicqvtpdtbos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598487.5190766-1356-85824422252562/AnsiballZ_stat.py'
Dec 13 04:01:27 compute-0 sudo[239930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:27 compute-0 python3.9[239932]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:27 compute-0 sudo[239930]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:28 compute-0 ceph-mon[75071]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:28 compute-0 sudo[240053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpshgxgxwcaaagwfbapkbonhggukwudm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598487.5190766-1356-85824422252562/AnsiballZ_copy.py'
Dec 13 04:01:28 compute-0 sudo[240053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:28 compute-0 python3.9[240055]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765598487.5190766-1356-85824422252562/.source _original_basename=.4p48q8fx follow=False checksum=60a0b178bf8a4f7c233b6fb582d8e61860a0efbe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Dec 13 04:01:28 compute-0 sudo[240053]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:29 compute-0 python3.9[240207]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:01:29 compute-0 python3.9[240359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:30 compute-0 ceph-mon[75071]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:30 compute-0 python3.9[240480]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598489.3871524-1382-231333743780707/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:30 compute-0 python3.9[240630]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec 13 04:01:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:31 compute-0 python3.9[240751]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765598490.5137708-1397-60962248254348/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec 13 04:01:32 compute-0 sudo[240901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouocrjwwetimgyuwsehjoqswmxedfsnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598491.7739174-1414-196774732254524/AnsiballZ_container_config_data.py'
Dec 13 04:01:32 compute-0 sudo[240901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:32 compute-0 ceph-mon[75071]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:32 compute-0 python3.9[240903]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec 13 04:01:32 compute-0 sudo[240901]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:32 compute-0 sudo[241053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrwukojnbholoomholoohnhdhhrrgjrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598492.4806316-1423-238401281853160/AnsiballZ_container_config_hash.py'
Dec 13 04:01:32 compute-0 sudo[241053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:32 compute-0 python3.9[241055]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 04:01:32 compute-0 sudo[241053]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:33 compute-0 sudo[241219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvxcyetxsxpstuymkgqdxqdoiqhlkuwr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765598493.2618551-1433-227665649774205/AnsiballZ_edpm_container_manage.py'
Dec 13 04:01:33 compute-0 sudo[241219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:33 compute-0 podman[241179]: 2025-12-13 04:01:33.560874818 +0000 UTC m=+0.082934919 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:01:33 compute-0 python3[241225]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 04:01:34 compute-0 ceph-mon[75071]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:01:35.074 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:01:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:01:35.075 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:01:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:01:35.075 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:01:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:36 compute-0 ceph-mon[75071]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:37 compute-0 ceph-mon[75071]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:01:40
Dec 13 04:01:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:01:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:01:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'default.rgw.meta', 'volumes', 'images']
Dec 13 04:01:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:01:40 compute-0 ceph-mon[75071]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:01:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:01:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:43 compute-0 ceph-mon[75071]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:44 compute-0 ceph-mon[75071]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:44 compute-0 podman[241303]: 2025-12-13 04:01:44.484910592 +0000 UTC m=+3.626659793 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:01:44 compute-0 podman[241246]: 2025-12-13 04:01:44.542695168 +0000 UTC m=+10.688450618 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 13 04:01:44 compute-0 podman[241348]: 2025-12-13 04:01:44.667550253 +0000 UTC m=+0.044866342 container create acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.license=GPLv2, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251202)
Dec 13 04:01:44 compute-0 podman[241348]: 2025-12-13 04:01:44.643463342 +0000 UTC m=+0.020779461 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 13 04:01:44 compute-0 python3[241225]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Dec 13 04:01:44 compute-0 sudo[241219]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:45 compute-0 sudo[241536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwngmpopkzrbxydzqrjasprqvlkepzik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598504.9348884-1441-260718301759234/AnsiballZ_stat.py'
Dec 13 04:01:45 compute-0 sudo[241536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:45 compute-0 python3.9[241538]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:01:45 compute-0 sudo[241536]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:46 compute-0 sudo[241690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucixzvsmqymbxrifgnukvhlustekblja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598505.7887259-1453-920707936370/AnsiballZ_container_config_data.py'
Dec 13 04:01:46 compute-0 sudo[241690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:46 compute-0 python3.9[241692]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec 13 04:01:46 compute-0 sudo[241690]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:46 compute-0 ceph-mon[75071]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:46 compute-0 sudo[241842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jltegbwvodbahmunazbaawuoovgrdvbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598506.5188267-1462-257334175742617/AnsiballZ_container_config_hash.py'
Dec 13 04:01:46 compute-0 sudo[241842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:46 compute-0 python3.9[241844]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec 13 04:01:47 compute-0 sudo[241842]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:47 compute-0 sudo[241994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-accrpdqnzepghxuqmkgcimyvhksyceih ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1765598507.29114-1472-143851249985046/AnsiballZ_edpm_container_manage.py'
Dec 13 04:01:47 compute-0 sudo[241994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:47 compute-0 python3[241996]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec 13 04:01:47 compute-0 podman[242033]: 2025-12-13 04:01:47.996170447 +0000 UTC m=+0.051421632 container create f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=nova_compute)
Dec 13 04:01:47 compute-0 podman[242033]: 2025-12-13 04:01:47.969607908 +0000 UTC m=+0.024859143 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec 13 04:01:48 compute-0 python3[241996]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec 13 04:01:48 compute-0 sudo[241994]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:48 compute-0 ceph-mon[75071]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:48 compute-0 sudo[242221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtlrgkzbwrkszyottoorfxoippcsxrep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598508.2870116-1480-184717314592519/AnsiballZ_stat.py'
Dec 13 04:01:48 compute-0 sudo[242221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:48 compute-0 python3.9[242223]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:01:48 compute-0 sudo[242221]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:49 compute-0 sudo[242375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkufcrtyyntstscqtvrlydmphzxjazij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598509.0193448-1489-213882492932605/AnsiballZ_file.py'
Dec 13 04:01:49 compute-0 sudo[242375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:49 compute-0 python3.9[242377]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:01:49 compute-0 sudo[242375]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:49 compute-0 sudo[242526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqxnpkkoejdmmgepadrbzcvdmlcjwucy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598509.5431628-1489-273965281561190/AnsiballZ_copy.py'
Dec 13 04:01:49 compute-0 sudo[242526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:50 compute-0 python3.9[242528]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765598509.5431628-1489-273965281561190/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec 13 04:01:50 compute-0 sudo[242526]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:50 compute-0 sudo[242602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djkskqqbbttvqedfnjvxkpiexpgjgnvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598509.5431628-1489-273965281561190/AnsiballZ_systemd.py'
Dec 13 04:01:50 compute-0 sudo[242602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:50 compute-0 ceph-mon[75071]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:50 compute-0 python3.9[242604]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec 13 04:01:50 compute-0 systemd[1]: Reloading.
Dec 13 04:01:50 compute-0 systemd-rc-local-generator[242632]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:01:50 compute-0 systemd-sysv-generator[242635]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:01:50 compute-0 sudo[242602]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:51 compute-0 sudo[242713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rifcjochikloxndqjmmyezmhmmujfnoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598509.5431628-1489-273965281561190/AnsiballZ_systemd.py'
Dec 13 04:01:51 compute-0 sudo[242713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:51 compute-0 python3.9[242715]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec 13 04:01:51 compute-0 systemd[1]: Reloading.
Dec 13 04:01:51 compute-0 ceph-mon[75071]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:51 compute-0 systemd-sysv-generator[242748]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec 13 04:01:51 compute-0 systemd-rc-local-generator[242745]: /etc/rc.d/rc.local is not marked executable, skipping.
Dec 13 04:01:51 compute-0 systemd[1]: Starting nova_compute container...
Dec 13 04:01:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:51 compute-0 podman[242755]: 2025-12-13 04:01:51.975415242 +0000 UTC m=+0.084683134 container init f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:01:51 compute-0 podman[242755]: 2025-12-13 04:01:51.98700445 +0000 UTC m=+0.096272322 container start f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:01:51 compute-0 podman[242755]: nova_compute
Dec 13 04:01:51 compute-0 nova_compute[242770]: + sudo -E kolla_set_configs
Dec 13 04:01:51 compute-0 systemd[1]: Started nova_compute container.
Dec 13 04:01:52 compute-0 sudo[242713]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Validating config file
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying service configuration files
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Deleting /etc/ceph
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Creating directory /etc/ceph
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Writing out command to execute
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:01:52 compute-0 nova_compute[242770]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 04:01:52 compute-0 nova_compute[242770]: ++ cat /run_command
Dec 13 04:01:52 compute-0 nova_compute[242770]: + CMD=nova-compute
Dec 13 04:01:52 compute-0 nova_compute[242770]: + ARGS=
Dec 13 04:01:52 compute-0 nova_compute[242770]: + sudo kolla_copy_cacerts
Dec 13 04:01:52 compute-0 nova_compute[242770]: + [[ ! -n '' ]]
Dec 13 04:01:52 compute-0 nova_compute[242770]: + . kolla_extend_start
Dec 13 04:01:52 compute-0 nova_compute[242770]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 04:01:52 compute-0 nova_compute[242770]: Running command: 'nova-compute'
Dec 13 04:01:52 compute-0 nova_compute[242770]: + umask 0022
Dec 13 04:01:52 compute-0 nova_compute[242770]: + exec nova-compute
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:01:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:01:52 compute-0 python3.9[242931]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:01:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:53 compute-0 python3.9[243082]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:01:53 compute-0 podman[243147]: 2025-12-13 04:01:53.902414471 +0000 UTC m=+0.052196794 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:01:54 compute-0 python3.9[243252]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec 13 04:01:54 compute-0 nova_compute[242770]: 2025-12-13 04:01:54.300 242774 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 04:01:54 compute-0 nova_compute[242770]: 2025-12-13 04:01:54.300 242774 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 04:01:54 compute-0 nova_compute[242770]: 2025-12-13 04:01:54.300 242774 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 04:01:54 compute-0 nova_compute[242770]: 2025-12-13 04:01:54.301 242774 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 13 04:01:54 compute-0 ceph-mon[75071]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:54 compute-0 nova_compute[242770]: 2025-12-13 04:01:54.450 242774 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:01:54 compute-0 nova_compute[242770]: 2025-12-13 04:01:54.474 242774 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:01:54 compute-0 nova_compute[242770]: 2025-12-13 04:01:54.474 242774 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 13 04:01:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:01:55 compute-0 sudo[243406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndcbsjohlfqwzpxyslccwsshuezcqnek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598514.5377955-1549-126711984502659/AnsiballZ_podman_container.py'
Dec 13 04:01:55 compute-0 sudo[243406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.092 242774 INFO nova.virt.driver [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 13 04:01:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.221 242774 INFO nova.compute.provider_config [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.231 242774 DEBUG oslo_concurrency.lockutils [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.232 242774 DEBUG oslo_concurrency.lockutils [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.232 242774 DEBUG oslo_concurrency.lockutils [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.233 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.233 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.233 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.233 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.233 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.234 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.234 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.234 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.234 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.234 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.235 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.235 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.235 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.235 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.235 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.236 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.236 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.236 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.236 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.236 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.237 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.237 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.237 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.237 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.237 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.238 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.238 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.238 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.238 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.238 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.239 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.239 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.239 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.239 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.239 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.240 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.240 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.240 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.241 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.241 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.241 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.241 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.241 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.242 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.242 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.242 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.242 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.242 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.243 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.243 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.243 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.243 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.244 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.244 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.244 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.245 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.245 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.245 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.245 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.246 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.246 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.246 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.246 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.246 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.246 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.247 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.247 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.247 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.247 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.247 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.248 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.248 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.248 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.248 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.248 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.249 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.249 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.249 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.249 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.249 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.250 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.250 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.250 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.250 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.250 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.251 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.251 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.251 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.251 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.251 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.252 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.252 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.252 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.252 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.253 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.253 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.253 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.253 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.253 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.254 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.254 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.254 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.254 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.254 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.254 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.255 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.255 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.255 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.255 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.255 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.256 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.256 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.256 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.256 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.256 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.257 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.257 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.257 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.257 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.257 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.258 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.258 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.258 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.258 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.258 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.259 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.259 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.259 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.259 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.259 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.260 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.260 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.260 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.260 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.260 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.261 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.261 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.261 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.261 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.261 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.262 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.262 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.262 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.262 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.262 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.263 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.263 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.263 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.263 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.264 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.264 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.264 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.264 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.264 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.265 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.265 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.265 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.265 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.265 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.266 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.266 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.266 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.266 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.267 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.267 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.267 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.267 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.267 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.268 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.268 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.268 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.268 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.268 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.269 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.269 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.269 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.269 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.269 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.270 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.270 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.270 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.270 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.271 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.271 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.271 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.271 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.271 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.272 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.272 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.272 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.272 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.272 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.273 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.273 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.273 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.273 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.273 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.274 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.274 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.274 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.275 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.275 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.275 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.275 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.275 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.276 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.276 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.276 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.276 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.276 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.277 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.277 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.277 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.277 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.277 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.278 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.278 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.278 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.278 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.278 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.279 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.279 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.279 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.279 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.279 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.280 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.280 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.280 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.280 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.280 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.281 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.281 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.281 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.281 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.281 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.282 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.282 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.282 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.282 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.282 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.283 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.283 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.283 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.283 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.283 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.284 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.284 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.284 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.284 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.284 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.285 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.285 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.285 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.285 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.285 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.286 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.286 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.286 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.286 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.286 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.287 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.287 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.287 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.287 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.287 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.288 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.288 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.288 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.288 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.288 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.289 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.289 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.289 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.289 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.289 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.290 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.290 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.290 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.290 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.290 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.291 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.291 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.291 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.291 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.292 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.292 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.292 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.292 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.292 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.292 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.293 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.293 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.293 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.293 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.293 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.294 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.294 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.294 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.294 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.294 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.295 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.295 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.295 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.295 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.296 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.296 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.296 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.296 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.296 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.297 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.297 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.297 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.297 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.297 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.298 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.298 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.298 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.298 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.298 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.298 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.299 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.299 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.299 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.299 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.299 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.300 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.300 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.300 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.300 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.300 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.301 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.301 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.301 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.301 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.301 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.302 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.302 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.302 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.302 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.302 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.303 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.303 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.303 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.303 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.303 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.304 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.304 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.304 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.304 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 python3.9[243408]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.304 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.304 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.304 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.305 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.305 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.305 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.305 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.305 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.306 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.306 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.306 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.306 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.306 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.306 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.306 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.307 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.307 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.307 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.307 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.307 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.307 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.307 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.308 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.308 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.308 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.308 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.308 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.308 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.309 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.309 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.309 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.309 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.309 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.309 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.309 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.310 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.310 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.310 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.310 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.310 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.310 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.310 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.311 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.311 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.311 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.311 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.311 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.311 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.311 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.312 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.313 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.313 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.313 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.313 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.313 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.313 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.313 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.314 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.314 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.314 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.314 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.314 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.314 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.315 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.315 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.315 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.315 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.315 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.315 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.315 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.316 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.316 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.316 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.316 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.316 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.316 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.317 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.317 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.317 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.317 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.317 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.317 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.317 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.318 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.318 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.318 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.318 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.318 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.318 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.318 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.319 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.319 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.319 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.319 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.319 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.319 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.319 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.320 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.320 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.320 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.320 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.320 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.321 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.321 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.321 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.321 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.321 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.321 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.321 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.322 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.322 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.322 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.322 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.322 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.322 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.322 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.323 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.323 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.323 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.323 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.323 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.323 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.324 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.324 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.324 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.324 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.324 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.324 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.324 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.325 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.325 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.325 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.325 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.325 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.325 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.325 242774 WARNING oslo_config.cfg [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 13 04:01:55 compute-0 nova_compute[242770]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 13 04:01:55 compute-0 nova_compute[242770]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 13 04:01:55 compute-0 nova_compute[242770]: and ``live_migration_inbound_addr`` respectively.
Dec 13 04:01:55 compute-0 nova_compute[242770]: ).  Its value may be silently ignored in the future.
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.326 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.326 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.326 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.326 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.326 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.326 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.327 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.327 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.327 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.327 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.327 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.327 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.327 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.328 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.328 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.328 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.328 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.328 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.328 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rbd_secret_uuid        = 437a9f04-06b7-56e3-8a4b-f52a1199dd32 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.328 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.329 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.329 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.329 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.329 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.329 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.329 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.329 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.330 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.330 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.330 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.330 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.330 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.330 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.331 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.331 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.331 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.331 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.331 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.331 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.331 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.332 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.333 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.333 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.333 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.333 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.333 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.333 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.334 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.335 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.335 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.335 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.335 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.335 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.335 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.335 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.336 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.336 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.336 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.336 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.336 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.336 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.336 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.337 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.337 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.337 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.337 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.337 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.337 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.337 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.338 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.338 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.338 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.338 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.338 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.338 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.339 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.339 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.339 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.339 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.339 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.339 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.340 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.340 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.340 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.340 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.340 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.340 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.340 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.341 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.341 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.341 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.341 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.341 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.341 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.341 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.342 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.342 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.342 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.342 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.342 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.342 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.342 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.343 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.343 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.343 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.343 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.343 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.343 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.343 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.344 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.344 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.344 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.344 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.344 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.344 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.344 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.345 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.345 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.345 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.345 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.345 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.345 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.346 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.346 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.346 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.346 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.346 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.346 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.347 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.347 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.347 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.347 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.347 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.347 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.347 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.348 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.348 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.348 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.348 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.348 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.348 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.349 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.349 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.349 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.349 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.349 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.349 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.349 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.350 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.351 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.351 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.351 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.351 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.351 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.351 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.351 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.352 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.352 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.352 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.352 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.352 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.353 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.353 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.353 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.353 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.353 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.353 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.353 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.354 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.354 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.354 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.354 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.354 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.354 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.354 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.355 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.355 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.355 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.355 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.355 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.356 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.356 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.356 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.356 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.356 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.356 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.356 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.357 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.357 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.357 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.357 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.357 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.357 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.357 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.358 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.358 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.358 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.358 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.358 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.358 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.359 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.359 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.359 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.359 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.359 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.359 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.360 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.360 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.360 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.360 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.360 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.360 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.361 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.361 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.361 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.361 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.361 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.361 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.362 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.362 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.362 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.362 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.362 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.362 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.362 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.363 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.363 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.363 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.363 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.363 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.364 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.364 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.364 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.364 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.364 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.365 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.365 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.365 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.365 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.365 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.365 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.365 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.366 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.366 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.366 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.366 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.366 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.366 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.366 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.367 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.367 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.367 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.367 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.367 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.367 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.367 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.368 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.368 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.368 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.368 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.368 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.368 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.369 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.369 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.369 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.369 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.369 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.370 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.370 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.370 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.370 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.370 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.370 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.371 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.371 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.371 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.371 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.371 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.372 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.372 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.372 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.372 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.372 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.373 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.373 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.373 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.373 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.373 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.373 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.373 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.374 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.374 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.374 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.374 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.374 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.374 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.374 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.375 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.375 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.375 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.375 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.375 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.375 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.375 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.376 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.377 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.377 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.377 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.377 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.377 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.377 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.378 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.378 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.378 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.378 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.378 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.378 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.379 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.379 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.379 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.379 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.379 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.379 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.379 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.380 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.381 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.381 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.381 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.381 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.381 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.381 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.381 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.382 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.382 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.382 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.382 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.382 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.382 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.382 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.383 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.383 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.383 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.383 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.383 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.383 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.383 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.384 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.384 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.384 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.384 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.384 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.384 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.384 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.385 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.385 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.385 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.385 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.385 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.385 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.385 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.386 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.386 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.386 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.386 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.386 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.386 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.386 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.387 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.387 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.387 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.387 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.387 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.387 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.387 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.388 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.388 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.388 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.388 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.388 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.388 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.388 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.389 242774 DEBUG oslo_service.service [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.391 242774 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.403 242774 DEBUG nova.virt.libvirt.host [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.404 242774 DEBUG nova.virt.libvirt.host [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.404 242774 DEBUG nova.virt.libvirt.host [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.404 242774 DEBUG nova.virt.libvirt.host [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 13 04:01:55 compute-0 sudo[243406]: pam_unix(sudo:session): session closed for user root
Dec 13 04:01:55 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Dec 13 04:01:55 compute-0 systemd[1]: Started libvirt QEMU daemon.
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.495 242774 DEBUG nova.virt.libvirt.host [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f609ec89eb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.498 242774 DEBUG nova.virt.libvirt.host [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f609ec89eb0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.499 242774 INFO nova.virt.libvirt.driver [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Connection event '1' reason 'None'
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.527 242774 WARNING nova.virt.libvirt.driver [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 04:01:55 compute-0 nova_compute[242770]: 2025-12-13 04:01:55.527 242774 DEBUG nova.virt.libvirt.volume.mount [None req-d7f3a5b0-4364-4993-9adb-aeab765fe703 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 13 04:01:55 compute-0 sudo[243630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyexushdwhqgdclndkhzmygwabksgvgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598515.581888-1557-1738258553112/AnsiballZ_systemd.py'
Dec 13 04:01:55 compute-0 sudo[243630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:01:56 compute-0 python3.9[243632]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec 13 04:01:56 compute-0 systemd[1]: Stopping nova_compute container...
Dec 13 04:01:56 compute-0 nova_compute[242770]: 2025-12-13 04:01:56.211 242774 DEBUG oslo_concurrency.lockutils [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:01:56 compute-0 nova_compute[242770]: 2025-12-13 04:01:56.211 242774 DEBUG oslo_concurrency.lockutils [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:01:56 compute-0 nova_compute[242770]: 2025-12-13 04:01:56.212 242774 DEBUG oslo_concurrency.lockutils [None req-2d9029af-7ac8-49e3-b8e8-f9837ac71260 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:01:56 compute-0 ceph-mon[75071]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:56 compute-0 virtqemud[243450]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec 13 04:01:56 compute-0 virtqemud[243450]: hostname: compute-0
Dec 13 04:01:56 compute-0 virtqemud[243450]: End of file while reading data: Input/output error
Dec 13 04:01:56 compute-0 systemd[1]: libpod-f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c.scope: Deactivated successfully.
Dec 13 04:01:56 compute-0 systemd[1]: libpod-f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c.scope: Consumed 3.199s CPU time.
Dec 13 04:01:56 compute-0 podman[243644]: 2025-12-13 04:01:56.732331722 +0000 UTC m=+0.558619127 container died f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:01:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:57 compute-0 ceph-mon[75071]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c-userdata-shm.mount: Deactivated successfully.
Dec 13 04:01:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528-merged.mount: Deactivated successfully.
Dec 13 04:01:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:01:59 compute-0 podman[243644]: 2025-12-13 04:01:59.382212175 +0000 UTC m=+3.208499580 container cleanup f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2)
Dec 13 04:01:59 compute-0 podman[243644]: nova_compute
Dec 13 04:01:59 compute-0 podman[243675]: nova_compute
Dec 13 04:01:59 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec 13 04:01:59 compute-0 systemd[1]: Stopped nova_compute container.
Dec 13 04:01:59 compute-0 systemd[1]: Starting nova_compute container...
Dec 13 04:01:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8ed0d4f4ca743b0b6b8b55823560d3d722d734a443a4559a09412cad8b61528/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec 13 04:01:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:00 compute-0 ceph-mon[75071]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:00 compute-0 podman[243688]: 2025-12-13 04:02:00.406485856 +0000 UTC m=+0.926066218 container init f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 04:02:00 compute-0 podman[243688]: 2025-12-13 04:02:00.412863452 +0000 UTC m=+0.932443784 container start f341d186ab544afaf3ef857b1f684e281070b476ca687f45e0c5f5bde0efb40c (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:02:00 compute-0 podman[243688]: nova_compute
Dec 13 04:02:00 compute-0 nova_compute[243704]: + sudo -E kolla_set_configs
Dec 13 04:02:00 compute-0 systemd[1]: Started nova_compute container.
Dec 13 04:02:00 compute-0 sudo[243630]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Validating config file
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying service configuration files
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /etc/ceph
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Creating directory /etc/ceph
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/ceph
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Writing out command to execute
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:02:00 compute-0 nova_compute[243704]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec 13 04:02:00 compute-0 nova_compute[243704]: ++ cat /run_command
Dec 13 04:02:00 compute-0 nova_compute[243704]: + CMD=nova-compute
Dec 13 04:02:00 compute-0 nova_compute[243704]: + ARGS=
Dec 13 04:02:00 compute-0 nova_compute[243704]: + sudo kolla_copy_cacerts
Dec 13 04:02:00 compute-0 nova_compute[243704]: + [[ ! -n '' ]]
Dec 13 04:02:00 compute-0 nova_compute[243704]: + . kolla_extend_start
Dec 13 04:02:00 compute-0 nova_compute[243704]: + echo 'Running command: '\''nova-compute'\'''
Dec 13 04:02:00 compute-0 nova_compute[243704]: Running command: 'nova-compute'
Dec 13 04:02:00 compute-0 nova_compute[243704]: + umask 0022
Dec 13 04:02:00 compute-0 nova_compute[243704]: + exec nova-compute
Dec 13 04:02:00 compute-0 sudo[243865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igubfrizqmweaaafcgupwewfczvpbged ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1765598520.6449342-1566-168627365175426/AnsiballZ_podman_container.py'
Dec 13 04:02:00 compute-0 sudo[243865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:02:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:01 compute-0 python3.9[243867]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec 13 04:02:01 compute-0 systemd[1]: Started libpod-conmon-acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700.scope.
Dec 13 04:02:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88f50b7be1640d646b12abb60fe81d812981d917f45f07cc3c63a83ae336802b/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88f50b7be1640d646b12abb60fe81d812981d917f45f07cc3c63a83ae336802b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88f50b7be1640d646b12abb60fe81d812981d917f45f07cc3c63a83ae336802b/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:01 compute-0 podman[243893]: 2025-12-13 04:02:01.360407688 +0000 UTC m=+0.115778157 container init acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:02:01 compute-0 podman[243893]: 2025-12-13 04:02:01.369124807 +0000 UTC m=+0.124495246 container start acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 04:02:01 compute-0 python3.9[243867]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Applying nova statedir ownership
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec 13 04:02:01 compute-0 nova_compute_init[243915]: INFO:nova_statedir:Nova statedir ownership complete
Dec 13 04:02:01 compute-0 systemd[1]: libpod-acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700.scope: Deactivated successfully.
Dec 13 04:02:01 compute-0 podman[243931]: 2025-12-13 04:02:01.460247228 +0000 UTC m=+0.022733245 container died acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec 13 04:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700-userdata-shm.mount: Deactivated successfully.
Dec 13 04:02:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-88f50b7be1640d646b12abb60fe81d812981d917f45f07cc3c63a83ae336802b-merged.mount: Deactivated successfully.
Dec 13 04:02:01 compute-0 podman[243931]: 2025-12-13 04:02:01.493696736 +0000 UTC m=+0.056182723 container cleanup acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:02:01 compute-0 sudo[243865]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:01 compute-0 systemd[1]: libpod-conmon-acc519915f8a004a4e36fba44f681a01b7645175175ce793a8ac5e3552e87700.scope: Deactivated successfully.
Dec 13 04:02:01 compute-0 sshd-session[214296]: Connection closed by 192.168.122.30 port 50234
Dec 13 04:02:01 compute-0 sshd-session[214293]: pam_unix(sshd:session): session closed for user zuul
Dec 13 04:02:01 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Dec 13 04:02:01 compute-0 systemd[1]: session-50.scope: Consumed 2min 16.002s CPU time.
Dec 13 04:02:01 compute-0 systemd-logind[796]: Session 50 logged out. Waiting for processes to exit.
Dec 13 04:02:01 compute-0 systemd-logind[796]: Removed session 50.
Dec 13 04:02:02 compute-0 ceph-mon[75071]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:02 compute-0 nova_compute[243704]: 2025-12-13 04:02:02.663 243708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 04:02:02 compute-0 nova_compute[243704]: 2025-12-13 04:02:02.664 243708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 04:02:02 compute-0 nova_compute[243704]: 2025-12-13 04:02:02.664 243708 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec 13 04:02:02 compute-0 nova_compute[243704]: 2025-12-13 04:02:02.664 243708 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec 13 04:02:02 compute-0 nova_compute[243704]: 2025-12-13 04:02:02.842 243708 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:02:02 compute-0 nova_compute[243704]: 2025-12-13 04:02:02.866 243708 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:02:02 compute-0 nova_compute[243704]: 2025-12-13 04:02:02.866 243708 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Dec 13 04:02:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.341 243708 INFO nova.virt.driver [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.439 243708 INFO nova.compute.provider_config [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.450 243708 DEBUG oslo_concurrency.lockutils [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.450 243708 DEBUG oslo_concurrency.lockutils [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.450 243708 DEBUG oslo_concurrency.lockutils [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.450 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.450 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.451 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.451 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.451 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.451 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.451 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.451 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.451 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.452 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.452 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.452 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.452 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.452 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.452 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.452 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.453 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.453 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.453 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.453 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.453 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.453 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.453 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.454 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.454 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.454 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.454 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.454 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.454 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.455 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.456 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.456 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.456 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.456 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.456 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.457 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.457 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.457 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.457 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.457 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.457 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.457 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.458 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.458 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.458 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.458 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.458 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.458 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.458 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.459 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.459 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.459 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.459 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.459 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.459 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.459 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.460 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.460 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.460 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.460 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.460 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.460 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.460 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.461 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.461 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.461 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.461 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.461 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.461 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.461 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.462 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.462 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.462 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.462 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.462 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.462 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.462 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.463 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.463 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.463 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.463 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.463 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.463 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.463 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.464 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.464 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.464 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.464 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.464 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.464 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.464 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.465 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.465 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.465 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.465 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.465 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.465 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.465 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.466 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.466 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.466 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.466 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.466 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.466 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.466 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.467 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.467 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.467 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.467 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.467 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.467 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.467 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.468 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.468 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.468 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.468 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.468 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.468 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.469 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.469 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.469 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.469 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.469 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.470 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.470 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.470 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.470 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.470 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.470 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.470 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.471 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.472 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.472 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.472 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.472 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.472 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.472 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.473 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.473 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.473 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.473 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.473 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.473 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.474 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.474 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.474 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.474 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.474 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.474 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.475 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.475 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.475 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.475 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.475 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.475 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.475 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.476 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.476 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.476 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.476 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.476 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.476 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.477 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.477 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.477 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.477 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.477 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.477 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.477 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.478 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.478 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.478 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.478 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.478 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.478 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.478 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.479 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.479 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.479 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.479 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.479 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.479 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.479 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.480 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.480 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.480 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.480 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.480 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.480 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.481 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.482 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.482 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.482 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.482 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.482 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.482 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.483 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.483 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.483 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.483 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.483 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.483 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.483 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.484 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.484 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.484 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.484 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.484 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.484 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.484 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.485 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.485 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.485 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.485 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.485 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.485 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.485 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.486 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.486 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.486 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.486 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.486 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.486 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.486 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.487 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.488 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.488 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.488 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.488 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.488 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.488 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.488 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.489 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.489 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.489 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.489 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.489 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.489 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.489 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.490 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.490 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.490 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.490 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.490 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.490 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.490 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.491 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.491 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.491 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.491 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.491 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.491 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.491 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.492 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.492 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.492 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.492 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.492 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.492 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.492 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.493 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.493 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.493 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.493 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.493 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.493 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.493 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.494 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.494 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.494 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.494 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.494 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.494 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.494 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.495 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.495 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.495 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.495 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.495 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.495 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.495 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.496 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.496 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.496 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.496 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.496 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.496 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.496 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.497 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.497 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.497 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.497 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.497 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.497 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.497 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.498 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.498 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.498 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.498 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.498 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.498 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.498 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.499 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.499 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.499 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.499 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.499 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.499 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.499 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.500 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.500 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.500 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.500 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.500 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.500 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.501 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.501 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.501 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.501 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.501 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.501 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.501 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.502 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.502 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.502 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.502 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.502 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.503 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.503 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.503 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.503 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.503 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.503 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.504 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.504 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.504 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.504 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.504 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.504 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.504 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.505 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.505 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.505 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.505 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.505 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.505 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.505 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.506 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.506 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.506 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.506 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.506 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.506 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.506 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.507 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.507 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.507 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.507 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.507 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.507 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.508 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.508 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.508 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.508 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.508 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.508 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.508 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.509 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.509 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.509 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.509 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.509 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.509 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.510 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.510 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.510 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.510 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.510 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.510 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.511 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.511 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.511 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.511 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.511 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.511 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.511 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.512 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.512 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.512 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.512 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.512 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.512 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.512 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.513 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.513 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.513 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.513 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.513 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.513 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.513 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.514 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.514 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.514 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.514 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.514 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.514 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.515 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.515 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.515 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.515 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.515 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.515 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.516 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.516 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.516 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.516 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.516 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.516 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.516 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.517 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.517 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.517 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.517 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.517 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.517 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.517 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.518 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.518 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.518 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.518 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.518 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.518 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.519 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.519 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.519 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.519 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.519 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.519 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.519 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.520 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.520 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.520 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.520 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.520 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.520 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.520 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.521 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.521 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.521 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.521 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.521 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.521 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.521 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.522 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.522 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.522 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.522 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.522 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.522 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.522 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.523 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.523 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.523 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.523 243708 WARNING oslo_config.cfg [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec 13 04:02:03 compute-0 nova_compute[243704]: live_migration_uri is deprecated for removal in favor of two other options that
Dec 13 04:02:03 compute-0 nova_compute[243704]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec 13 04:02:03 compute-0 nova_compute[243704]: and ``live_migration_inbound_addr`` respectively.
Dec 13 04:02:03 compute-0 nova_compute[243704]: ).  Its value may be silently ignored in the future.
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.523 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.523 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.524 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.524 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.524 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.524 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.524 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.524 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.524 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.525 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.525 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.525 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.525 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.525 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.525 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.526 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.526 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.526 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.526 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rbd_secret_uuid        = 437a9f04-06b7-56e3-8a4b-f52a1199dd32 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.526 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.526 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.527 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.527 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.527 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.527 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.527 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.527 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.527 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.528 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.528 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.528 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.528 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.528 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.529 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.529 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.529 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.529 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.530 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.530 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.530 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.530 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.530 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.530 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.531 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.531 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.531 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.531 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.531 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.532 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.532 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.532 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.532 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.532 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.532 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.533 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.533 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.533 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.533 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.533 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.533 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.533 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.534 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.534 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.534 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.534 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.534 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.534 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.535 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.535 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.535 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.535 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.535 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.535 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.535 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.536 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.536 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.536 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.536 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.536 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.536 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.536 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.537 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.537 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.537 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.537 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.537 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.537 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.538 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.538 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.538 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.538 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.538 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.539 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.539 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.539 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.539 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.540 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.540 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.540 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.540 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.540 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.540 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.541 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.541 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.541 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.541 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.541 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.541 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.542 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.542 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.542 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.542 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.542 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.543 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.543 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.543 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.543 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.543 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.543 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.544 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.544 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.544 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.544 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.544 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.545 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.545 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.545 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.545 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.545 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.546 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.547 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.547 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.547 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.547 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.547 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.548 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.548 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.548 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.548 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.548 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.549 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.549 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.549 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.549 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.549 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.549 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.550 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.550 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.550 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.550 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.550 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.551 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.551 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.551 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.551 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.551 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.551 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.551 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.552 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.552 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.552 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.552 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.552 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.552 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.552 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.553 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.553 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.553 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.553 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.553 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.553 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.553 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.554 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.554 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.554 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.554 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.554 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.554 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.555 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.555 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.555 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.555 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.555 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.555 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.555 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.556 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.556 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.556 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.556 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.556 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.556 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.557 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.557 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.557 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.557 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.557 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.557 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.558 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.559 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.559 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.559 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.559 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.559 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.559 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.560 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.560 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.560 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.560 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.560 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.560 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.560 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.561 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.561 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.561 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.561 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.561 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.561 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.562 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.562 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.562 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.562 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.562 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.562 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.562 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.563 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.563 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.563 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.563 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.563 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.563 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.563 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.564 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.564 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.564 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.564 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.565 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.565 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.565 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.565 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.565 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.565 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.566 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.566 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.566 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.566 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.566 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.566 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.566 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.567 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.567 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.567 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.567 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.567 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.567 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.567 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.568 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.568 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.568 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.568 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.568 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.568 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.569 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.569 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.569 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.569 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.570 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.570 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.570 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.570 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.570 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.570 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.571 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.571 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.571 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.571 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.571 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.571 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.571 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.572 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.572 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.572 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.572 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.572 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.572 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.573 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.573 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.573 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.573 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.573 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.573 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.573 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.574 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.574 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.574 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.574 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.574 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.574 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.574 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.575 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.575 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.575 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.575 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.575 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.575 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.575 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.576 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.576 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.576 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.576 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.576 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.576 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.577 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.577 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.577 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.577 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.577 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.577 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.577 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.578 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.578 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.578 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.578 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.578 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.578 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.578 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.579 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.579 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.579 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.579 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.579 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.579 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.579 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.580 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.580 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.580 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.580 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.580 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.580 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.581 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.581 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.581 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.581 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.581 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.581 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.581 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.582 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.582 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.582 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.582 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.582 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.583 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.583 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.583 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.583 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.583 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.584 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.584 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.584 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.584 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.585 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.585 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.585 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.585 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.585 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.586 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.586 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.586 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.586 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.586 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.586 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.586 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.587 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.587 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.587 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.587 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.587 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.587 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.587 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.588 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.588 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.588 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.588 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.588 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.588 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.589 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.589 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.589 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.589 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.589 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.589 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.590 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.590 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.590 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.590 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.590 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.590 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.590 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.591 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.591 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.591 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.591 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.591 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.591 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.591 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.592 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.592 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.592 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.592 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.592 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.592 243708 DEBUG oslo_service.service [None req-f2187728-9cb3-44d5-b822-f8a63a5591bb - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.593 243708 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.604 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.605 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.605 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.606 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Dec 13 04:02:03 compute-0 ceph-mon[75071]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.644 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f67517fdd90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.647 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f67517fdd90> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.648 243708 INFO nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Connection event '1' reason 'None'
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.654 243708 INFO nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Libvirt host capabilities <capabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]: 
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <host>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <uuid>90cce6d2-aa09-4bc1-a87e-fb31e9108c78</uuid>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <arch>x86_64</arch>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model>EPYC-Rome-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <vendor>AMD</vendor>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <microcode version='16777317'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <signature family='23' model='49' stepping='0'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <maxphysaddr mode='emulate' bits='40'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='x2apic'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='tsc-deadline'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='osxsave'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='hypervisor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='tsc_adjust'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='spec-ctrl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='stibp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='arch-capabilities'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='cmp_legacy'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='topoext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='virt-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='lbrv'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='tsc-scale'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='vmcb-clean'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='pause-filter'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='pfthreshold'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='svme-addr-chk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='rdctl-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='skip-l1dfl-vmentry'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='mds-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature name='pschange-mc-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <pages unit='KiB' size='4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <pages unit='KiB' size='2048'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <pages unit='KiB' size='1048576'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <power_management>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <suspend_mem/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </power_management>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <iommu support='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <migration_features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <live/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <uri_transports>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <uri_transport>tcp</uri_transport>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <uri_transport>rdma</uri_transport>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </uri_transports>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </migration_features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <topology>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <cells num='1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <cell id='0'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           <memory unit='KiB'>7864300</memory>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           <pages unit='KiB' size='4'>1966075</pages>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           <pages unit='KiB' size='2048'>0</pages>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           <pages unit='KiB' size='1048576'>0</pages>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           <distances>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <sibling id='0' value='10'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           </distances>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           <cpus num='8'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:           </cpus>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         </cell>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </cells>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </topology>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <cache>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </cache>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <secmodel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model>selinux</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <doi>0</doi>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </secmodel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <secmodel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model>dac</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <doi>0</doi>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <baselabel type='kvm'>+107:+107</baselabel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <baselabel type='qemu'>+107:+107</baselabel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </secmodel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </host>
Dec 13 04:02:03 compute-0 nova_compute[243704]: 
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <guest>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <os_type>hvm</os_type>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <arch name='i686'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <wordsize>32</wordsize>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <domain type='qemu'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <domain type='kvm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </arch>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <pae/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <nonpae/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <acpi default='on' toggle='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <apic default='on' toggle='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <cpuselection/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <deviceboot/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <disksnapshot default='on' toggle='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <externalSnapshot/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </guest>
Dec 13 04:02:03 compute-0 nova_compute[243704]: 
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <guest>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <os_type>hvm</os_type>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <arch name='x86_64'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <wordsize>64</wordsize>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <domain type='qemu'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <domain type='kvm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </arch>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <acpi default='on' toggle='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <apic default='on' toggle='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <cpuselection/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <deviceboot/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <disksnapshot default='on' toggle='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <externalSnapshot/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </guest>
Dec 13 04:02:03 compute-0 nova_compute[243704]: 
Dec 13 04:02:03 compute-0 nova_compute[243704]: </capabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]: 
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.660 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.669 243708 WARNING nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.670 243708 DEBUG nova.virt.libvirt.volume.mount [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.682 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec 13 04:02:03 compute-0 nova_compute[243704]: <domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <domain>kvm</domain>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <arch>i686</arch>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <vcpu max='4096'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <iothreads supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <os supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='firmware'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <loader supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>rom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pflash</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='readonly'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>yes</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='secure'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </loader>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </os>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-passthrough' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='hostPassthroughMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='maximum' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='maximumMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-model' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <vendor>AMD</vendor>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='x2apic'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='hypervisor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='stibp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='overflow-recov'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='succor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='amd-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lbrv'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-scale'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='flushbyasid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pause-filter'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pfthreshold'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='svme-addr-chk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='disable' name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='custom' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Dhyana-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-128'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-256'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-512'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v6'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v7'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <memoryBacking supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='sourceType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>anonymous</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>memfd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </memoryBacking>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <disk supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='diskDevice'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>disk</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cdrom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>floppy</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>lun</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>fdc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>sata</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <graphics supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vnc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egl-headless</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </graphics>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <video supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='modelType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vga</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cirrus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>none</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>bochs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ramfb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </video>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hostdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='mode'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>subsystem</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='startupPolicy'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>mandatory</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>requisite</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>optional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='subsysType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pci</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='capsType'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='pciBackend'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hostdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <rng supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>random</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <filesystem supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='driverType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>path</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>handle</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtiofs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </filesystem>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <tpm supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-tis</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-crb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emulator</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>external</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendVersion'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>2.0</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </tpm>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <redirdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </redirdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <channel supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </channel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <crypto supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </crypto>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <interface supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>passt</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <panic supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>isa</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>hyperv</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </panic>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <console supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>null</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dev</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pipe</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stdio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>udp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tcp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu-vdagent</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </console>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <gic supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <vmcoreinfo supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <genid supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backingStoreInput supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backup supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <async-teardown supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <ps2 supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sev supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sgx supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hyperv supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='features'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>relaxed</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vapic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>spinlocks</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vpindex</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>runtime</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>synic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stimer</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reset</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vendor_id</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>frequencies</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reenlightenment</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tlbflush</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ipi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>avic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emsr_bitmap</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>xmm_input</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <spinlocks>4095</spinlocks>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <stimer_direct>on</stimer_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hyperv>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <launchSecurity supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='sectype'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tdx</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </launchSecurity>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </features>
Dec 13 04:02:03 compute-0 nova_compute[243704]: </domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.689 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec 13 04:02:03 compute-0 nova_compute[243704]: <domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <domain>kvm</domain>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <arch>i686</arch>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <vcpu max='240'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <iothreads supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <os supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='firmware'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <loader supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>rom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pflash</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='readonly'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>yes</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='secure'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </loader>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </os>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-passthrough' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='hostPassthroughMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='maximum' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='maximumMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-model' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <vendor>AMD</vendor>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='x2apic'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='hypervisor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='stibp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='overflow-recov'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='succor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='amd-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lbrv'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-scale'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='flushbyasid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pause-filter'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pfthreshold'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='svme-addr-chk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='disable' name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='custom' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Dhyana-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 podman[244005]: 2025-12-13 04:02:03.736598281 +0000 UTC m=+0.086334870 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-128'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-256'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-512'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v6'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v7'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <memoryBacking supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='sourceType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>anonymous</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>memfd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </memoryBacking>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <disk supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='diskDevice'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>disk</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cdrom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>floppy</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>lun</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ide</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>fdc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>sata</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <graphics supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vnc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egl-headless</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </graphics>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <video supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='modelType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vga</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cirrus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>none</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>bochs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ramfb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </video>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hostdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='mode'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>subsystem</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='startupPolicy'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>mandatory</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>requisite</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>optional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='subsysType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pci</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='capsType'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='pciBackend'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hostdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <rng supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>random</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <filesystem supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='driverType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>path</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>handle</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtiofs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </filesystem>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <tpm supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-tis</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-crb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emulator</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>external</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendVersion'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>2.0</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </tpm>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <redirdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </redirdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <channel supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </channel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <crypto supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </crypto>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <interface supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>passt</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <panic supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>isa</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>hyperv</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </panic>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <console supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>null</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dev</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pipe</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stdio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>udp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tcp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu-vdagent</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </console>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <gic supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <vmcoreinfo supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <genid supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backingStoreInput supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backup supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <async-teardown supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <ps2 supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sev supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sgx supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hyperv supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='features'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>relaxed</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vapic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>spinlocks</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vpindex</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>runtime</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>synic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stimer</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reset</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vendor_id</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>frequencies</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reenlightenment</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tlbflush</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ipi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>avic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emsr_bitmap</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>xmm_input</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <spinlocks>4095</spinlocks>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <stimer_direct>on</stimer_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hyperv>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <launchSecurity supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='sectype'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tdx</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </launchSecurity>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </features>
Dec 13 04:02:03 compute-0 nova_compute[243704]: </domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.716 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.723 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec 13 04:02:03 compute-0 nova_compute[243704]: <domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <domain>kvm</domain>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <machine>pc-q35-rhel9.8.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <arch>x86_64</arch>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <vcpu max='4096'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <iothreads supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <os supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='firmware'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>efi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <loader supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>rom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pflash</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='readonly'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>yes</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='secure'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>yes</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </loader>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </os>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-passthrough' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='hostPassthroughMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='maximum' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='maximumMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-model' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <vendor>AMD</vendor>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='x2apic'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='hypervisor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='stibp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='overflow-recov'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='succor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='amd-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lbrv'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-scale'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='flushbyasid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pause-filter'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pfthreshold'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='svme-addr-chk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='disable' name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='custom' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Dhyana-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-128'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-256'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-512'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v6'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v7'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <memoryBacking supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='sourceType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>anonymous</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>memfd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </memoryBacking>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <disk supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='diskDevice'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>disk</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cdrom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>floppy</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>lun</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>fdc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>sata</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <graphics supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vnc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egl-headless</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </graphics>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <video supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='modelType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vga</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cirrus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>none</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>bochs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ramfb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </video>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hostdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='mode'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>subsystem</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='startupPolicy'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>mandatory</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>requisite</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>optional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='subsysType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pci</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='capsType'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='pciBackend'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hostdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <rng supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>random</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <filesystem supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='driverType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>path</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>handle</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtiofs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </filesystem>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <tpm supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-tis</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-crb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emulator</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>external</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendVersion'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>2.0</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </tpm>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <redirdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </redirdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <channel supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </channel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <crypto supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </crypto>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <interface supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>passt</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <panic supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>isa</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>hyperv</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </panic>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <console supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>null</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dev</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pipe</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stdio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>udp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tcp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu-vdagent</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </console>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <gic supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <vmcoreinfo supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <genid supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backingStoreInput supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backup supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <async-teardown supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <ps2 supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sev supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sgx supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hyperv supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='features'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>relaxed</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vapic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>spinlocks</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vpindex</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>runtime</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>synic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stimer</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reset</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vendor_id</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>frequencies</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reenlightenment</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tlbflush</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ipi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>avic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emsr_bitmap</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>xmm_input</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <spinlocks>4095</spinlocks>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <stimer_direct>on</stimer_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hyperv>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <launchSecurity supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='sectype'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tdx</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </launchSecurity>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </features>
Dec 13 04:02:03 compute-0 nova_compute[243704]: </domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.780 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec 13 04:02:03 compute-0 nova_compute[243704]: <domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <path>/usr/libexec/qemu-kvm</path>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <domain>kvm</domain>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <machine>pc-i440fx-rhel7.6.0</machine>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <arch>x86_64</arch>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <vcpu max='240'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <iothreads supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <os supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='firmware'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <loader supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>rom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pflash</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='readonly'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>yes</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='secure'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>no</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </loader>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </os>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-passthrough' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='hostPassthroughMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='maximum' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='maximumMigratable'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>on</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>off</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='host-model' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model fallback='forbid'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <vendor>AMD</vendor>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <maxphysaddr mode='passthrough' limit='40'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='x2apic'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-deadline'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='hypervisor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc_adjust'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='spec-ctrl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='stibp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='cmp_legacy'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='overflow-recov'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='succor'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='amd-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='virt-ssbd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lbrv'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='tsc-scale'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='vmcb-clean'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='flushbyasid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pause-filter'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='pfthreshold'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='svme-addr-chk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='require' name='lfence-always-serializing'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <feature policy='disable' name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <mode name='custom' supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Broadwell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cascadelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Cooperlake-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Denverton-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Dhyana-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Genoa-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='auto-ibrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Milan-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amd-psfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='no-nested-data-bp'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='null-sel-clr-base'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='stibp-always-on'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-Rome-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='EPYC-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='GraniteRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-128'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-256'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx10-512'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='prefetchiti'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Haswell-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-noTSX'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v6'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Icelake-Server-v7'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='IvyBridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='KnightsMill-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4fmaps'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-4vnniw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512er'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512pf'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G4-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Opteron_G5-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fma4'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tbm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xop'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SapphireRapids-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='amx-tile'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-bf16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-fp16'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512-vpopcntdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bitalg'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vbmi2'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrc'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fzrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='la57'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='taa-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='tsx-ldtrk'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xfd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='SierraForest-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ifma'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-ne-convert'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx-vnni-int8'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='bus-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cmpccxadd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fbsdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='fsrs'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ibrs-all'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mcdt-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pbrsb-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='psdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='sbdr-ssdp-no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='serialize'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vaes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='vpclmulqdq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Client-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='hle'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='rtm'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Skylake-Server-v5'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512bw'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512cd'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512dq'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512f'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='avx512vl'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='invpcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pcid'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='pku'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='mpx'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v2'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v3'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='core-capability'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='split-lock-detect'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='Snowridge-v4'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='cldemote'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='erms'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='gfni'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdir64b'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='movdiri'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='xsaves'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='athlon-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='core2duo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='coreduo-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='n270-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='ss'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <blockers model='phenom-v1'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnow'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <feature name='3dnowext'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </blockers>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </mode>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <memoryBacking supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <enum name='sourceType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>anonymous</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <value>memfd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </memoryBacking>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <disk supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='diskDevice'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>disk</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cdrom</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>floppy</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>lun</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ide</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>fdc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>sata</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <graphics supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vnc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egl-headless</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </graphics>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <video supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='modelType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vga</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>cirrus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>none</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>bochs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ramfb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </video>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hostdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='mode'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>subsystem</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='startupPolicy'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>mandatory</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>requisite</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>optional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='subsysType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pci</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>scsi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='capsType'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='pciBackend'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hostdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <rng supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtio-non-transitional</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>random</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>egd</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <filesystem supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='driverType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>path</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>handle</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>virtiofs</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </filesystem>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <tpm supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-tis</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tpm-crb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emulator</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>external</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendVersion'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>2.0</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </tpm>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <redirdev supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='bus'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>usb</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </redirdev>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <channel supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </channel>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <crypto supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendModel'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>builtin</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </crypto>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <interface supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='backendType'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>default</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>passt</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <panic supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='model'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>isa</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>hyperv</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </panic>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <console supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='type'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>null</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vc</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pty</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dev</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>file</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>pipe</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stdio</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>udp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tcp</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>unix</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>qemu-vdagent</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>dbus</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </console>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   <features>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <gic supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <vmcoreinfo supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <genid supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backingStoreInput supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <backup supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <async-teardown supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <ps2 supported='yes'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sev supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <sgx supported='no'/>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <hyperv supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='features'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>relaxed</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vapic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>spinlocks</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vpindex</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>runtime</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>synic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>stimer</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reset</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>vendor_id</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>frequencies</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>reenlightenment</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tlbflush</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>ipi</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>avic</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>emsr_bitmap</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>xmm_input</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <spinlocks>4095</spinlocks>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <stimer_direct>on</stimer_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_direct>on</tlbflush_direct>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <tlbflush_extended>on</tlbflush_extended>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <vendor_id>Linux KVM Hv</vendor_id>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </defaults>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </hyperv>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     <launchSecurity supported='yes'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       <enum name='sectype'>
Dec 13 04:02:03 compute-0 nova_compute[243704]:         <value>tdx</value>
Dec 13 04:02:03 compute-0 nova_compute[243704]:       </enum>
Dec 13 04:02:03 compute-0 nova_compute[243704]:     </launchSecurity>
Dec 13 04:02:03 compute-0 nova_compute[243704]:   </features>
Dec 13 04:02:03 compute-0 nova_compute[243704]: </domainCapabilities>
Dec 13 04:02:03 compute-0 nova_compute[243704]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.837 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.837 243708 INFO nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Secure Boot support detected
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.840 243708 INFO nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.840 243708 INFO nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.849 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.879 243708 INFO nova.virt.node [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Determined node identity 36c11063-1199-4cbe-b01b-7185aae56a2a from /var/lib/nova/compute_id
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.901 243708 WARNING nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Compute nodes ['36c11063-1199-4cbe-b01b-7185aae56a2a'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.939 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.972 243708 WARNING nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.972 243708 DEBUG oslo_concurrency.lockutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.972 243708 DEBUG oslo_concurrency.lockutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.973 243708 DEBUG oslo_concurrency.lockutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.973 243708 DEBUG nova.compute.resource_tracker [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:02:03 compute-0 nova_compute[243704]: 2025-12-13 04:02:03.973 243708 DEBUG oslo_concurrency.processutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:02:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:02:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2001277441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.523 243708 DEBUG oslo_concurrency.processutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:02:04 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Dec 13 04:02:04 compute-0 systemd[1]: Started libvirt nodedev daemon.
Dec 13 04:02:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2001277441' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.830 243708 WARNING nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.832 243708 DEBUG nova.compute.resource_tracker [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5114MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.832 243708 DEBUG oslo_concurrency.lockutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.832 243708 DEBUG oslo_concurrency.lockutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.844 243708 WARNING nova.compute.resource_tracker [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] No compute node record for compute-0.ctlplane.example.com:36c11063-1199-4cbe-b01b-7185aae56a2a: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 36c11063-1199-4cbe-b01b-7185aae56a2a could not be found.
Dec 13 04:02:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.872 243708 INFO nova.compute.resource_tracker [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 36c11063-1199-4cbe-b01b-7185aae56a2a
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.959 243708 DEBUG nova.compute.resource_tracker [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:02:04 compute-0 nova_compute[243704]: 2025-12-13 04:02:04.959 243708 DEBUG nova.compute.resource_tracker [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:02:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:05 compute-0 ceph-mon[75071]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:05 compute-0 nova_compute[243704]: 2025-12-13 04:02:05.947 243708 INFO nova.scheduler.client.report [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [req-2176d522-5263-4550-8513-daa74e225b1b] Created resource provider record via placement API for resource provider with UUID 36c11063-1199-4cbe-b01b-7185aae56a2a and name compute-0.ctlplane.example.com.
Dec 13 04:02:06 compute-0 nova_compute[243704]: 2025-12-13 04:02:06.412 243708 DEBUG oslo_concurrency.processutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:02:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:02:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4169195210' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:06 compute-0 nova_compute[243704]: 2025-12-13 04:02:06.968 243708 DEBUG oslo_concurrency.processutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:02:06 compute-0 nova_compute[243704]: 2025-12-13 04:02:06.975 243708 DEBUG nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Dec 13 04:02:06 compute-0 nova_compute[243704]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Dec 13 04:02:06 compute-0 nova_compute[243704]: 2025-12-13 04:02:06.976 243708 INFO nova.virt.libvirt.host [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] kernel doesn't support AMD SEV
Dec 13 04:02:06 compute-0 nova_compute[243704]: 2025-12-13 04:02:06.977 243708 DEBUG nova.compute.provider_tree [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:02:06 compute-0 nova_compute[243704]: 2025-12-13 04:02:06.977 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:02:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4169195210' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.022 243708 DEBUG nova.scheduler.client.report [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Updated inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.022 243708 DEBUG nova.compute.provider_tree [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Updating resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.023 243708 DEBUG nova.compute.provider_tree [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:02:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.122 243708 DEBUG nova.compute.provider_tree [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Updating resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.157 243708 DEBUG nova.compute.resource_tracker [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.158 243708 DEBUG oslo_concurrency.lockutils [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.159 243708 DEBUG nova.service [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.236 243708 DEBUG nova.service [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Dec 13 04:02:07 compute-0 nova_compute[243704]: 2025-12-13 04:02:07.237 243708 DEBUG nova.servicegroup.drivers.db [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Dec 13 04:02:08 compute-0 ceph-mon[75071]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:10 compute-0 ceph-mon[75071]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:12 compute-0 ceph-mon[75071]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:13 compute-0 sudo[244102]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:02:13 compute-0 sudo[244102]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:13 compute-0 sudo[244102]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:13 compute-0 sudo[244127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:02:13 compute-0 sudo[244127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:14 compute-0 sudo[244127]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:02:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:02:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:02:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:02:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:02:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:02:14 compute-0 sudo[244183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:02:14 compute-0 sudo[244183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:14 compute-0 sudo[244183]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:14 compute-0 sudo[244209]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:02:14 compute-0 sudo[244209]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:14 compute-0 podman[244207]: 2025-12-13 04:02:14.612825652 +0000 UTC m=+0.065171860 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:02:14 compute-0 podman[244265]: 2025-12-13 04:02:14.853930507 +0000 UTC m=+0.046376004 container create ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:02:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:14 compute-0 systemd[1]: Started libpod-conmon-ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b.scope.
Dec 13 04:02:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:02:14 compute-0 podman[244265]: 2025-12-13 04:02:14.834137604 +0000 UTC m=+0.026583121 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:14 compute-0 podman[244265]: 2025-12-13 04:02:14.932249395 +0000 UTC m=+0.124694942 container init ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:02:14 compute-0 podman[244265]: 2025-12-13 04:02:14.943323849 +0000 UTC m=+0.135769346 container start ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:02:14 compute-0 podman[244265]: 2025-12-13 04:02:14.947175325 +0000 UTC m=+0.139620912 container attach ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:02:14 compute-0 mystifying_sutherland[244281]: 167 167
Dec 13 04:02:14 compute-0 systemd[1]: libpod-ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b.scope: Deactivated successfully.
Dec 13 04:02:14 compute-0 conmon[244281]: conmon ce3f539baabfb022e22c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b.scope/container/memory.events
Dec 13 04:02:14 compute-0 podman[244265]: 2025-12-13 04:02:14.950103395 +0000 UTC m=+0.142548972 container died ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:02:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cccb91c4b8af5fa9baeaee05ba71aa4749b2a3e81aec85124eb14acb538f686-merged.mount: Deactivated successfully.
Dec 13 04:02:15 compute-0 podman[244265]: 2025-12-13 04:02:15.003264613 +0000 UTC m=+0.195710140 container remove ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_sutherland, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:02:15 compute-0 systemd[1]: libpod-conmon-ce3f539baabfb022e22cbe03712cdb9cc28b3dbee93a34a1a8c66129f482da0b.scope: Deactivated successfully.
Dec 13 04:02:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:15 compute-0 podman[244307]: 2025-12-13 04:02:15.167953232 +0000 UTC m=+0.041624583 container create fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:02:15 compute-0 systemd[1]: Started libpod-conmon-fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9.scope.
Dec 13 04:02:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f95fd5e44db64201044ef369b7b2ae99b1c12de28f66cb48a9c4a40b731d40/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f95fd5e44db64201044ef369b7b2ae99b1c12de28f66cb48a9c4a40b731d40/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f95fd5e44db64201044ef369b7b2ae99b1c12de28f66cb48a9c4a40b731d40/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f95fd5e44db64201044ef369b7b2ae99b1c12de28f66cb48a9c4a40b731d40/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/12f95fd5e44db64201044ef369b7b2ae99b1c12de28f66cb48a9c4a40b731d40/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:15 compute-0 podman[244307]: 2025-12-13 04:02:15.150075871 +0000 UTC m=+0.023747242 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:15 compute-0 podman[244307]: 2025-12-13 04:02:15.258157477 +0000 UTC m=+0.131828838 container init fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:02:15 compute-0 podman[244307]: 2025-12-13 04:02:15.272752327 +0000 UTC m=+0.146423678 container start fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:02:15 compute-0 podman[244307]: 2025-12-13 04:02:15.276895121 +0000 UTC m=+0.150566502 container attach fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:02:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:02:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:02:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:02:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:02:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:02:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:02:15 compute-0 agitated_liskov[244324]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:02:15 compute-0 agitated_liskov[244324]: --> All data devices are unavailable
Dec 13 04:02:15 compute-0 systemd[1]: libpod-fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9.scope: Deactivated successfully.
Dec 13 04:02:15 compute-0 conmon[244324]: conmon fb1242f645b920af1bab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9.scope/container/memory.events
Dec 13 04:02:15 compute-0 podman[244307]: 2025-12-13 04:02:15.923427549 +0000 UTC m=+0.797098920 container died fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:02:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-12f95fd5e44db64201044ef369b7b2ae99b1c12de28f66cb48a9c4a40b731d40-merged.mount: Deactivated successfully.
Dec 13 04:02:15 compute-0 podman[244307]: 2025-12-13 04:02:15.968925428 +0000 UTC m=+0.842596819 container remove fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=agitated_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:02:15 compute-0 systemd[1]: libpod-conmon-fb1242f645b920af1bab9afde0f0ee2b5969676e30237793ebe81eedf69500d9.scope: Deactivated successfully.
Dec 13 04:02:16 compute-0 sudo[244209]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:16 compute-0 sudo[244356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:02:16 compute-0 sudo[244356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:16 compute-0 sudo[244356]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:16 compute-0 sudo[244381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:02:16 compute-0 sudo[244381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:16 compute-0 podman[244418]: 2025-12-13 04:02:16.433906265 +0000 UTC m=+0.041029897 container create 684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:02:16 compute-0 ceph-mon[75071]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:16 compute-0 systemd[1]: Started libpod-conmon-684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b.scope.
Dec 13 04:02:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:02:16 compute-0 podman[244418]: 2025-12-13 04:02:16.513342794 +0000 UTC m=+0.120466446 container init 684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:02:16 compute-0 podman[244418]: 2025-12-13 04:02:16.418586834 +0000 UTC m=+0.025710496 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:16 compute-0 podman[244418]: 2025-12-13 04:02:16.520397868 +0000 UTC m=+0.127521520 container start 684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:02:16 compute-0 sad_hypatia[244435]: 167 167
Dec 13 04:02:16 compute-0 systemd[1]: libpod-684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b.scope: Deactivated successfully.
Dec 13 04:02:16 compute-0 podman[244418]: 2025-12-13 04:02:16.526248218 +0000 UTC m=+0.133371880 container attach 684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:16 compute-0 podman[244418]: 2025-12-13 04:02:16.526786793 +0000 UTC m=+0.133910445 container died 684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:02:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-e76d5d5b573b6b1401759bcc37397106fdd16d1d22d40dd926523defb80f4925-merged.mount: Deactivated successfully.
Dec 13 04:02:16 compute-0 podman[244418]: 2025-12-13 04:02:16.557712701 +0000 UTC m=+0.164836333 container remove 684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_hypatia, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:16 compute-0 systemd[1]: libpod-conmon-684d634c3174f1af3e455fc05a3ffe16a2790ab093f1ad8fe21e8287a97b359b.scope: Deactivated successfully.
Dec 13 04:02:16 compute-0 podman[244458]: 2025-12-13 04:02:16.709616919 +0000 UTC m=+0.041212882 container create 655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:02:16 compute-0 systemd[1]: Started libpod-conmon-655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e.scope.
Dec 13 04:02:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd9e50eec01d0ae0a7e019774a27247acaf5770765989fbef3b20b524b15e84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:16 compute-0 podman[244458]: 2025-12-13 04:02:16.691019189 +0000 UTC m=+0.022615122 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd9e50eec01d0ae0a7e019774a27247acaf5770765989fbef3b20b524b15e84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd9e50eec01d0ae0a7e019774a27247acaf5770765989fbef3b20b524b15e84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bcd9e50eec01d0ae0a7e019774a27247acaf5770765989fbef3b20b524b15e84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:16 compute-0 podman[244458]: 2025-12-13 04:02:16.801462679 +0000 UTC m=+0.133058672 container init 655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:02:16 compute-0 podman[244458]: 2025-12-13 04:02:16.808933184 +0000 UTC m=+0.140529117 container start 655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:02:16 compute-0 podman[244458]: 2025-12-13 04:02:16.81245473 +0000 UTC m=+0.144050683 container attach 655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:02:17 compute-0 objective_faraday[244475]: {
Dec 13 04:02:17 compute-0 objective_faraday[244475]:     "0": [
Dec 13 04:02:17 compute-0 objective_faraday[244475]:         {
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "devices": [
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "/dev/loop3"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             ],
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_name": "ceph_lv0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_size": "21470642176",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "name": "ceph_lv0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "tags": {
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cluster_name": "ceph",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.crush_device_class": "",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.encrypted": "0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.objectstore": "bluestore",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osd_id": "0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.type": "block",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.vdo": "0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.with_tpm": "0"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             },
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "type": "block",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "vg_name": "ceph_vg0"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:         }
Dec 13 04:02:17 compute-0 objective_faraday[244475]:     ],
Dec 13 04:02:17 compute-0 objective_faraday[244475]:     "1": [
Dec 13 04:02:17 compute-0 objective_faraday[244475]:         {
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "devices": [
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "/dev/loop4"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             ],
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_name": "ceph_lv1",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_size": "21470642176",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "name": "ceph_lv1",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "tags": {
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cluster_name": "ceph",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.crush_device_class": "",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.encrypted": "0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.objectstore": "bluestore",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osd_id": "1",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.type": "block",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.vdo": "0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.with_tpm": "0"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             },
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "type": "block",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "vg_name": "ceph_vg1"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:         }
Dec 13 04:02:17 compute-0 objective_faraday[244475]:     ],
Dec 13 04:02:17 compute-0 objective_faraday[244475]:     "2": [
Dec 13 04:02:17 compute-0 objective_faraday[244475]:         {
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "devices": [
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "/dev/loop5"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             ],
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_name": "ceph_lv2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_size": "21470642176",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "name": "ceph_lv2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "tags": {
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.cluster_name": "ceph",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.crush_device_class": "",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.encrypted": "0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.objectstore": "bluestore",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osd_id": "2",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.type": "block",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.vdo": "0",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:                 "ceph.with_tpm": "0"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             },
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "type": "block",
Dec 13 04:02:17 compute-0 objective_faraday[244475]:             "vg_name": "ceph_vg2"
Dec 13 04:02:17 compute-0 objective_faraday[244475]:         }
Dec 13 04:02:17 compute-0 objective_faraday[244475]:     ]
Dec 13 04:02:17 compute-0 objective_faraday[244475]: }
Dec 13 04:02:17 compute-0 systemd[1]: libpod-655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e.scope: Deactivated successfully.
Dec 13 04:02:17 compute-0 podman[244458]: 2025-12-13 04:02:17.096918354 +0000 UTC m=+0.428514297 container died 655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:02:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-bcd9e50eec01d0ae0a7e019774a27247acaf5770765989fbef3b20b524b15e84-merged.mount: Deactivated successfully.
Dec 13 04:02:17 compute-0 podman[244458]: 2025-12-13 04:02:17.137267342 +0000 UTC m=+0.468863285 container remove 655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_faraday, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:02:17 compute-0 systemd[1]: libpod-conmon-655c604f5d2172bdede4f0920804bcecfdd26ded403304dd2b810e874290b75e.scope: Deactivated successfully.
Dec 13 04:02:17 compute-0 sudo[244381]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:17 compute-0 sudo[244496]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:02:17 compute-0 sudo[244496]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:17 compute-0 sudo[244496]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:17 compute-0 sudo[244521]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:02:17 compute-0 sudo[244521]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:17 compute-0 podman[244559]: 2025-12-13 04:02:17.590078225 +0000 UTC m=+0.040662186 container create 9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pascal, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:02:17 compute-0 systemd[1]: Started libpod-conmon-9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e.scope.
Dec 13 04:02:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:02:17 compute-0 podman[244559]: 2025-12-13 04:02:17.571312421 +0000 UTC m=+0.021896402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:17 compute-0 podman[244559]: 2025-12-13 04:02:17.668825616 +0000 UTC m=+0.119409607 container init 9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 13 04:02:17 compute-0 podman[244559]: 2025-12-13 04:02:17.677690659 +0000 UTC m=+0.128274620 container start 9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:02:17 compute-0 boring_pascal[244576]: 167 167
Dec 13 04:02:17 compute-0 systemd[1]: libpod-9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e.scope: Deactivated successfully.
Dec 13 04:02:17 compute-0 podman[244559]: 2025-12-13 04:02:17.683072627 +0000 UTC m=+0.133656588 container attach 9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:02:17 compute-0 podman[244559]: 2025-12-13 04:02:17.683379555 +0000 UTC m=+0.133963526 container died 9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:02:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4d7ddce2c350a7d59eed64205c8de6c3c24b342d2e0871a19c8aa47e40d7d6dc-merged.mount: Deactivated successfully.
Dec 13 04:02:17 compute-0 podman[244559]: 2025-12-13 04:02:17.714224062 +0000 UTC m=+0.164808023 container remove 9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:02:17 compute-0 systemd[1]: libpod-conmon-9698bc4c77795b428b3d7c111fa93a35b77a996e460a72fdf46663451145a86e.scope: Deactivated successfully.
Dec 13 04:02:17 compute-0 podman[244601]: 2025-12-13 04:02:17.87890655 +0000 UTC m=+0.048652197 container create 1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:02:17 compute-0 systemd[1]: Started libpod-conmon-1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8.scope.
Dec 13 04:02:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:02:17 compute-0 podman[244601]: 2025-12-13 04:02:17.856151935 +0000 UTC m=+0.025897602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4e36e33dd302d0d980b4b17df8837bf54781152d041cba6ad9d447a3cdf9ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4e36e33dd302d0d980b4b17df8837bf54781152d041cba6ad9d447a3cdf9ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4e36e33dd302d0d980b4b17df8837bf54781152d041cba6ad9d447a3cdf9ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d4e36e33dd302d0d980b4b17df8837bf54781152d041cba6ad9d447a3cdf9ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:02:17 compute-0 podman[244601]: 2025-12-13 04:02:17.964683953 +0000 UTC m=+0.134429600 container init 1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:02:17 compute-0 podman[244601]: 2025-12-13 04:02:17.97335344 +0000 UTC m=+0.143099087 container start 1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:02:17 compute-0 podman[244601]: 2025-12-13 04:02:17.976790214 +0000 UTC m=+0.146535861 container attach 1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:02:18 compute-0 ceph-mon[75071]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:18 compute-0 lvm[244694]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:02:18 compute-0 lvm[244694]: VG ceph_vg0 finished
Dec 13 04:02:18 compute-0 lvm[244696]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:02:18 compute-0 lvm[244696]: VG ceph_vg1 finished
Dec 13 04:02:18 compute-0 lvm[244698]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:02:18 compute-0 lvm[244698]: VG ceph_vg2 finished
Dec 13 04:02:18 compute-0 lvm[244699]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:02:18 compute-0 lvm[244699]: VG ceph_vg0 finished
Dec 13 04:02:18 compute-0 wonderful_engelbart[244617]: {}
Dec 13 04:02:18 compute-0 systemd[1]: libpod-1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8.scope: Deactivated successfully.
Dec 13 04:02:18 compute-0 systemd[1]: libpod-1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8.scope: Consumed 1.384s CPU time.
Dec 13 04:02:18 compute-0 podman[244601]: 2025-12-13 04:02:18.863714148 +0000 UTC m=+1.033459795 container died 1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d4e36e33dd302d0d980b4b17df8837bf54781152d041cba6ad9d447a3cdf9ed-merged.mount: Deactivated successfully.
Dec 13 04:02:18 compute-0 podman[244601]: 2025-12-13 04:02:18.903016207 +0000 UTC m=+1.072761854 container remove 1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:02:18 compute-0 systemd[1]: libpod-conmon-1e9f8112db4bf140a0143114a67190aa445f86a0e29945c2cc2a2802bc29f5f8.scope: Deactivated successfully.
Dec 13 04:02:18 compute-0 sudo[244521]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:02:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:02:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:02:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:02:19 compute-0 sudo[244712]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:02:19 compute-0 sudo[244712]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:02:19 compute-0 sudo[244712]: pam_unix(sudo:session): session closed for user root
Dec 13 04:02:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:02:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:02:19 compute-0 ceph-mon[75071]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:22 compute-0 ceph-mon[75071]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:24 compute-0 ceph-mon[75071]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:24 compute-0 podman[244737]: 2025-12-13 04:02:24.93060388 +0000 UTC m=+0.073029105 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:02:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:02:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4071457745' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:02:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:02:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4071457745' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:02:26 compute-0 ceph-mon[75071]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4071457745' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:02:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4071457745' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:02:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:02:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1509420764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:02:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:02:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1509420764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:02:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:02:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3274924676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:02:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:02:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3274924676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:02:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1509420764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:02:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1509420764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:02:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3274924676' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:02:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3274924676' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:02:28 compute-0 ceph-mon[75071]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:30 compute-0 ceph-mon[75071]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:32 compute-0 ceph-mon[75071]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:33 compute-0 ceph-mon[75071]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:33 compute-0 podman[244757]: 2025-12-13 04:02:33.958102427 +0000 UTC m=+0.101958438 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:02:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:02:35.075 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:02:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:02:35.076 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:02:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:02:35.076 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:02:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:36 compute-0 ceph-mon[75071]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:38 compute-0 ceph-mon[75071]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:38 compute-0 nova_compute[243704]: 2025-12-13 04:02:38.240 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:02:38 compute-0 nova_compute[243704]: 2025-12-13 04:02:38.255 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:02:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:40 compute-0 ceph-mon[75071]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:02:40
Dec 13 04:02:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:02:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:02:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'default.rgw.log', 'volumes', '.rgw.root', 'default.rgw.meta', 'backups', 'cephfs.cephfs.data', 'vms']
Dec 13 04:02:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:02:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:02:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:02:42 compute-0 ceph-mon[75071]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:43 compute-0 ceph-mon[75071]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:44 compute-0 podman[244785]: 2025-12-13 04:02:44.904846132 +0000 UTC m=+0.055662208 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_metadata_agent, 
org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:02:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:46 compute-0 ceph-mon[75071]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:48 compute-0 ceph-mon[75071]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:50 compute-0 ceph-mon[75071]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:02:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:02:52 compute-0 ceph-mon[75071]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:54 compute-0 ceph-mon[75071]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:02:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:55 compute-0 ceph-mon[75071]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:55 compute-0 podman[244804]: 2025-12-13 04:02:55.912710903 +0000 UTC m=+0.053633532 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 04:02:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:58 compute-0 ceph-mon[75071]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:02:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:00 compute-0 ceph-mon[75071]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:02 compute-0 ceph-mon[75071]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.878 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.879 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.879 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.879 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.928 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.929 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.929 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.929 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.930 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.930 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.930 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.930 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.930 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.983 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.983 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.983 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.984 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:03:02 compute-0 nova_compute[243704]: 2025-12-13 04:03:02.984 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:03:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:03:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873429318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:03 compute-0 nova_compute[243704]: 2025-12-13 04:03:03.580 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:03:03 compute-0 nova_compute[243704]: 2025-12-13 04:03:03.804 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:03:03 compute-0 nova_compute[243704]: 2025-12-13 04:03:03.806 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5123MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:03:03 compute-0 nova_compute[243704]: 2025-12-13 04:03:03.806 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:03:03 compute-0 nova_compute[243704]: 2025-12-13 04:03:03.806 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:03:03 compute-0 nova_compute[243704]: 2025-12-13 04:03:03.960 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:03:03 compute-0 nova_compute[243704]: 2025-12-13 04:03:03.961 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:03:04 compute-0 nova_compute[243704]: 2025-12-13 04:03:04.005 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:03:04 compute-0 ceph-mon[75071]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1873429318' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:03:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2783076103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:04 compute-0 nova_compute[243704]: 2025-12-13 04:03:04.519 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:03:04 compute-0 nova_compute[243704]: 2025-12-13 04:03:04.525 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:03:04 compute-0 nova_compute[243704]: 2025-12-13 04:03:04.591 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:03:04 compute-0 nova_compute[243704]: 2025-12-13 04:03:04.630 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:03:04 compute-0 nova_compute[243704]: 2025-12-13 04:03:04.630 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:03:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:04 compute-0 podman[244868]: 2025-12-13 04:03:04.991293583 +0000 UTC m=+0.140774983 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:03:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2783076103' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:03:06 compute-0 ceph-mon[75071]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:08 compute-0 ceph-mon[75071]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:10 compute-0 ceph-mon[75071]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:11 compute-0 ceph-mon[75071]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:14 compute-0 ceph-mon[75071]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:03:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Cumulative writes: 3314 writes, 14K keys, 3314 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3314 writes, 3314 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1266 writes, 5526 keys, 1266 commit groups, 1.0 writes per commit group, ingest: 8.53 MB, 0.01 MB/s
                                           Interval WAL: 1266 writes, 1266 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     80.9      0.18              0.04         6    0.030       0      0       0.0       0.0
                                             L6      1/0    7.44 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.4    137.9    114.6      0.31              0.11         5    0.063     19K   2187       0.0       0.0
                                            Sum      1/0    7.44 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   3.4     87.2    102.2      0.49              0.15        11    0.045     19K   2187       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.6    107.4    109.8      0.25              0.08         6    0.042     12K   1453       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    137.9    114.6      0.31              0.11         5    0.063     19K   2187       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    116.0      0.13              0.04         5    0.025       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.014, interval 0.006
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.05 GB write, 0.04 MB/s write, 0.04 GB read, 0.04 MB/s read, 0.5 seconds
                                           Interval compaction: 0.03 GB write, 0.05 MB/s write, 0.03 GB read, 0.04 MB/s read, 0.3 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556f7ce578d0#2 capacity: 308.00 MB usage: 1.49 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(85,1.30 MB,0.421281%) FilterBlock(12,63.55 KB,0.0201485%) IndexBlock(12,130.14 KB,0.0412631%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 04:03:15 compute-0 podman[244894]: 2025-12-13 04:03:15.898940526 +0000 UTC m=+0.050682241 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:03:16 compute-0 ceph-mon[75071]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:18 compute-0 ceph-mon[75071]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:19 compute-0 sudo[244913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:03:19 compute-0 sudo[244913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:19 compute-0 sudo[244913]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:19 compute-0 sudo[244938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:03:19 compute-0 sudo[244938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:19 compute-0 sudo[244938]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:03:19 compute-0 sudo[244993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:03:19 compute-0 sudo[244993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:19 compute-0 sudo[244993]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:19 compute-0 sudo[245018]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:03:19 compute-0 sudo[245018]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 04:03:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3924147122' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 04:03:19 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 04:03:19 compute-0 ceph-mgr[75360]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 04:03:19 compute-0 ceph-mgr[75360]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 04:03:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:20 compute-0 podman[245055]: 2025-12-13 04:03:20.031240709 +0000 UTC m=+0.044175053 container create 4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:03:20 compute-0 systemd[1]: Started libpod-conmon-4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c.scope.
Dec 13 04:03:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:03:20 compute-0 podman[245055]: 2025-12-13 04:03:20.094676349 +0000 UTC m=+0.107610713 container init 4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:03:20 compute-0 podman[245055]: 2025-12-13 04:03:20.10234975 +0000 UTC m=+0.115284094 container start 4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:03:20 compute-0 podman[245055]: 2025-12-13 04:03:20.105755253 +0000 UTC m=+0.118689597 container attach 4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:03:20 compute-0 bold_booth[245071]: 167 167
Dec 13 04:03:20 compute-0 systemd[1]: libpod-4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c.scope: Deactivated successfully.
Dec 13 04:03:20 compute-0 conmon[245071]: conmon 4d58ef966cd92878afaf <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c.scope/container/memory.events
Dec 13 04:03:20 compute-0 podman[245055]: 2025-12-13 04:03:20.013937645 +0000 UTC m=+0.026872009 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:20 compute-0 podman[245055]: 2025-12-13 04:03:20.109854646 +0000 UTC m=+0.122788990 container died 4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:03:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-6619d4785701e4bbbae6483a36d14d6f12cd68928ab4ac814ad4f275db521a79-merged.mount: Deactivated successfully.
Dec 13 04:03:20 compute-0 podman[245055]: 2025-12-13 04:03:20.152382953 +0000 UTC m=+0.165317307 container remove 4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 04:03:20 compute-0 systemd[1]: libpod-conmon-4d58ef966cd92878afafe347457c0c81fc16bc25eb03f53aff0962fd82ed792c.scope: Deactivated successfully.
Dec 13 04:03:20 compute-0 podman[245094]: 2025-12-13 04:03:20.307088477 +0000 UTC m=+0.041922631 container create 8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kare, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:03:20 compute-0 systemd[1]: Started libpod-conmon-8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a.scope.
Dec 13 04:03:20 compute-0 ceph-mon[75071]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:03:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:03:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:03:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:03:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:03:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:03:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:03:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3924147122' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 04:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f3c60392e2ac3567788e8c57a81d43f7ff30cd57694650d2f3b0595e96e6ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f3c60392e2ac3567788e8c57a81d43f7ff30cd57694650d2f3b0595e96e6ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f3c60392e2ac3567788e8c57a81d43f7ff30cd57694650d2f3b0595e96e6ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f3c60392e2ac3567788e8c57a81d43f7ff30cd57694650d2f3b0595e96e6ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f3c60392e2ac3567788e8c57a81d43f7ff30cd57694650d2f3b0595e96e6ed/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:20 compute-0 podman[245094]: 2025-12-13 04:03:20.289964037 +0000 UTC m=+0.024798221 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:20 compute-0 podman[245094]: 2025-12-13 04:03:20.387955436 +0000 UTC m=+0.122789590 container init 8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kare, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:20 compute-0 podman[245094]: 2025-12-13 04:03:20.395805822 +0000 UTC m=+0.130639976 container start 8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:03:20 compute-0 podman[245094]: 2025-12-13 04:03:20.400155181 +0000 UTC m=+0.134989365 container attach 8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kare, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:03:20 compute-0 boring_kare[245110]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:03:20 compute-0 boring_kare[245110]: --> All data devices are unavailable
Dec 13 04:03:20 compute-0 systemd[1]: libpod-8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a.scope: Deactivated successfully.
Dec 13 04:03:20 compute-0 podman[245094]: 2025-12-13 04:03:20.845136079 +0000 UTC m=+0.579970243 container died 8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:03:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0f3c60392e2ac3567788e8c57a81d43f7ff30cd57694650d2f3b0595e96e6ed-merged.mount: Deactivated successfully.
Dec 13 04:03:20 compute-0 podman[245094]: 2025-12-13 04:03:20.883855161 +0000 UTC m=+0.618689315 container remove 8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:20 compute-0 systemd[1]: libpod-conmon-8bc3bc078a24c1f55d37ee885b50bc8195945b1b32d454ae65e3daa0b1476d6a.scope: Deactivated successfully.
Dec 13 04:03:20 compute-0 sudo[245018]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:21 compute-0 sudo[245144]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:03:21 compute-0 sudo[245144]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:21 compute-0 sudo[245144]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:21 compute-0 sudo[245169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:03:21 compute-0 sudo[245169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:21 compute-0 podman[245205]: 2025-12-13 04:03:21.360730355 +0000 UTC m=+0.037398627 container create 87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:03:21 compute-0 ceph-mon[75071]: from='client.14340 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 04:03:21 compute-0 systemd[1]: Started libpod-conmon-87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617.scope.
Dec 13 04:03:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:03:21 compute-0 podman[245205]: 2025-12-13 04:03:21.425159512 +0000 UTC m=+0.101827784 container init 87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:21 compute-0 podman[245205]: 2025-12-13 04:03:21.431825186 +0000 UTC m=+0.108493458 container start 87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:03:21 compute-0 funny_dubinsky[245221]: 167 167
Dec 13 04:03:21 compute-0 podman[245205]: 2025-12-13 04:03:21.43671439 +0000 UTC m=+0.113382682 container attach 87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:03:21 compute-0 systemd[1]: libpod-87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617.scope: Deactivated successfully.
Dec 13 04:03:21 compute-0 podman[245205]: 2025-12-13 04:03:21.437475971 +0000 UTC m=+0.114144253 container died 87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:03:21 compute-0 podman[245205]: 2025-12-13 04:03:21.345272421 +0000 UTC m=+0.021940713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-77b957581150cbc5227acfc7caf77865c4f37dbe0f4cec60a6356836cd9044b8-merged.mount: Deactivated successfully.
Dec 13 04:03:21 compute-0 podman[245205]: 2025-12-13 04:03:21.471136074 +0000 UTC m=+0.147804346 container remove 87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_dubinsky, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:03:21 compute-0 systemd[1]: libpod-conmon-87740e43d1af229df70fb5cc20c7fad8809e8ce675abf158dfad944d08a68617.scope: Deactivated successfully.
Dec 13 04:03:21 compute-0 podman[245245]: 2025-12-13 04:03:21.633209391 +0000 UTC m=+0.044602065 container create 3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:03:21 compute-0 systemd[1]: Started libpod-conmon-3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750.scope.
Dec 13 04:03:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4cf6a224c29513ef6abeaa2f028e231735a05d7384371bf09d2f947803f0a5f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:21 compute-0 podman[245245]: 2025-12-13 04:03:21.613237333 +0000 UTC m=+0.024630027 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4cf6a224c29513ef6abeaa2f028e231735a05d7384371bf09d2f947803f0a5f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4cf6a224c29513ef6abeaa2f028e231735a05d7384371bf09d2f947803f0a5f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4cf6a224c29513ef6abeaa2f028e231735a05d7384371bf09d2f947803f0a5f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:21 compute-0 podman[245245]: 2025-12-13 04:03:21.719398285 +0000 UTC m=+0.130790979 container init 3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mclaren, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:03:21 compute-0 podman[245245]: 2025-12-13 04:03:21.725653167 +0000 UTC m=+0.137045841 container start 3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Dec 13 04:03:21 compute-0 podman[245245]: 2025-12-13 04:03:21.728719782 +0000 UTC m=+0.140112456 container attach 3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]: {
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:     "0": [
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:         {
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "devices": [
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "/dev/loop3"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             ],
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_name": "ceph_lv0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_size": "21470642176",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "name": "ceph_lv0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "tags": {
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cluster_name": "ceph",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.crush_device_class": "",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.encrypted": "0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.objectstore": "bluestore",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osd_id": "0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.type": "block",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.vdo": "0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.with_tpm": "0"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             },
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "type": "block",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "vg_name": "ceph_vg0"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:         }
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:     ],
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:     "1": [
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:         {
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "devices": [
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "/dev/loop4"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             ],
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_name": "ceph_lv1",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_size": "21470642176",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "name": "ceph_lv1",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "tags": {
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cluster_name": "ceph",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.crush_device_class": "",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.encrypted": "0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.objectstore": "bluestore",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osd_id": "1",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.type": "block",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.vdo": "0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.with_tpm": "0"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             },
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "type": "block",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "vg_name": "ceph_vg1"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:         }
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:     ],
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:     "2": [
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:         {
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "devices": [
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "/dev/loop5"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             ],
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_name": "ceph_lv2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_size": "21470642176",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "name": "ceph_lv2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "tags": {
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.cluster_name": "ceph",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.crush_device_class": "",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.encrypted": "0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.objectstore": "bluestore",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osd_id": "2",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.type": "block",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.vdo": "0",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:                 "ceph.with_tpm": "0"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             },
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "type": "block",
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:             "vg_name": "ceph_vg2"
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:         }
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]:     ]
Dec 13 04:03:22 compute-0 xenodochial_mclaren[245261]: }
Dec 13 04:03:22 compute-0 systemd[1]: libpod-3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750.scope: Deactivated successfully.
Dec 13 04:03:22 compute-0 podman[245245]: 2025-12-13 04:03:22.034028358 +0000 UTC m=+0.445421042 container died 3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:03:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-b4cf6a224c29513ef6abeaa2f028e231735a05d7384371bf09d2f947803f0a5f-merged.mount: Deactivated successfully.
Dec 13 04:03:22 compute-0 podman[245245]: 2025-12-13 04:03:22.073203893 +0000 UTC m=+0.484596567 container remove 3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_mclaren, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:03:22 compute-0 systemd[1]: libpod-conmon-3d4f1f16e157bec18cac1d4a1e51ae2e585c531295fd7ac66f5fa633ba0de750.scope: Deactivated successfully.
Dec 13 04:03:22 compute-0 sudo[245169]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:22 compute-0 sudo[245280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:03:22 compute-0 sudo[245280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:22 compute-0 sudo[245280]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:22 compute-0 sudo[245305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:03:22 compute-0 sudo[245305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:22 compute-0 ceph-mon[75071]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:22 compute-0 podman[245342]: 2025-12-13 04:03:22.472518248 +0000 UTC m=+0.034136568 container create 2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:03:22 compute-0 systemd[1]: Started libpod-conmon-2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89.scope.
Dec 13 04:03:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:03:22 compute-0 podman[245342]: 2025-12-13 04:03:22.458331709 +0000 UTC m=+0.019950059 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:22 compute-0 podman[245342]: 2025-12-13 04:03:22.599355538 +0000 UTC m=+0.160973888 container init 2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 04:03:22 compute-0 podman[245342]: 2025-12-13 04:03:22.606108203 +0000 UTC m=+0.167726523 container start 2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:03:22 compute-0 vibrant_pascal[245358]: 167 167
Dec 13 04:03:22 compute-0 systemd[1]: libpod-2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89.scope: Deactivated successfully.
Dec 13 04:03:22 compute-0 podman[245342]: 2025-12-13 04:03:22.613412874 +0000 UTC m=+0.175031194 container attach 2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:03:22 compute-0 podman[245342]: 2025-12-13 04:03:22.61436496 +0000 UTC m=+0.175983280 container died 2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:03:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-8843f5c1a591a3318629f868ec8c75e21c636871d653f291ee8a4e2b80ee47e5-merged.mount: Deactivated successfully.
Dec 13 04:03:22 compute-0 podman[245342]: 2025-12-13 04:03:22.651337544 +0000 UTC m=+0.212955864 container remove 2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:03:22 compute-0 systemd[1]: libpod-conmon-2d6255fbd3da4f72380de0871528a03e9c7070b7727bec44e5f113c937db2c89.scope: Deactivated successfully.
Dec 13 04:03:22 compute-0 podman[245380]: 2025-12-13 04:03:22.809927186 +0000 UTC m=+0.044220125 container create f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:03:22 compute-0 systemd[1]: Started libpod-conmon-f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156.scope.
Dec 13 04:03:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb5ea6736258bcdd809efbe56cc6376c50bf763d3aa0adadba0a24f9dbabf15/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb5ea6736258bcdd809efbe56cc6376c50bf763d3aa0adadba0a24f9dbabf15/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb5ea6736258bcdd809efbe56cc6376c50bf763d3aa0adadba0a24f9dbabf15/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fb5ea6736258bcdd809efbe56cc6376c50bf763d3aa0adadba0a24f9dbabf15/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:03:22 compute-0 podman[245380]: 2025-12-13 04:03:22.788127777 +0000 UTC m=+0.022420716 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:03:22 compute-0 podman[245380]: 2025-12-13 04:03:22.890549187 +0000 UTC m=+0.124842156 container init f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:03:22 compute-0 podman[245380]: 2025-12-13 04:03:22.897350534 +0000 UTC m=+0.131643463 container start f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:03:22 compute-0 podman[245380]: 2025-12-13 04:03:22.900336286 +0000 UTC m=+0.134629215 container attach f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:03:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:23 compute-0 lvm[245475]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:03:23 compute-0 lvm[245475]: VG ceph_vg1 finished
Dec 13 04:03:23 compute-0 lvm[245474]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:03:23 compute-0 lvm[245474]: VG ceph_vg0 finished
Dec 13 04:03:23 compute-0 lvm[245477]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:03:23 compute-0 lvm[245477]: VG ceph_vg2 finished
Dec 13 04:03:23 compute-0 nervous_northcutt[245396]: {}
Dec 13 04:03:23 compute-0 systemd[1]: libpod-f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156.scope: Deactivated successfully.
Dec 13 04:03:23 compute-0 systemd[1]: libpod-f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156.scope: Consumed 1.662s CPU time.
Dec 13 04:03:23 compute-0 podman[245380]: 2025-12-13 04:03:23.927539418 +0000 UTC m=+1.161832357 container died f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fb5ea6736258bcdd809efbe56cc6376c50bf763d3aa0adadba0a24f9dbabf15-merged.mount: Deactivated successfully.
Dec 13 04:03:23 compute-0 podman[245380]: 2025-12-13 04:03:23.978107476 +0000 UTC m=+1.212400425 container remove f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_northcutt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:03:23 compute-0 systemd[1]: libpod-conmon-f21738f9661349872f929dd159f7d2a576f105e7f15e0eefd326d98ada236156.scope: Deactivated successfully.
Dec 13 04:03:24 compute-0 sudo[245305]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:03:24 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:03:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:03:24 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:03:24 compute-0 sudo[245490]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:03:24 compute-0 sudo[245490]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:03:24 compute-0 sudo[245490]: pam_unix(sudo:session): session closed for user root
Dec 13 04:03:24 compute-0 ceph-mon[75071]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:03:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:03:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.903252) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598604903343, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1662, "num_deletes": 507, "total_data_size": 2200930, "memory_usage": 2244144, "flush_reason": "Manual Compaction"}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598604915739, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 2157594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13499, "largest_seqno": 15160, "table_properties": {"data_size": 2150403, "index_size": 3685, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 17445, "raw_average_key_size": 18, "raw_value_size": 2133993, "raw_average_value_size": 2246, "num_data_blocks": 169, "num_entries": 950, "num_filter_entries": 950, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765598460, "oldest_key_time": 1765598460, "file_creation_time": 1765598604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 12544 microseconds, and 6374 cpu microseconds.
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.915801) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 2157594 bytes OK
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.915819) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.917774) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.917791) EVENT_LOG_v1 {"time_micros": 1765598604917787, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.917809) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 2192653, prev total WAL file size 2192653, number of live WAL files 2.
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.918583) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(2107KB)], [32(7620KB)]
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598604918636, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9960670, "oldest_snapshot_seqno": -1}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3938 keys, 7928743 bytes, temperature: kUnknown
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598604988218, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7928743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7900093, "index_size": 17724, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9861, "raw_key_size": 96207, "raw_average_key_size": 24, "raw_value_size": 7826495, "raw_average_value_size": 1987, "num_data_blocks": 752, "num_entries": 3938, "num_filter_entries": 3938, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765598604, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.988573) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7928743 bytes
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.990467) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.9 rd, 113.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 7.4 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(8.3) write-amplify(3.7) OK, records in: 4965, records dropped: 1027 output_compression: NoCompression
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.990493) EVENT_LOG_v1 {"time_micros": 1765598604990481, "job": 14, "event": "compaction_finished", "compaction_time_micros": 69709, "compaction_time_cpu_micros": 21432, "output_level": 6, "num_output_files": 1, "total_output_size": 7928743, "num_input_records": 4965, "num_output_records": 3938, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598604990908, "job": 14, "event": "table_file_deletion", "file_number": 34}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598604992239, "job": 14, "event": "table_file_deletion", "file_number": 32}
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.918413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.992320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.992326) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.992328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.992329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:24 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:03:24.992331) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:03:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:25 compute-0 ceph-mon[75071]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:26 compute-0 podman[245515]: 2025-12-13 04:03:26.913782939 +0000 UTC m=+0.063393590 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 04:03:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:28 compute-0 ceph-mon[75071]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:30 compute-0 ceph-mon[75071]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:32 compute-0 ceph-mon[75071]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:34 compute-0 ceph-mon[75071]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:03:35.076 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:03:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:03:35.077 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:03:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:03:35.077 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:03:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:35 compute-0 podman[245536]: 2025-12-13 04:03:35.925490377 +0000 UTC m=+0.077037865 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:03:36 compute-0 ceph-mon[75071]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Dec 13 04:03:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1701995678' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 04:03:37 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 04:03:37 compute-0 ceph-mgr[75360]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 04:03:37 compute-0 ceph-mgr[75360]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Dec 13 04:03:38 compute-0 ceph-mon[75071]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1701995678' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Dec 13 04:03:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:39 compute-0 ceph-mon[75071]: from='client.14342 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Dec 13 04:03:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:40 compute-0 ceph-mon[75071]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:03:40
Dec 13 04:03:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:03:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:03:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.meta', 'images', 'vms', 'backups', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'volumes', 'default.rgw.log']
Dec 13 04:03:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:03:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:42 compute-0 ceph-mon[75071]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:03:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:03:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:44 compute-0 ceph-mon[75071]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:03:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897387538' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:03:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:03:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2897387538' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:03:46 compute-0 ceph-mon[75071]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2897387538' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:03:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2897387538' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:03:46 compute-0 podman[245562]: 2025-12-13 04:03:46.895767154 +0000 UTC m=+0.047221137 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:03:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:48 compute-0 ceph-mon[75071]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:50 compute-0 ceph-mon[75071]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:03:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:03:52 compute-0 ceph-mon[75071]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:54 compute-0 ceph-mon[75071]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:03:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:55 compute-0 ceph-mon[75071]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:57 compute-0 podman[245581]: 2025-12-13 04:03:57.900133402 +0000 UTC m=+0.047732351 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Dec 13 04:03:58 compute-0 ceph-mon[75071]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:03:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:00 compute-0 ceph-mon[75071]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:02 compute-0 ceph-mon[75071]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:04 compute-0 ceph-mon[75071]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.623 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.642 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.643 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.643 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.643 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.643 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.643 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.643 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.644 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.717 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.718 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.718 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.718 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:04:04 compute-0 nova_compute[243704]: 2025-12-13 04:04:04.719 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:04:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/29078233' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.212 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.399 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.401 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.402 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.402 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:04:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/29078233' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.457 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.458 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:04:05 compute-0 nova_compute[243704]: 2025-12-13 04:04:05.483 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:04:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:04:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239382499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.004 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.011 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.023 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.024 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.024 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.258 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.259 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.259 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.259 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:04:06 compute-0 nova_compute[243704]: 2025-12-13 04:04:06.328 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:04:06 compute-0 ceph-mon[75071]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/239382499' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:04:06 compute-0 podman[245645]: 2025-12-13 04:04:06.938064564 +0000 UTC m=+0.085676864 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 04:04:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:08 compute-0 ceph-mon[75071]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:10 compute-0 ceph-mon[75071]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:12 compute-0 ceph-mon[75071]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:14 compute-0 ceph-mon[75071]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:04:15.000 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:04:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:04:15.001 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:04:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:04:15.002 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:04:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:16 compute-0 ceph-mon[75071]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:17 compute-0 podman[245672]: 2025-12-13 04:04:17.90786504 +0000 UTC m=+0.056316924 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 04:04:18 compute-0 ceph-mon[75071]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:20 compute-0 ceph-mon[75071]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:21 compute-0 ceph-mon[75071]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:24 compute-0 sudo[245692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:04:24 compute-0 sudo[245692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:24 compute-0 sudo[245692]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:24 compute-0 ceph-mon[75071]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:24 compute-0 sudo[245717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:04:24 compute-0 sudo[245717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:24 compute-0 sudo[245717]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:04:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:04:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:04:24 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:04:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:04:24 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:04:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:04:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:04:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:04:24 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:04:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:04:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:04:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:24 compute-0 sudo[245772]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:04:24 compute-0 sudo[245772]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:24 compute-0 sudo[245772]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:25 compute-0 sudo[245797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:04:25 compute-0 sudo[245797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:04:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:04:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:04:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:04:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:04:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:04:25 compute-0 podman[245833]: 2025-12-13 04:04:25.293250968 +0000 UTC m=+0.040491854 container create 29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bardeen, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:25 compute-0 systemd[1]: Started libpod-conmon-29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11.scope.
Dec 13 04:04:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:04:25 compute-0 podman[245833]: 2025-12-13 04:04:25.367754197 +0000 UTC m=+0.114995083 container init 29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bardeen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:25 compute-0 podman[245833]: 2025-12-13 04:04:25.273688635 +0000 UTC m=+0.020929541 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:25 compute-0 podman[245833]: 2025-12-13 04:04:25.375130327 +0000 UTC m=+0.122371213 container start 29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec 13 04:04:25 compute-0 podman[245833]: 2025-12-13 04:04:25.378133439 +0000 UTC m=+0.125374335 container attach 29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bardeen, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:04:25 compute-0 gracious_bardeen[245849]: 167 167
Dec 13 04:04:25 compute-0 systemd[1]: libpod-29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11.scope: Deactivated successfully.
Dec 13 04:04:25 compute-0 podman[245833]: 2025-12-13 04:04:25.381408489 +0000 UTC m=+0.128649375 container died 29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c5493c6016c89b7cc7b43a0f66bd3b69007e8a068dac4a9b8a3eb1d814bad3fb-merged.mount: Deactivated successfully.
Dec 13 04:04:25 compute-0 podman[245833]: 2025-12-13 04:04:25.419375912 +0000 UTC m=+0.166616798 container remove 29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:04:25 compute-0 systemd[1]: libpod-conmon-29eb7029f999fca8dec3b416a7700faa7a2faa0c26ab68ac3921cd828e12bb11.scope: Deactivated successfully.
Dec 13 04:04:25 compute-0 podman[245872]: 2025-12-13 04:04:25.580568371 +0000 UTC m=+0.043036353 container create 475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_napier, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:04:25 compute-0 systemd[1]: Started libpod-conmon-475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445.scope.
Dec 13 04:04:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b793936f354fdb6952726f7ebcd60d60f48e36fae542ec4a49384bd92c0bbea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b793936f354fdb6952726f7ebcd60d60f48e36fae542ec4a49384bd92c0bbea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b793936f354fdb6952726f7ebcd60d60f48e36fae542ec4a49384bd92c0bbea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b793936f354fdb6952726f7ebcd60d60f48e36fae542ec4a49384bd92c0bbea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b793936f354fdb6952726f7ebcd60d60f48e36fae542ec4a49384bd92c0bbea/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:25 compute-0 podman[245872]: 2025-12-13 04:04:25.561115452 +0000 UTC m=+0.023583454 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:25 compute-0 podman[245872]: 2025-12-13 04:04:25.668263329 +0000 UTC m=+0.130731321 container init 475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_napier, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:04:25 compute-0 podman[245872]: 2025-12-13 04:04:25.67822022 +0000 UTC m=+0.140688192 container start 475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:04:25 compute-0 podman[245872]: 2025-12-13 04:04:25.681757807 +0000 UTC m=+0.144225799 container attach 475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Dec 13 04:04:26 compute-0 suspicious_napier[245888]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:04:26 compute-0 suspicious_napier[245888]: --> All data devices are unavailable
Dec 13 04:04:26 compute-0 systemd[1]: libpod-475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445.scope: Deactivated successfully.
Dec 13 04:04:26 compute-0 podman[245908]: 2025-12-13 04:04:26.182280896 +0000 UTC m=+0.026727110 container died 475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_napier, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b793936f354fdb6952726f7ebcd60d60f48e36fae542ec4a49384bd92c0bbea-merged.mount: Deactivated successfully.
Dec 13 04:04:26 compute-0 podman[245908]: 2025-12-13 04:04:26.220032403 +0000 UTC m=+0.064478597 container remove 475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_napier, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:04:26 compute-0 systemd[1]: libpod-conmon-475eff145e27f237c1f85b18c9b324bc855c51971c00505fedd3cadb25fe1445.scope: Deactivated successfully.
Dec 13 04:04:26 compute-0 ceph-mon[75071]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:26 compute-0 sudo[245797]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:26 compute-0 sudo[245923]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:04:26 compute-0 sudo[245923]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:26 compute-0 sudo[245923]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:26 compute-0 sudo[245948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:04:26 compute-0 sudo[245948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:26 compute-0 podman[245985]: 2025-12-13 04:04:26.634744005 +0000 UTC m=+0.037320487 container create e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:04:26 compute-0 systemd[1]: Started libpod-conmon-e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece.scope.
Dec 13 04:04:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:04:26 compute-0 podman[245985]: 2025-12-13 04:04:26.709355007 +0000 UTC m=+0.111931499 container init e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:04:26 compute-0 podman[245985]: 2025-12-13 04:04:26.619755537 +0000 UTC m=+0.022332039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:26 compute-0 podman[245985]: 2025-12-13 04:04:26.715933166 +0000 UTC m=+0.118509648 container start e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:26 compute-0 compassionate_hamilton[246001]: 167 167
Dec 13 04:04:26 compute-0 podman[245985]: 2025-12-13 04:04:26.719271387 +0000 UTC m=+0.121847899 container attach e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:04:26 compute-0 systemd[1]: libpod-e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece.scope: Deactivated successfully.
Dec 13 04:04:26 compute-0 podman[245985]: 2025-12-13 04:04:26.720102749 +0000 UTC m=+0.122679231 container died e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-ba991ae50285634154b635f9c5cd01d291144da0957feae279399eae0269446e-merged.mount: Deactivated successfully.
Dec 13 04:04:26 compute-0 podman[245985]: 2025-12-13 04:04:26.75464779 +0000 UTC m=+0.157224272 container remove e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:04:26 compute-0 systemd[1]: libpod-conmon-e063b684b5e64c0eb3c4e62aea647fc803bf27df385d8f42d07c398319e4fece.scope: Deactivated successfully.
Dec 13 04:04:26 compute-0 podman[246025]: 2025-12-13 04:04:26.905185639 +0000 UTC m=+0.038792357 container create 848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:04:26 compute-0 systemd[1]: Started libpod-conmon-848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9.scope.
Dec 13 04:04:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6f69d49b2f4b2f2fe4d138fde0989a38944f97c4297217d0fb562e33df07d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6f69d49b2f4b2f2fe4d138fde0989a38944f97c4297217d0fb562e33df07d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6f69d49b2f4b2f2fe4d138fde0989a38944f97c4297217d0fb562e33df07d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae6f69d49b2f4b2f2fe4d138fde0989a38944f97c4297217d0fb562e33df07d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:26 compute-0 podman[246025]: 2025-12-13 04:04:26.981176178 +0000 UTC m=+0.114782786 container init 848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:04:26 compute-0 podman[246025]: 2025-12-13 04:04:26.888954907 +0000 UTC m=+0.022561525 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:26 compute-0 podman[246025]: 2025-12-13 04:04:26.989510015 +0000 UTC m=+0.123116613 container start 848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:04:26 compute-0 podman[246025]: 2025-12-13 04:04:26.992644281 +0000 UTC m=+0.126250929 container attach 848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mirzakhani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:04:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]: {
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:     "0": [
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:         {
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "devices": [
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "/dev/loop3"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             ],
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_name": "ceph_lv0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_size": "21470642176",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "name": "ceph_lv0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "tags": {
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cluster_name": "ceph",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.crush_device_class": "",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.encrypted": "0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.objectstore": "bluestore",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osd_id": "0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.type": "block",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.vdo": "0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.with_tpm": "0"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             },
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "type": "block",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "vg_name": "ceph_vg0"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:         }
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:     ],
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:     "1": [
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:         {
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "devices": [
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "/dev/loop4"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             ],
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_name": "ceph_lv1",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_size": "21470642176",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "name": "ceph_lv1",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "tags": {
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cluster_name": "ceph",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.crush_device_class": "",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.encrypted": "0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.objectstore": "bluestore",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osd_id": "1",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.type": "block",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.vdo": "0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.with_tpm": "0"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             },
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "type": "block",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "vg_name": "ceph_vg1"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:         }
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:     ],
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:     "2": [
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:         {
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "devices": [
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "/dev/loop5"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             ],
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_name": "ceph_lv2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_size": "21470642176",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "name": "ceph_lv2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "tags": {
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.cluster_name": "ceph",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.crush_device_class": "",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.encrypted": "0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.objectstore": "bluestore",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osd_id": "2",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.type": "block",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.vdo": "0",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:                 "ceph.with_tpm": "0"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             },
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "type": "block",
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:             "vg_name": "ceph_vg2"
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:         }
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]:     ]
Dec 13 04:04:27 compute-0 friendly_mirzakhani[246042]: }
Dec 13 04:04:27 compute-0 systemd[1]: libpod-848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9.scope: Deactivated successfully.
Dec 13 04:04:27 compute-0 podman[246025]: 2025-12-13 04:04:27.288906238 +0000 UTC m=+0.422512836 container died 848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mirzakhani, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-ae6f69d49b2f4b2f2fe4d138fde0989a38944f97c4297217d0fb562e33df07d7-merged.mount: Deactivated successfully.
Dec 13 04:04:27 compute-0 podman[246025]: 2025-12-13 04:04:27.326725357 +0000 UTC m=+0.460331955 container remove 848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:04:27 compute-0 systemd[1]: libpod-conmon-848b208185416971015ba1300de0a5a7da2c197a35628590e409066209e1c0f9.scope: Deactivated successfully.
Dec 13 04:04:27 compute-0 sudo[245948]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:27 compute-0 sudo[246064]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:04:27 compute-0 sudo[246064]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:27 compute-0 sudo[246064]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:27 compute-0 sudo[246089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:04:27 compute-0 sudo[246089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:27 compute-0 podman[246126]: 2025-12-13 04:04:27.732248529 +0000 UTC m=+0.039945848 container create 6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sutherland, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:04:27 compute-0 systemd[1]: Started libpod-conmon-6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb.scope.
Dec 13 04:04:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:04:27 compute-0 podman[246126]: 2025-12-13 04:04:27.713756265 +0000 UTC m=+0.021453614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:27 compute-0 podman[246126]: 2025-12-13 04:04:27.811194779 +0000 UTC m=+0.118892118 container init 6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:04:27 compute-0 podman[246126]: 2025-12-13 04:04:27.817771057 +0000 UTC m=+0.125468386 container start 6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sutherland, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:04:27 compute-0 podman[246126]: 2025-12-13 04:04:27.821213022 +0000 UTC m=+0.128910371 container attach 6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:04:27 compute-0 pensive_sutherland[246142]: 167 167
Dec 13 04:04:27 compute-0 systemd[1]: libpod-6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb.scope: Deactivated successfully.
Dec 13 04:04:27 compute-0 podman[246126]: 2025-12-13 04:04:27.822577759 +0000 UTC m=+0.130275088 container died 6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-0a76b9c417c411baca52c21dc270b17d8168e5bb0e4d2350957d79382aaf4f17-merged.mount: Deactivated successfully.
Dec 13 04:04:27 compute-0 podman[246126]: 2025-12-13 04:04:27.858835546 +0000 UTC m=+0.166532875 container remove 6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_sutherland, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:04:27 compute-0 systemd[1]: libpod-conmon-6df397988adceb02dbdd67f52e6e9b76f159cc864af6a9f3cf556301dc18a9fb.scope: Deactivated successfully.
Dec 13 04:04:28 compute-0 podman[246165]: 2025-12-13 04:04:28.010617359 +0000 UTC m=+0.037898453 container create 48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:04:28 compute-0 systemd[1]: Started libpod-conmon-48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe.scope.
Dec 13 04:04:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eac2b51bd509271bb5f7f08de223276c4eff35e1def31a3929ec872c804185e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eac2b51bd509271bb5f7f08de223276c4eff35e1def31a3929ec872c804185e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eac2b51bd509271bb5f7f08de223276c4eff35e1def31a3929ec872c804185e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4eac2b51bd509271bb5f7f08de223276c4eff35e1def31a3929ec872c804185e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:04:28 compute-0 podman[246165]: 2025-12-13 04:04:27.9941575 +0000 UTC m=+0.021438614 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:04:28 compute-0 podman[246165]: 2025-12-13 04:04:28.099824528 +0000 UTC m=+0.127105652 container init 48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hellman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:04:28 compute-0 podman[246165]: 2025-12-13 04:04:28.107434575 +0000 UTC m=+0.134715669 container start 48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hellman, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:04:28 compute-0 podman[246165]: 2025-12-13 04:04:28.110322604 +0000 UTC m=+0.137603698 container attach 48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:04:28 compute-0 podman[246179]: 2025-12-13 04:04:28.117516319 +0000 UTC m=+0.063987923 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:04:28 compute-0 ceph-mon[75071]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:28 compute-0 lvm[246280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:04:28 compute-0 lvm[246281]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:04:28 compute-0 lvm[246280]: VG ceph_vg0 finished
Dec 13 04:04:28 compute-0 lvm[246281]: VG ceph_vg1 finished
Dec 13 04:04:28 compute-0 lvm[246283]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:04:28 compute-0 lvm[246283]: VG ceph_vg2 finished
Dec 13 04:04:28 compute-0 lvm[246284]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:04:28 compute-0 lvm[246284]: VG ceph_vg0 finished
Dec 13 04:04:28 compute-0 jolly_hellman[246182]: {}
Dec 13 04:04:28 compute-0 systemd[1]: libpod-48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe.scope: Deactivated successfully.
Dec 13 04:04:28 compute-0 podman[246165]: 2025-12-13 04:04:28.904389595 +0000 UTC m=+0.931670709 container died 48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:04:28 compute-0 systemd[1]: libpod-48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe.scope: Consumed 1.266s CPU time.
Dec 13 04:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4eac2b51bd509271bb5f7f08de223276c4eff35e1def31a3929ec872c804185e-merged.mount: Deactivated successfully.
Dec 13 04:04:28 compute-0 podman[246165]: 2025-12-13 04:04:28.943794448 +0000 UTC m=+0.971075542 container remove 48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:04:28 compute-0 systemd[1]: libpod-conmon-48974a61215f37986934e8a3171932305e764a0900552a52541131e8d05044fe.scope: Deactivated successfully.
Dec 13 04:04:28 compute-0 sudo[246089]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:04:28 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:04:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:04:28 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:04:29 compute-0 sudo[246297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:04:29 compute-0 sudo[246297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:04:29 compute-0 sudo[246297]: pam_unix(sudo:session): session closed for user root
Dec 13 04:04:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:29 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:04:29 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:04:29 compute-0 ceph-mon[75071]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:32 compute-0 ceph-mon[75071]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:34 compute-0 ceph-mon[75071]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:04:35.077 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:04:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:04:35.078 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:04:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:04:35.078 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:04:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:36 compute-0 ceph-mon[75071]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:37 compute-0 podman[246322]: 2025-12-13 04:04:37.954108188 +0000 UTC m=+0.099485501 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:04:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:04:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5911 writes, 25K keys, 5911 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5911 writes, 1029 syncs, 5.74 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb9a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55a1efdb98d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 7.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 04:04:38 compute-0 ceph-mon[75071]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:40 compute-0 ceph-mon[75071]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:04:40
Dec 13 04:04:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:04:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:04:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'volumes', 'default.rgw.log', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups']
Dec 13 04:04:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:04:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:42 compute-0 ceph-mon[75071]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:04:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:04:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:44 compute-0 ceph-mon[75071]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:04:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Cumulative writes: 8473 writes, 34K keys, 8473 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 8473 writes, 1804 syncs, 4.70 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      1.29              0.00         1    1.293       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 1.3 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.059       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.008       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcfa30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.1 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1201.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5637d3bcf8d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 04:04:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:04:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807903311' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:04:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:04:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2807903311' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:04:46 compute-0 ceph-mon[75071]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2807903311' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:04:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2807903311' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:04:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:47 compute-0 ceph-mon[75071]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:48 compute-0 podman[246349]: 2025-12-13 04:04:48.896707505 +0000 UTC m=+0.045601303 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Dec 13 04:04:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:04:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Cumulative writes: 5674 writes, 24K keys, 5674 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5674 writes, 907 syncs, 6.26 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.43              0.00         1    0.430       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.4 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e264061a30#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.6 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55e2640618d0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 3.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Dec 13 04:04:50 compute-0 ceph-mon[75071]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:51 compute-0 ceph-mon[75071]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:04:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:04:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:54 compute-0 ceph-mon[75071]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:04:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:55 compute-0 ceph-mon[75071]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:57 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Check health
Dec 13 04:04:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:58 compute-0 ceph-mon[75071]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:58 compute-0 podman[246368]: 2025-12-13 04:04:58.907295912 +0000 UTC m=+0.054132535 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:04:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:04:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:00 compute-0 ceph-mon[75071]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:02 compute-0 ceph-mon[75071]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:03 compute-0 ceph-mon[75071]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.878 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.907 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.908 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.908 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.908 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:05:03 compute-0 nova_compute[243704]: 2025-12-13 04:05:03.909 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:05:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:05:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3306203795' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.485 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.687 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.688 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5140MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.689 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.689 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:05:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3306203795' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.843 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.843 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:05:04 compute-0 nova_compute[243704]: 2025-12-13 04:05:04.858 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:05:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:05:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1060469801' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:05 compute-0 nova_compute[243704]: 2025-12-13 04:05:05.394 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:05:05 compute-0 nova_compute[243704]: 2025-12-13 04:05:05.401 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:05:05 compute-0 nova_compute[243704]: 2025-12-13 04:05:05.414 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:05:05 compute-0 nova_compute[243704]: 2025-12-13 04:05:05.416 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:05:05 compute-0 nova_compute[243704]: 2025-12-13 04:05:05.416 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.727s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:05:05 compute-0 ceph-mon[75071]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1060469801' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.411 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.411 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.412 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.412 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.423 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.424 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.424 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.424 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:06 compute-0 nova_compute[243704]: 2025-12-13 04:05:06.424 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:05:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:08 compute-0 ceph-mon[75071]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:08 compute-0 podman[246433]: 2025-12-13 04:05:08.936567138 +0000 UTC m=+0.078891989 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:05:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:10 compute-0 ceph-mon[75071]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:12 compute-0 ceph-mon[75071]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:14 compute-0 ceph-mon[75071]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:16 compute-0 ceph-mon[75071]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:18 compute-0 ceph-mon[75071]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:19 compute-0 podman[246459]: 2025-12-13 04:05:19.896839717 +0000 UTC m=+0.046459947 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:05:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:20 compute-0 ceph-mon[75071]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:21 compute-0 ceph-mon[75071]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:24 compute-0 ceph-mon[75071]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:26 compute-0 ceph-mon[75071]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:28 compute-0 ceph-mon[75071]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:29 compute-0 sudo[246478]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:05:29 compute-0 sudo[246478]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:29 compute-0 sudo[246478]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:29 compute-0 podman[246502]: 2025-12-13 04:05:29.263131262 +0000 UTC m=+0.082479236 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd)
Dec 13 04:05:29 compute-0 sudo[246509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 04:05:29 compute-0 sudo[246509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:29 compute-0 sudo[246509]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:05:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:29 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:05:30 compute-0 ceph-mon[75071]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:30 compute-0 sudo[246568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:05:30 compute-0 sudo[246568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:30 compute-0 sudo[246568]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:30 compute-0 sudo[246593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:05:30 compute-0 sudo[246593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:31 compute-0 sudo[246593]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:05:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:05:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:05:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:05:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:05:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:05:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:05:31 compute-0 sudo[246650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:05:31 compute-0 sudo[246650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:31 compute-0 sudo[246650]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:31 compute-0 sudo[246675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:05:31 compute-0 sudo[246675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:05:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:05:31 compute-0 podman[246713]: 2025-12-13 04:05:31.518126373 +0000 UTC m=+0.025657379 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:31 compute-0 podman[246713]: 2025-12-13 04:05:31.881953271 +0000 UTC m=+0.389484297 container create 5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_robinson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:05:31 compute-0 systemd[1]: Started libpod-conmon-5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6.scope.
Dec 13 04:05:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:05:31 compute-0 podman[246713]: 2025-12-13 04:05:31.964754925 +0000 UTC m=+0.472285941 container init 5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_robinson, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:05:31 compute-0 podman[246713]: 2025-12-13 04:05:31.972371642 +0000 UTC m=+0.479902628 container start 5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:05:31 compute-0 podman[246713]: 2025-12-13 04:05:31.975532048 +0000 UTC m=+0.483063034 container attach 5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_robinson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:05:31 compute-0 systemd[1]: libpod-5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6.scope: Deactivated successfully.
Dec 13 04:05:31 compute-0 heuristic_robinson[246729]: 167 167
Dec 13 04:05:31 compute-0 podman[246713]: 2025-12-13 04:05:31.979173508 +0000 UTC m=+0.486704494 container died 5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:05:31 compute-0 conmon[246729]: conmon 5386a0e34caf5d292c52 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6.scope/container/memory.events
Dec 13 04:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-49820a018ba16f354b6901b1eabcac52c3a834f90d9be632f6a224c5517a2c40-merged.mount: Deactivated successfully.
Dec 13 04:05:32 compute-0 podman[246713]: 2025-12-13 04:05:32.014934802 +0000 UTC m=+0.522465788 container remove 5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=heuristic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:05:32 compute-0 systemd[1]: libpod-conmon-5386a0e34caf5d292c52a0c4c340a8065cdc72a8680dd6b51afb62f457cdfaa6.scope: Deactivated successfully.
Dec 13 04:05:32 compute-0 podman[246753]: 2025-12-13 04:05:32.160931067 +0000 UTC m=+0.036262408 container create 6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:05:32 compute-0 systemd[1]: Started libpod-conmon-6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e.scope.
Dec 13 04:05:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8372ad9a05ac2a461dd8d1975ccaf2a252d38a6a4ec9f695a36707e01f6bb3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8372ad9a05ac2a461dd8d1975ccaf2a252d38a6a4ec9f695a36707e01f6bb3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8372ad9a05ac2a461dd8d1975ccaf2a252d38a6a4ec9f695a36707e01f6bb3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8372ad9a05ac2a461dd8d1975ccaf2a252d38a6a4ec9f695a36707e01f6bb3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f8372ad9a05ac2a461dd8d1975ccaf2a252d38a6a4ec9f695a36707e01f6bb3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:32 compute-0 podman[246753]: 2025-12-13 04:05:32.234010387 +0000 UTC m=+0.109341758 container init 6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_roentgen, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:05:32 compute-0 podman[246753]: 2025-12-13 04:05:32.146292468 +0000 UTC m=+0.021623829 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:32 compute-0 podman[246753]: 2025-12-13 04:05:32.24258602 +0000 UTC m=+0.117917361 container start 6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_roentgen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:05:32 compute-0 podman[246753]: 2025-12-13 04:05:32.245691535 +0000 UTC m=+0.121022896 container attach 6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_roentgen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:05:32 compute-0 ceph-mon[75071]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:32 compute-0 angry_roentgen[246769]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:05:32 compute-0 angry_roentgen[246769]: --> All data devices are unavailable
Dec 13 04:05:32 compute-0 systemd[1]: libpod-6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e.scope: Deactivated successfully.
Dec 13 04:05:32 compute-0 podman[246789]: 2025-12-13 04:05:32.745262767 +0000 UTC m=+0.029995807 container died 6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_roentgen, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:05:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-1f8372ad9a05ac2a461dd8d1975ccaf2a252d38a6a4ec9f695a36707e01f6bb3-merged.mount: Deactivated successfully.
Dec 13 04:05:32 compute-0 podman[246789]: 2025-12-13 04:05:32.790007956 +0000 UTC m=+0.074740966 container remove 6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_roentgen, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:05:32 compute-0 systemd[1]: libpod-conmon-6ab8fbec0824c70ffebc2be64f2d95883b9529c59dcc2f3bd33fb22d7726d18e.scope: Deactivated successfully.
Dec 13 04:05:32 compute-0 sudo[246675]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:32 compute-0 sudo[246804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:05:32 compute-0 sudo[246804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:32 compute-0 sudo[246804]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:32 compute-0 sudo[246829]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:05:32 compute-0 sudo[246829]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:33 compute-0 podman[246867]: 2025-12-13 04:05:33.20881777 +0000 UTC m=+0.019473201 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:33 compute-0 podman[246867]: 2025-12-13 04:05:33.434153565 +0000 UTC m=+0.244808966 container create a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:33 compute-0 systemd[1]: Started libpod-conmon-a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764.scope.
Dec 13 04:05:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:05:33 compute-0 podman[246867]: 2025-12-13 04:05:33.523740464 +0000 UTC m=+0.334395885 container init a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cori, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:05:33 compute-0 podman[246867]: 2025-12-13 04:05:33.529624224 +0000 UTC m=+0.340279625 container start a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cori, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:33 compute-0 podman[246867]: 2025-12-13 04:05:33.532961225 +0000 UTC m=+0.343616626 container attach a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cori, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:05:33 compute-0 systemd[1]: libpod-a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764.scope: Deactivated successfully.
Dec 13 04:05:33 compute-0 festive_cori[246883]: 167 167
Dec 13 04:05:33 compute-0 conmon[246883]: conmon a858050bc2b826e67110 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764.scope/container/memory.events
Dec 13 04:05:33 compute-0 podman[246867]: 2025-12-13 04:05:33.535730201 +0000 UTC m=+0.346385592 container died a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cori, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:05:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-243aa56a437dd609902061630c7f9d4175f40a26f4e5c6e793bfdb7608e7adb4-merged.mount: Deactivated successfully.
Dec 13 04:05:33 compute-0 podman[246867]: 2025-12-13 04:05:33.571012722 +0000 UTC m=+0.381668123 container remove a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cori, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:05:33 compute-0 systemd[1]: libpod-conmon-a858050bc2b826e671100c1ed17b02939b613cf4756e2567816eb97614f3d764.scope: Deactivated successfully.
Dec 13 04:05:33 compute-0 podman[246905]: 2025-12-13 04:05:33.7531035 +0000 UTC m=+0.044600916 container create 3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:05:33 compute-0 systemd[1]: Started libpod-conmon-3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60.scope.
Dec 13 04:05:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ff737ecd5df678594e9cce3ff2394ebf61172c5295fc09a2850ef8997b900/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ff737ecd5df678594e9cce3ff2394ebf61172c5295fc09a2850ef8997b900/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ff737ecd5df678594e9cce3ff2394ebf61172c5295fc09a2850ef8997b900/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/935ff737ecd5df678594e9cce3ff2394ebf61172c5295fc09a2850ef8997b900/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:33 compute-0 podman[246905]: 2025-12-13 04:05:33.825799899 +0000 UTC m=+0.117297345 container init 3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:05:33 compute-0 podman[246905]: 2025-12-13 04:05:33.73181668 +0000 UTC m=+0.023314106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:33 compute-0 podman[246905]: 2025-12-13 04:05:33.833779577 +0000 UTC m=+0.125276973 container start 3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:33 compute-0 podman[246905]: 2025-12-13 04:05:33.838796153 +0000 UTC m=+0.130293559 container attach 3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:05:34 compute-0 determined_leakey[246921]: {
Dec 13 04:05:34 compute-0 determined_leakey[246921]:     "0": [
Dec 13 04:05:34 compute-0 determined_leakey[246921]:         {
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "devices": [
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "/dev/loop3"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             ],
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_name": "ceph_lv0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_size": "21470642176",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "name": "ceph_lv0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "tags": {
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cluster_name": "ceph",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.crush_device_class": "",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.encrypted": "0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.objectstore": "bluestore",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osd_id": "0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.type": "block",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.vdo": "0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.with_tpm": "0"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             },
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "type": "block",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "vg_name": "ceph_vg0"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:         }
Dec 13 04:05:34 compute-0 determined_leakey[246921]:     ],
Dec 13 04:05:34 compute-0 determined_leakey[246921]:     "1": [
Dec 13 04:05:34 compute-0 determined_leakey[246921]:         {
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "devices": [
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "/dev/loop4"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             ],
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_name": "ceph_lv1",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_size": "21470642176",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "name": "ceph_lv1",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "tags": {
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cluster_name": "ceph",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.crush_device_class": "",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.encrypted": "0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.objectstore": "bluestore",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osd_id": "1",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.type": "block",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.vdo": "0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.with_tpm": "0"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             },
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "type": "block",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "vg_name": "ceph_vg1"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:         }
Dec 13 04:05:34 compute-0 determined_leakey[246921]:     ],
Dec 13 04:05:34 compute-0 determined_leakey[246921]:     "2": [
Dec 13 04:05:34 compute-0 determined_leakey[246921]:         {
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "devices": [
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "/dev/loop5"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             ],
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_name": "ceph_lv2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_size": "21470642176",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "name": "ceph_lv2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "tags": {
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.cluster_name": "ceph",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.crush_device_class": "",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.encrypted": "0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.objectstore": "bluestore",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osd_id": "2",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.type": "block",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.vdo": "0",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:                 "ceph.with_tpm": "0"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             },
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "type": "block",
Dec 13 04:05:34 compute-0 determined_leakey[246921]:             "vg_name": "ceph_vg2"
Dec 13 04:05:34 compute-0 determined_leakey[246921]:         }
Dec 13 04:05:34 compute-0 determined_leakey[246921]:     ]
Dec 13 04:05:34 compute-0 determined_leakey[246921]: }
Dec 13 04:05:34 compute-0 systemd[1]: libpod-3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60.scope: Deactivated successfully.
Dec 13 04:05:34 compute-0 podman[246905]: 2025-12-13 04:05:34.102412611 +0000 UTC m=+0.393910017 container died 3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-935ff737ecd5df678594e9cce3ff2394ebf61172c5295fc09a2850ef8997b900-merged.mount: Deactivated successfully.
Dec 13 04:05:34 compute-0 podman[246905]: 2025-12-13 04:05:34.140822637 +0000 UTC m=+0.432320033 container remove 3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_leakey, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:05:34 compute-0 systemd[1]: libpod-conmon-3a58fc8a16fa1136ba1abe38944c34330a3b540e0b805f7b4c7d367cf346dc60.scope: Deactivated successfully.
Dec 13 04:05:34 compute-0 sudo[246829]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:34 compute-0 sudo[246941]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:05:34 compute-0 sudo[246941]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:34 compute-0 sudo[246941]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:34 compute-0 sudo[246966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:05:34 compute-0 sudo[246966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:34 compute-0 ceph-mon[75071]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.462319) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598734462399, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1264, "num_deletes": 251, "total_data_size": 1959900, "memory_usage": 1992656, "flush_reason": "Manual Compaction"}
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598734473489, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 1930503, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 15161, "largest_seqno": 16424, "table_properties": {"data_size": 1924540, "index_size": 3294, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 12515, "raw_average_key_size": 19, "raw_value_size": 1912527, "raw_average_value_size": 2997, "num_data_blocks": 151, "num_entries": 638, "num_filter_entries": 638, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765598605, "oldest_key_time": 1765598605, "file_creation_time": 1765598734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 11215 microseconds, and 5331 cpu microseconds.
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.473540) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 1930503 bytes OK
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.473562) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.475082) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.475101) EVENT_LOG_v1 {"time_micros": 1765598734475095, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.475120) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1954185, prev total WAL file size 1954185, number of live WAL files 2.
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.475821) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(1885KB)], [35(7742KB)]
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598734475891, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9859246, "oldest_snapshot_seqno": -1}
Dec 13 04:05:34 compute-0 podman[247003]: 2025-12-13 04:05:34.543780679 +0000 UTC m=+0.020931031 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4062 keys, 8036084 bytes, temperature: kUnknown
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598734822091, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 8036084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8006588, "index_size": 18250, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 99295, "raw_average_key_size": 24, "raw_value_size": 7930734, "raw_average_value_size": 1952, "num_data_blocks": 772, "num_entries": 4062, "num_filter_entries": 4062, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765598734, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.822371) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 8036084 bytes
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.824961) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 28.5 rd, 23.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.6 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(9.3) write-amplify(4.2) OK, records in: 4576, records dropped: 514 output_compression: NoCompression
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.825000) EVENT_LOG_v1 {"time_micros": 1765598734824984, "job": 16, "event": "compaction_finished", "compaction_time_micros": 346318, "compaction_time_cpu_micros": 21227, "output_level": 6, "num_output_files": 1, "total_output_size": 8036084, "num_input_records": 4576, "num_output_records": 4062, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598734825456, "job": 16, "event": "table_file_deletion", "file_number": 37}
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598734826735, "job": 16, "event": "table_file_deletion", "file_number": 35}
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.475747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.826775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.826779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.826782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.826783) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:05:34.826784) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:05:34 compute-0 podman[247003]: 2025-12-13 04:05:34.826878607 +0000 UTC m=+0.304028939 container create 528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:05:34 compute-0 systemd[1]: Started libpod-conmon-528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e.scope.
Dec 13 04:05:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:05:34 compute-0 podman[247003]: 2025-12-13 04:05:34.913062104 +0000 UTC m=+0.390212486 container init 528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:05:34 compute-0 podman[247003]: 2025-12-13 04:05:34.920731983 +0000 UTC m=+0.397882305 container start 528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:05:34 compute-0 podman[247003]: 2025-12-13 04:05:34.924529116 +0000 UTC m=+0.401679498 container attach 528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:05:34 compute-0 competent_mcclintock[247019]: 167 167
Dec 13 04:05:34 compute-0 systemd[1]: libpod-528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e.scope: Deactivated successfully.
Dec 13 04:05:34 compute-0 podman[247003]: 2025-12-13 04:05:34.925525954 +0000 UTC m=+0.402676286 container died 528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:05:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a1011129df50ea64ae9d67559f80175dea186e01637db11893adc4ac5c195af-merged.mount: Deactivated successfully.
Dec 13 04:05:34 compute-0 podman[247003]: 2025-12-13 04:05:34.958175572 +0000 UTC m=+0.435325904 container remove 528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mcclintock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:05:34 compute-0 systemd[1]: libpod-conmon-528b39c819236d659c9e8a8fd83a9f7845d74cf233b70ce9998a712378ce2e0e.scope: Deactivated successfully.
Dec 13 04:05:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:05:35.078 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:05:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:05:35.080 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:05:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:05:35.080 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:05:35 compute-0 podman[247042]: 2025-12-13 04:05:35.103352705 +0000 UTC m=+0.042452106 container create 2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sammet, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:05:35 compute-0 systemd[1]: Started libpod-conmon-2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b.scope.
Dec 13 04:05:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b385f2778f1c2c3ae75d85bec35793aa74e24cfc358ffa7ea99cd4363351a9f5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b385f2778f1c2c3ae75d85bec35793aa74e24cfc358ffa7ea99cd4363351a9f5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b385f2778f1c2c3ae75d85bec35793aa74e24cfc358ffa7ea99cd4363351a9f5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b385f2778f1c2c3ae75d85bec35793aa74e24cfc358ffa7ea99cd4363351a9f5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:05:35 compute-0 podman[247042]: 2025-12-13 04:05:35.084730108 +0000 UTC m=+0.023829539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:05:35 compute-0 podman[247042]: 2025-12-13 04:05:35.188394921 +0000 UTC m=+0.127494322 container init 2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:05:35 compute-0 podman[247042]: 2025-12-13 04:05:35.199648557 +0000 UTC m=+0.138747938 container start 2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sammet, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:05:35 compute-0 podman[247042]: 2025-12-13 04:05:35.203373129 +0000 UTC m=+0.142472530 container attach 2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:05:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:35 compute-0 lvm[247136]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:05:35 compute-0 lvm[247137]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:05:35 compute-0 lvm[247136]: VG ceph_vg0 finished
Dec 13 04:05:35 compute-0 lvm[247137]: VG ceph_vg1 finished
Dec 13 04:05:35 compute-0 lvm[247139]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:05:35 compute-0 lvm[247139]: VG ceph_vg2 finished
Dec 13 04:05:35 compute-0 great_sammet[247058]: {}
Dec 13 04:05:35 compute-0 systemd[1]: libpod-2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b.scope: Deactivated successfully.
Dec 13 04:05:35 compute-0 systemd[1]: libpod-2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b.scope: Consumed 1.301s CPU time.
Dec 13 04:05:35 compute-0 podman[247042]: 2025-12-13 04:05:35.986256276 +0000 UTC m=+0.925355687 container died 2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sammet, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:05:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:38 compute-0 ceph-mon[75071]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b385f2778f1c2c3ae75d85bec35793aa74e24cfc358ffa7ea99cd4363351a9f5-merged.mount: Deactivated successfully.
Dec 13 04:05:38 compute-0 podman[247042]: 2025-12-13 04:05:38.837877512 +0000 UTC m=+3.776976913 container remove 2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_sammet, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:05:38 compute-0 systemd[1]: libpod-conmon-2e7a8384ceecf57b52df7dc756809246482266e7b64d097d168228b5d995a95b.scope: Deactivated successfully.
Dec 13 04:05:38 compute-0 sudo[246966]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:05:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:05:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:39 compute-0 sudo[247153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:05:39 compute-0 sudo[247153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:05:39 compute-0 sudo[247153]: pam_unix(sudo:session): session closed for user root
Dec 13 04:05:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:39 compute-0 podman[247177]: 2025-12-13 04:05:39.273951917 +0000 UTC m=+0.082309233 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:05:39 compute-0 ceph-mon[75071]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:05:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:05:40
Dec 13 04:05:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:05:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:05:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', '.rgw.root', 'volumes', 'backups', 'default.rgw.log', 'vms']
Dec 13 04:05:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:05:40 compute-0 ceph-mon[75071]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:05:42 compute-0 ceph-mon[75071]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:05:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:05:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:44 compute-0 ceph-mon[75071]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:05:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/604093466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:05:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:05:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/604093466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:05:46 compute-0 ceph-mon[75071]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/604093466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:05:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/604093466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:05:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:48 compute-0 ceph-mon[75071]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:05:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 04:05:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:50 compute-0 ceph-mon[75071]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Dec 13 04:05:50 compute-0 podman[247204]: 2025-12-13 04:05:50.908858711 +0000 UTC m=+0.053144558 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:05:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:05:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:05:52 compute-0 ceph-mon[75071]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:53 compute-0 ceph-mon[75071]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:05:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:56 compute-0 ceph-mon[75071]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:58 compute-0 ceph-mon[75071]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:05:59 compute-0 podman[247223]: 2025-12-13 04:05:59.914159905 +0000 UTC m=+0.067601891 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:05:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:00 compute-0 ceph-mon[75071]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:06:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 04:06:02 compute-0 ceph-mon[75071]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 18 op/s
Dec 13 04:06:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:03 compute-0 nova_compute[243704]: 2025-12-13 04:06:03.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:03 compute-0 nova_compute[243704]: 2025-12-13 04:06:03.915 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:06:03 compute-0 nova_compute[243704]: 2025-12-13 04:06:03.916 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:06:03 compute-0 nova_compute[243704]: 2025-12-13 04:06:03.916 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:06:03 compute-0 nova_compute[243704]: 2025-12-13 04:06:03.916 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:06:03 compute-0 nova_compute[243704]: 2025-12-13 04:06:03.917 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:06:04 compute-0 ceph-mon[75071]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:06:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402876540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.370 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.524 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.526 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5152MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.526 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.526 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:06:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1402876540' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.596 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.597 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:06:05 compute-0 nova_compute[243704]: 2025-12-13 04:06:05.622 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:06:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:06:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3084089027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:06 compute-0 nova_compute[243704]: 2025-12-13 04:06:06.151 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:06:06 compute-0 nova_compute[243704]: 2025-12-13 04:06:06.158 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:06:06 compute-0 nova_compute[243704]: 2025-12-13 04:06:06.173 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:06:06 compute-0 nova_compute[243704]: 2025-12-13 04:06:06.174 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:06:06 compute-0 nova_compute[243704]: 2025-12-13 04:06:06.175 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:06:06 compute-0 ceph-mon[75071]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3084089027' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.174 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.175 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.200 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.200 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.200 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.217 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.218 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.218 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.218 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.219 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.219 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.219 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:06:07 compute-0 nova_compute[243704]: 2025-12-13 04:06:07.219 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:06:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:08 compute-0 ceph-mon[75071]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:10 compute-0 podman[247290]: 2025-12-13 04:06:10.009868589 +0000 UTC m=+0.158373834 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:06:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:10 compute-0 ceph-mon[75071]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:12 compute-0 ceph-mon[75071]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:13 compute-0 ceph-mon[75071]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:16 compute-0 ceph-mon[75071]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:18 compute-0 ceph-mon[75071]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:20 compute-0 ceph-mon[75071]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:21 compute-0 podman[247317]: 2025-12-13 04:06:21.90917328 +0000 UTC m=+0.052951750 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:06:22 compute-0 ceph-mon[75071]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:24 compute-0 ceph-mon[75071]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:26 compute-0 ceph-mon[75071]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:28 compute-0 ceph-mon[75071]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:30 compute-0 ceph-mon[75071]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:30 compute-0 podman[247336]: 2025-12-13 04:06:30.916049085 +0000 UTC m=+0.065405990 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:32 compute-0 ceph-mon[75071]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:34 compute-0 ceph-mon[75071]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:06:35.079 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:06:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:06:35.080 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:06:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:06:35.080 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:06:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:36 compute-0 ceph-mon[75071]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:38 compute-0 ceph-mon[75071]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:39 compute-0 sudo[247356]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:06:39 compute-0 sudo[247356]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:39 compute-0 sudo[247356]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:39 compute-0 sudo[247381]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:06:39 compute-0 sudo[247381]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:39 compute-0 sudo[247381]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 04:06:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:06:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:06:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:06:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:06:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:06:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:06:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:06:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:06:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:06:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:06:40 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:06:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:06:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:06:40 compute-0 sudo[247437]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:06:40 compute-0 sudo[247437]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:40 compute-0 sudo[247437]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:40 compute-0 sudo[247468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:06:40 compute-0 sudo[247468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:40 compute-0 podman[247461]: 2025-12-13 04:06:40.193268257 +0000 UTC m=+0.085404978 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:06:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:40 compute-0 ceph-mon[75071]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:06:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:06:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:06:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:06:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:06:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:06:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:06:40 compute-0 podman[247527]: 2025-12-13 04:06:40.467610493 +0000 UTC m=+0.035059960 container create 5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:06:40 compute-0 systemd[1]: Started libpod-conmon-5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850.scope.
Dec 13 04:06:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:06:40 compute-0 podman[247527]: 2025-12-13 04:06:40.451195924 +0000 UTC m=+0.018645391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:40 compute-0 podman[247527]: 2025-12-13 04:06:40.554679565 +0000 UTC m=+0.122129052 container init 5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:06:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:06:40
Dec 13 04:06:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:06:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:06:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'backups', 'volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data']
Dec 13 04:06:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:06:40 compute-0 podman[247527]: 2025-12-13 04:06:40.565375688 +0000 UTC m=+0.132825165 container start 5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:40 compute-0 podman[247527]: 2025-12-13 04:06:40.568408431 +0000 UTC m=+0.135857918 container attach 5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:06:40 compute-0 systemd[1]: libpod-5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850.scope: Deactivated successfully.
Dec 13 04:06:40 compute-0 brave_chatterjee[247543]: 167 167
Dec 13 04:06:40 compute-0 conmon[247543]: conmon 5e553df6549410d9c503 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850.scope/container/memory.events
Dec 13 04:06:40 compute-0 podman[247527]: 2025-12-13 04:06:40.570982102 +0000 UTC m=+0.138431569 container died 5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:06:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-671a13586552c9f2707eb76a51e691d983c02e68a1609fbfffb40892c0bca1d1-merged.mount: Deactivated successfully.
Dec 13 04:06:40 compute-0 podman[247527]: 2025-12-13 04:06:40.608557439 +0000 UTC m=+0.176006926 container remove 5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_chatterjee, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:06:40 compute-0 systemd[1]: libpod-conmon-5e553df6549410d9c5033b89897214da66160b29467c0e3c6748f14f30408850.scope: Deactivated successfully.
Dec 13 04:06:40 compute-0 podman[247566]: 2025-12-13 04:06:40.827391396 +0000 UTC m=+0.068443843 container create 9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:06:40 compute-0 systemd[1]: Started libpod-conmon-9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6.scope.
Dec 13 04:06:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:06:40 compute-0 podman[247566]: 2025-12-13 04:06:40.804099219 +0000 UTC m=+0.045151696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcef99a9f1b24829196278b6260ca30c538c7146f796db5c327c28ce209d615/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcef99a9f1b24829196278b6260ca30c538c7146f796db5c327c28ce209d615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcef99a9f1b24829196278b6260ca30c538c7146f796db5c327c28ce209d615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcef99a9f1b24829196278b6260ca30c538c7146f796db5c327c28ce209d615/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfcef99a9f1b24829196278b6260ca30c538c7146f796db5c327c28ce209d615/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:40 compute-0 podman[247566]: 2025-12-13 04:06:40.908745782 +0000 UTC m=+0.149798239 container init 9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:06:40 compute-0 podman[247566]: 2025-12-13 04:06:40.918208952 +0000 UTC m=+0.159261419 container start 9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:06:40 compute-0 podman[247566]: 2025-12-13 04:06:40.922408386 +0000 UTC m=+0.163460833 container attach 9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:06:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:41 compute-0 eager_germain[247583]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:06:41 compute-0 eager_germain[247583]: --> All data devices are unavailable
Dec 13 04:06:41 compute-0 systemd[1]: libpod-9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6.scope: Deactivated successfully.
Dec 13 04:06:41 compute-0 conmon[247583]: conmon 9e3ff2c63f22254a7afb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6.scope/container/memory.events
Dec 13 04:06:41 compute-0 podman[247566]: 2025-12-13 04:06:41.445363624 +0000 UTC m=+0.686416111 container died 9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:06:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfcef99a9f1b24829196278b6260ca30c538c7146f796db5c327c28ce209d615-merged.mount: Deactivated successfully.
Dec 13 04:06:41 compute-0 podman[247566]: 2025-12-13 04:06:41.491009973 +0000 UTC m=+0.732062460 container remove 9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_germain, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:06:41 compute-0 systemd[1]: libpod-conmon-9e3ff2c63f22254a7afba7674f98f6b4def457d485ff47d13ca9882cb27394a6.scope: Deactivated successfully.
Dec 13 04:06:41 compute-0 sudo[247468]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:41 compute-0 sudo[247615]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:06:41 compute-0 sudo[247615]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:41 compute-0 sudo[247615]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:41 compute-0 sudo[247640]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:06:41 compute-0 sudo[247640]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:41 compute-0 podman[247677]: 2025-12-13 04:06:41.953967719 +0000 UTC m=+0.045385893 container create e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:06:41 compute-0 systemd[1]: Started libpod-conmon-e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3.scope.
Dec 13 04:06:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:06:42 compute-0 podman[247677]: 2025-12-13 04:06:42.017435466 +0000 UTC m=+0.108853700 container init e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_keldysh, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:42 compute-0 podman[247677]: 2025-12-13 04:06:42.022475694 +0000 UTC m=+0.113893858 container start e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_keldysh, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:42 compute-0 podman[247677]: 2025-12-13 04:06:41.930945949 +0000 UTC m=+0.022364173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:42 compute-0 podman[247677]: 2025-12-13 04:06:42.026484913 +0000 UTC m=+0.117903097 container attach e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_keldysh, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:42 compute-0 affectionate_keldysh[247693]: 167 167
Dec 13 04:06:42 compute-0 systemd[1]: libpod-e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3.scope: Deactivated successfully.
Dec 13 04:06:42 compute-0 podman[247677]: 2025-12-13 04:06:42.027948284 +0000 UTC m=+0.119366448 container died e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_keldysh, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-84dc4042fc2b68d6c1ba2fad8d05a5acf7331575e2138e1a2f21416b28262cb3-merged.mount: Deactivated successfully.
Dec 13 04:06:42 compute-0 podman[247677]: 2025-12-13 04:06:42.063941769 +0000 UTC m=+0.155359933 container remove e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:06:42 compute-0 systemd[1]: libpod-conmon-e67eeddab879bf006a9a02d6bfe3f5c0ac1b82413d305298df3d710c36b762c3.scope: Deactivated successfully.
Dec 13 04:06:42 compute-0 podman[247718]: 2025-12-13 04:06:42.25883855 +0000 UTC m=+0.062858670 container create edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lederberg, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:06:42 compute-0 systemd[1]: Started libpod-conmon-edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43.scope.
Dec 13 04:06:42 compute-0 podman[247718]: 2025-12-13 04:06:42.23690285 +0000 UTC m=+0.040923020 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:06:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25322b7c1719a51a26092f944e1b7dbfbe57b51868465bef6155756ad168c6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25322b7c1719a51a26092f944e1b7dbfbe57b51868465bef6155756ad168c6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25322b7c1719a51a26092f944e1b7dbfbe57b51868465bef6155756ad168c6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b25322b7c1719a51a26092f944e1b7dbfbe57b51868465bef6155756ad168c6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:42 compute-0 podman[247718]: 2025-12-13 04:06:42.362377833 +0000 UTC m=+0.166397993 container init edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lederberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:42 compute-0 podman[247718]: 2025-12-13 04:06:42.372587263 +0000 UTC m=+0.176607363 container start edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lederberg, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:06:42 compute-0 podman[247718]: 2025-12-13 04:06:42.376252373 +0000 UTC m=+0.180272493 container attach edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lederberg, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:06:42 compute-0 ceph-mon[75071]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:06:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]: {
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:     "0": [
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:         {
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "devices": [
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "/dev/loop3"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             ],
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_name": "ceph_lv0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_size": "21470642176",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "name": "ceph_lv0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "tags": {
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cluster_name": "ceph",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.crush_device_class": "",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.encrypted": "0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.objectstore": "bluestore",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osd_id": "0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.type": "block",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.vdo": "0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.with_tpm": "0"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             },
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "type": "block",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "vg_name": "ceph_vg0"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:         }
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:     ],
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:     "1": [
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:         {
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "devices": [
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "/dev/loop4"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             ],
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_name": "ceph_lv1",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_size": "21470642176",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "name": "ceph_lv1",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "tags": {
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cluster_name": "ceph",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.crush_device_class": "",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.encrypted": "0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.objectstore": "bluestore",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osd_id": "1",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.type": "block",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.vdo": "0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.with_tpm": "0"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             },
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "type": "block",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "vg_name": "ceph_vg1"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:         }
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:     ],
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:     "2": [
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:         {
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "devices": [
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "/dev/loop5"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             ],
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_name": "ceph_lv2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_size": "21470642176",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "name": "ceph_lv2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "tags": {
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.cluster_name": "ceph",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.crush_device_class": "",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.encrypted": "0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.objectstore": "bluestore",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osd_id": "2",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.type": "block",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.vdo": "0",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:                 "ceph.with_tpm": "0"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             },
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "type": "block",
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:             "vg_name": "ceph_vg2"
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:         }
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]:     ]
Dec 13 04:06:42 compute-0 quizzical_lederberg[247735]: }
Dec 13 04:06:42 compute-0 systemd[1]: libpod-edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43.scope: Deactivated successfully.
Dec 13 04:06:42 compute-0 podman[247718]: 2025-12-13 04:06:42.722759754 +0000 UTC m=+0.526779874 container died edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b25322b7c1719a51a26092f944e1b7dbfbe57b51868465bef6155756ad168c6d-merged.mount: Deactivated successfully.
Dec 13 04:06:42 compute-0 podman[247718]: 2025-12-13 04:06:42.773982865 +0000 UTC m=+0.578002975 container remove edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_lederberg, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:06:42 compute-0 systemd[1]: libpod-conmon-edeb320f8462b57d0b3e18024cae12ebbe796dbf4e9839dc891aff9a15db1b43.scope: Deactivated successfully.
Dec 13 04:06:42 compute-0 sudo[247640]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:42 compute-0 sudo[247756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:06:42 compute-0 sudo[247756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:42 compute-0 sudo[247756]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:42 compute-0 sudo[247781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:06:42 compute-0 sudo[247781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:43 compute-0 podman[247819]: 2025-12-13 04:06:43.30266685 +0000 UTC m=+0.046195816 container create df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:43 compute-0 systemd[1]: Started libpod-conmon-df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365.scope.
Dec 13 04:06:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:06:43 compute-0 podman[247819]: 2025-12-13 04:06:43.28332781 +0000 UTC m=+0.026856906 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:43 compute-0 podman[247819]: 2025-12-13 04:06:43.387645094 +0000 UTC m=+0.131174050 container init df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lewin, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:43 compute-0 podman[247819]: 2025-12-13 04:06:43.394879343 +0000 UTC m=+0.138408289 container start df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:06:43 compute-0 podman[247819]: 2025-12-13 04:06:43.398337747 +0000 UTC m=+0.141866693 container attach df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:06:43 compute-0 loving_lewin[247836]: 167 167
Dec 13 04:06:43 compute-0 systemd[1]: libpod-df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365.scope: Deactivated successfully.
Dec 13 04:06:43 compute-0 podman[247819]: 2025-12-13 04:06:43.399362235 +0000 UTC m=+0.142891201 container died df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lewin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-8009b76502ee769c994ba3467458f2b23b4103217f0ff4de8b4a3effa8f1dffa-merged.mount: Deactivated successfully.
Dec 13 04:06:43 compute-0 podman[247819]: 2025-12-13 04:06:43.438344761 +0000 UTC m=+0.181873717 container remove df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:06:43 compute-0 systemd[1]: libpod-conmon-df49e6ff90af37693d6455d3bf276dab2fa4cc6c026b0035d4c7d1bd5fe4b365.scope: Deactivated successfully.
Dec 13 04:06:43 compute-0 podman[247858]: 2025-12-13 04:06:43.607430018 +0000 UTC m=+0.046479773 container create 9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_borg, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:06:43 compute-0 systemd[1]: Started libpod-conmon-9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153.scope.
Dec 13 04:06:43 compute-0 podman[247858]: 2025-12-13 04:06:43.588818659 +0000 UTC m=+0.027868444 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:06:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a43c9392478b0af84c485801878639415f544c5ed3dc2ebfcf82a45cc66a39a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a43c9392478b0af84c485801878639415f544c5ed3dc2ebfcf82a45cc66a39a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a43c9392478b0af84c485801878639415f544c5ed3dc2ebfcf82a45cc66a39a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a43c9392478b0af84c485801878639415f544c5ed3dc2ebfcf82a45cc66a39a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:06:43 compute-0 podman[247858]: 2025-12-13 04:06:43.701980064 +0000 UTC m=+0.141029849 container init 9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:06:43 compute-0 podman[247858]: 2025-12-13 04:06:43.708101633 +0000 UTC m=+0.147151388 container start 9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:06:43 compute-0 podman[247858]: 2025-12-13 04:06:43.71130285 +0000 UTC m=+0.150352655 container attach 9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:06:44 compute-0 lvm[247950]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:06:44 compute-0 lvm[247950]: VG ceph_vg0 finished
Dec 13 04:06:44 compute-0 lvm[247953]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:06:44 compute-0 lvm[247953]: VG ceph_vg1 finished
Dec 13 04:06:44 compute-0 ceph-mon[75071]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:44 compute-0 lvm[247955]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:06:44 compute-0 lvm[247955]: VG ceph_vg2 finished
Dec 13 04:06:44 compute-0 lvm[247956]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:06:44 compute-0 lvm[247956]: VG ceph_vg1 finished
Dec 13 04:06:44 compute-0 quizzical_borg[247874]: {}
Dec 13 04:06:44 compute-0 systemd[1]: libpod-9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153.scope: Deactivated successfully.
Dec 13 04:06:44 compute-0 systemd[1]: libpod-9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153.scope: Consumed 1.300s CPU time.
Dec 13 04:06:44 compute-0 podman[247858]: 2025-12-13 04:06:44.505239612 +0000 UTC m=+0.944289377 container died 9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_borg, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a43c9392478b0af84c485801878639415f544c5ed3dc2ebfcf82a45cc66a39a-merged.mount: Deactivated successfully.
Dec 13 04:06:44 compute-0 podman[247858]: 2025-12-13 04:06:44.552616658 +0000 UTC m=+0.991666413 container remove 9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:06:44 compute-0 systemd[1]: libpod-conmon-9be80065227d53e4a84c483126562ca091596cff9dadf7dece20e41eac523153.scope: Deactivated successfully.
Dec 13 04:06:44 compute-0 sudo[247781]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:06:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:06:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:06:44 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:06:44 compute-0 sudo[247971]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:06:44 compute-0 sudo[247971]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:06:44 compute-0 sudo[247971]: pam_unix(sudo:session): session closed for user root
Dec 13 04:06:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:06:45 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:06:45 compute-0 ceph-mon[75071]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:06:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3837994705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:06:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:06:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3837994705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:06:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3837994705' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:06:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3837994705' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:06:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:47 compute-0 ceph-mon[75071]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:50 compute-0 ceph-mon[75071]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4209325032409005e-06 of space, bias 4.0, pg target 0.0017051190038890806 quantized to 16 (current 16)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:06:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:06:52 compute-0 ceph-mon[75071]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:52 compute-0 podman[247996]: 2025-12-13 04:06:52.944470078 +0000 UTC m=+0.093996153 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:06:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:55 compute-0 ceph-mon[75071]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:06:56 compute-0 ceph-mon[75071]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:58 compute-0 ceph-mon[75071]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:06:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:00 compute-0 ceph-mon[75071]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:01 compute-0 podman[248017]: 2025-12-13 04:07:01.979259938 +0000 UTC m=+0.104602844 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:07:02 compute-0 ceph-mon[75071]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:02 compute-0 nova_compute[243704]: 2025-12-13 04:07:02.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:02 compute-0 nova_compute[243704]: 2025-12-13 04:07:02.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 04:07:02 compute-0 nova_compute[243704]: 2025-12-13 04:07:02.905 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 04:07:02 compute-0 nova_compute[243704]: 2025-12-13 04:07:02.908 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:02 compute-0 nova_compute[243704]: 2025-12-13 04:07:02.909 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 04:07:02 compute-0 nova_compute[243704]: 2025-12-13 04:07:02.941 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:04 compute-0 ceph-mon[75071]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:04 compute-0 nova_compute[243704]: 2025-12-13 04:07:04.960 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:04 compute-0 nova_compute[243704]: 2025-12-13 04:07:04.961 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:07:04 compute-0 nova_compute[243704]: 2025-12-13 04:07:04.961 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:07:04 compute-0 nova_compute[243704]: 2025-12-13 04:07:04.979 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:07:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:05 compute-0 nova_compute[243704]: 2025-12-13 04:07:05.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:05 compute-0 nova_compute[243704]: 2025-12-13 04:07:05.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:05 compute-0 nova_compute[243704]: 2025-12-13 04:07:05.906 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:07:05 compute-0 nova_compute[243704]: 2025-12-13 04:07:05.907 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:07:05 compute-0 nova_compute[243704]: 2025-12-13 04:07:05.907 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:07:05 compute-0 nova_compute[243704]: 2025-12-13 04:07:05.907 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:07:05 compute-0 nova_compute[243704]: 2025-12-13 04:07:05.907 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:07:06 compute-0 ceph-mon[75071]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:07:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4116407299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.456 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.612 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.613 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5142MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.614 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.614 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.784 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.784 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.874 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing inventories for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.953 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating ProviderTree inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.953 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.970 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing aggregate associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:07:06 compute-0 nova_compute[243704]: 2025-12-13 04:07:06.992 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing trait associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 04:07:07 compute-0 nova_compute[243704]: 2025-12-13 04:07:07.008 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:07:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4116407299' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:07:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/236574676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:07 compute-0 nova_compute[243704]: 2025-12-13 04:07:07.578 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:07:07 compute-0 nova_compute[243704]: 2025-12-13 04:07:07.585 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:07:07 compute-0 nova_compute[243704]: 2025-12-13 04:07:07.606 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:07:07 compute-0 nova_compute[243704]: 2025-12-13 04:07:07.608 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:07:07 compute-0 nova_compute[243704]: 2025-12-13 04:07:07.608 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.994s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:07:08 compute-0 ceph-mon[75071]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/236574676' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:07:08 compute-0 nova_compute[243704]: 2025-12-13 04:07:08.604 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:08 compute-0 nova_compute[243704]: 2025-12-13 04:07:08.604 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:08 compute-0 nova_compute[243704]: 2025-12-13 04:07:08.605 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:08 compute-0 nova_compute[243704]: 2025-12-13 04:07:08.605 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:08 compute-0 nova_compute[243704]: 2025-12-13 04:07:08.605 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:07:08 compute-0 nova_compute[243704]: 2025-12-13 04:07:08.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:08 compute-0 nova_compute[243704]: 2025-12-13 04:07:08.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:07:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:10 compute-0 ceph-mon[75071]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:10 compute-0 podman[248082]: 2025-12-13 04:07:10.95510387 +0000 UTC m=+0.103715998 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:07:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:12 compute-0 ceph-mon[75071]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:14 compute-0 ceph-mon[75071]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Dec 13 04:07:16 compute-0 ceph-mon[75071]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Dec 13 04:07:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Dec 13 04:07:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Dec 13 04:07:17 compute-0 ceph-mon[75071]: osdmap e126: 3 total, 3 up, 3 in
Dec 13 04:07:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Dec 13 04:07:17 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Dec 13 04:07:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Dec 13 04:07:18 compute-0 ceph-mon[75071]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:18 compute-0 ceph-mon[75071]: osdmap e127: 3 total, 3 up, 3 in
Dec 13 04:07:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Dec 13 04:07:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Dec 13 04:07:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 13 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 29 op/s
Dec 13 04:07:19 compute-0 ceph-mon[75071]: osdmap e128: 3 total, 3 up, 3 in
Dec 13 04:07:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:20 compute-0 ceph-mon[75071]: pgmap v827: 305 pgs: 305 active+clean; 13 MiB data, 144 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 29 op/s
Dec 13 04:07:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Dec 13 04:07:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Dec 13 04:07:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Dec 13 04:07:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 21 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 38 op/s
Dec 13 04:07:22 compute-0 ceph-mon[75071]: osdmap e129: 3 total, 3 up, 3 in
Dec 13 04:07:22 compute-0 ceph-mon[75071]: pgmap v829: 305 pgs: 305 active+clean; 21 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.3 MiB/s wr, 38 op/s
Dec 13 04:07:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 21 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.4 MiB/s wr, 30 op/s
Dec 13 04:07:23 compute-0 podman[248108]: 2025-12-13 04:07:23.933956166 +0000 UTC m=+0.082294903 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 04:07:24 compute-0 ceph-mon[75071]: pgmap v830: 305 pgs: 305 active+clean; 21 MiB data, 153 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.4 MiB/s wr, 30 op/s
Dec 13 04:07:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Dec 13 04:07:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e129 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Dec 13 04:07:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Dec 13 04:07:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Dec 13 04:07:26 compute-0 ceph-mon[75071]: pgmap v831: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 5.3 MiB/s wr, 49 op/s
Dec 13 04:07:26 compute-0 ceph-mon[75071]: osdmap e130: 3 total, 3 up, 3 in
Dec 13 04:07:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.6 MiB/s wr, 25 op/s
Dec 13 04:07:28 compute-0 ceph-mon[75071]: pgmap v833: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 3.6 MiB/s wr, 25 op/s
Dec 13 04:07:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 MiB/s wr, 23 op/s
Dec 13 04:07:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:30 compute-0 ceph-mon[75071]: pgmap v834: 305 pgs: 305 active+clean; 41 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 2.5 MiB/s wr, 23 op/s
Dec 13 04:07:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 13 04:07:32 compute-0 ceph-mon[75071]: pgmap v835: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 13 04:07:32 compute-0 podman[248127]: 2025-12-13 04:07:32.902565425 +0000 UTC m=+0.051603333 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 13 04:07:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 13 04:07:34 compute-0 ceph-mon[75071]: pgmap v836: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 2.0 MiB/s wr, 19 op/s
Dec 13 04:07:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:07:35.081 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:07:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:07:35.082 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:07:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:07:35.082 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:07:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:36 compute-0 ceph-mon[75071]: pgmap v837: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:38 compute-0 ceph-mon[75071]: pgmap v838: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:07:38.965 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:07:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:07:38.968 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:07:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:40 compute-0 ceph-mon[75071]: pgmap v839: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:07:40
Dec 13 04:07:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:07:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:07:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'images', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'vms', '.mgr']
Dec 13 04:07:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:07:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:41 compute-0 ceph-osd[87731]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000034s
Dec 13 04:07:41 compute-0 ceph-osd[86683]: bluestore.MempoolThread fragmentation_score=0.000125 took=0.000016s
Dec 13 04:07:41 compute-0 ceph-osd[85653]: bluestore.MempoolThread fragmentation_score=0.000116 took=0.000011s
Dec 13 04:07:41 compute-0 podman[248148]: 2025-12-13 04:07:41.926485258 +0000 UTC m=+0.075404144 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:07:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:07:42 compute-0 ceph-mon[75071]: pgmap v840: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:44 compute-0 ceph-mon[75071]: pgmap v841: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:44 compute-0 sudo[248174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:07:44 compute-0 sudo[248174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:44 compute-0 sudo[248174]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:44 compute-0 sudo[248199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 04:07:44 compute-0 sudo[248199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:45 compute-0 podman[248267]: 2025-12-13 04:07:45.28887354 +0000 UTC m=+0.058110661 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:07:45 compute-0 podman[248267]: 2025-12-13 04:07:45.393517032 +0000 UTC m=+0.162754123 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:07:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Dec 13 04:07:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:07:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1308792574' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:07:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:07:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1308792574' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:07:46 compute-0 sudo[248199]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:07:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:07:46 compute-0 ceph-mon[75071]: pgmap v842: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1308792574' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:07:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1308792574' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:07:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:07:46.971 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:07:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:47 compute-0 sudo[248456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:07:47 compute-0 sudo[248456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:47 compute-0 sudo[248456]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:47 compute-0 sudo[248481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:07:47 compute-0 sudo[248481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:47 compute-0 sudo[248481]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:07:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:07:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:07:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:07:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:07:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:07:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:47 compute-0 ceph-mon[75071]: pgmap v843: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:07:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:07:47 compute-0 sudo[248537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:07:47 compute-0 sudo[248537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:47 compute-0 sudo[248537]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:47 compute-0 sudo[248562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:07:47 compute-0 sudo[248562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:48 compute-0 podman[248599]: 2025-12-13 04:07:48.07982388 +0000 UTC m=+0.027697529 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:48 compute-0 podman[248599]: 2025-12-13 04:07:48.215838881 +0000 UTC m=+0.163712530 container create 7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:07:48 compute-0 systemd[1]: Started libpod-conmon-7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e.scope.
Dec 13 04:07:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:07:48 compute-0 podman[248599]: 2025-12-13 04:07:48.315858607 +0000 UTC m=+0.263732256 container init 7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_murdock, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:07:48 compute-0 podman[248599]: 2025-12-13 04:07:48.323873647 +0000 UTC m=+0.271747276 container start 7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_murdock, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:07:48 compute-0 podman[248599]: 2025-12-13 04:07:48.327078794 +0000 UTC m=+0.274952443 container attach 7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_murdock, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:07:48 compute-0 adoring_murdock[248615]: 167 167
Dec 13 04:07:48 compute-0 systemd[1]: libpod-7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e.scope: Deactivated successfully.
Dec 13 04:07:48 compute-0 podman[248620]: 2025-12-13 04:07:48.369832515 +0000 UTC m=+0.025951402 container died 7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_murdock, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-4211ace62b62f3e7cc9f873792f7eedfa0c76b01a041c40a4d13656131203310-merged.mount: Deactivated successfully.
Dec 13 04:07:48 compute-0 podman[248620]: 2025-12-13 04:07:48.40767621 +0000 UTC m=+0.063795047 container remove 7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:48 compute-0 systemd[1]: libpod-conmon-7806430404a49a0325db1bf2f7a02b4bbfed6227de8904ac7c410d6a7f07f55e.scope: Deactivated successfully.
Dec 13 04:07:48 compute-0 podman[248642]: 2025-12-13 04:07:48.59737783 +0000 UTC m=+0.043751058 container create 94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_grothendieck, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:07:48 compute-0 systemd[1]: Started libpod-conmon-94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77.scope.
Dec 13 04:07:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225e11edd67fca1336b6c9217c6785a56d370b31041d2592973da2d0deeae556/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225e11edd67fca1336b6c9217c6785a56d370b31041d2592973da2d0deeae556/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225e11edd67fca1336b6c9217c6785a56d370b31041d2592973da2d0deeae556/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225e11edd67fca1336b6c9217c6785a56d370b31041d2592973da2d0deeae556/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/225e11edd67fca1336b6c9217c6785a56d370b31041d2592973da2d0deeae556/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:48 compute-0 podman[248642]: 2025-12-13 04:07:48.581586128 +0000 UTC m=+0.027959386 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:48 compute-0 podman[248642]: 2025-12-13 04:07:48.684771241 +0000 UTC m=+0.131144549 container init 94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_grothendieck, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:48 compute-0 podman[248642]: 2025-12-13 04:07:48.693473189 +0000 UTC m=+0.139846437 container start 94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_grothendieck, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:07:48 compute-0 podman[248642]: 2025-12-13 04:07:48.69678929 +0000 UTC m=+0.143162538 container attach 94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_grothendieck, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:07:49 compute-0 nice_grothendieck[248658]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:07:49 compute-0 nice_grothendieck[248658]: --> All data devices are unavailable
Dec 13 04:07:49 compute-0 systemd[1]: libpod-94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77.scope: Deactivated successfully.
Dec 13 04:07:49 compute-0 podman[248642]: 2025-12-13 04:07:49.205085467 +0000 UTC m=+0.651458695 container died 94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:07:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-225e11edd67fca1336b6c9217c6785a56d370b31041d2592973da2d0deeae556-merged.mount: Deactivated successfully.
Dec 13 04:07:49 compute-0 podman[248642]: 2025-12-13 04:07:49.564142701 +0000 UTC m=+1.010515929 container remove 94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_grothendieck, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:07:49 compute-0 systemd[1]: libpod-conmon-94ac3e90924c30ef69897cb43e4568f2d0d567f9e9c8c685796a9149463cee77.scope: Deactivated successfully.
Dec 13 04:07:49 compute-0 sudo[248562]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:49 compute-0 sudo[248691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:07:49 compute-0 sudo[248691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:49 compute-0 sudo[248691]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:49 compute-0 sudo[248716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:07:49 compute-0 sudo[248716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:50 compute-0 podman[248752]: 2025-12-13 04:07:50.06376779 +0000 UTC m=+0.086927330 container create a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:07:50 compute-0 podman[248752]: 2025-12-13 04:07:50.008827747 +0000 UTC m=+0.031987367 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:50 compute-0 systemd[1]: Started libpod-conmon-a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe.scope.
Dec 13 04:07:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:07:50 compute-0 podman[248752]: 2025-12-13 04:07:50.288975592 +0000 UTC m=+0.312135232 container init a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:07:50 compute-0 podman[248752]: 2025-12-13 04:07:50.299299633 +0000 UTC m=+0.322459213 container start a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shtern, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:07:50 compute-0 jovial_shtern[248768]: 167 167
Dec 13 04:07:50 compute-0 podman[248752]: 2025-12-13 04:07:50.30535464 +0000 UTC m=+0.328514200 container attach a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shtern, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:07:50 compute-0 systemd[1]: libpod-a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe.scope: Deactivated successfully.
Dec 13 04:07:50 compute-0 podman[248752]: 2025-12-13 04:07:50.307630101 +0000 UTC m=+0.330789661 container died a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shtern, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d30b761b53ef9f6bb0ab1a188b066db3db5015bed637282320711554e09b4752-merged.mount: Deactivated successfully.
Dec 13 04:07:50 compute-0 podman[248752]: 2025-12-13 04:07:50.379719634 +0000 UTC m=+0.402879184 container remove a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:07:50 compute-0 systemd[1]: libpod-conmon-a952fb7b756c8f4a617e478b4d028ec4d94b1705786bd8bbc51cffeb13f7f2fe.scope: Deactivated successfully.
Dec 13 04:07:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Dec 13 04:07:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Dec 13 04:07:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Dec 13 04:07:50 compute-0 ceph-mon[75071]: pgmap v844: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:50 compute-0 podman[248792]: 2025-12-13 04:07:50.597485072 +0000 UTC m=+0.048069036 container create 0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 04:07:50 compute-0 systemd[1]: Started libpod-conmon-0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72.scope.
Dec 13 04:07:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34062d4d98653b64f987d073d35ab76889859eb2b083abed262af8cd7d3c016/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34062d4d98653b64f987d073d35ab76889859eb2b083abed262af8cd7d3c016/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:50 compute-0 podman[248792]: 2025-12-13 04:07:50.579208202 +0000 UTC m=+0.029792176 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34062d4d98653b64f987d073d35ab76889859eb2b083abed262af8cd7d3c016/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f34062d4d98653b64f987d073d35ab76889859eb2b083abed262af8cd7d3c016/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:50 compute-0 podman[248792]: 2025-12-13 04:07:50.691365 +0000 UTC m=+0.141948984 container init 0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:07:50 compute-0 podman[248792]: 2025-12-13 04:07:50.697363164 +0000 UTC m=+0.147947128 container start 0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:07:50 compute-0 podman[248792]: 2025-12-13 04:07:50.700340447 +0000 UTC m=+0.150924461 container attach 0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]: {
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:     "0": [
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:         {
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "devices": [
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "/dev/loop3"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             ],
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_name": "ceph_lv0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_size": "21470642176",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "name": "ceph_lv0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "tags": {
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cluster_name": "ceph",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.crush_device_class": "",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.encrypted": "0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.objectstore": "bluestore",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osd_id": "0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.type": "block",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.vdo": "0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.with_tpm": "0"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             },
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "type": "block",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "vg_name": "ceph_vg0"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:         }
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:     ],
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:     "1": [
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:         {
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "devices": [
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "/dev/loop4"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             ],
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_name": "ceph_lv1",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_size": "21470642176",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "name": "ceph_lv1",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "tags": {
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cluster_name": "ceph",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.crush_device_class": "",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.encrypted": "0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.objectstore": "bluestore",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osd_id": "1",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.type": "block",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.vdo": "0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.with_tpm": "0"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             },
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "type": "block",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "vg_name": "ceph_vg1"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:         }
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:     ],
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:     "2": [
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:         {
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "devices": [
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "/dev/loop5"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             ],
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_name": "ceph_lv2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_size": "21470642176",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "name": "ceph_lv2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "tags": {
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.cluster_name": "ceph",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.crush_device_class": "",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.encrypted": "0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.objectstore": "bluestore",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osd_id": "2",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.type": "block",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.vdo": "0",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:                 "ceph.with_tpm": "0"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             },
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "type": "block",
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:             "vg_name": "ceph_vg2"
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:         }
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]:     ]
Dec 13 04:07:50 compute-0 modest_chatterjee[248808]: }
Dec 13 04:07:51 compute-0 systemd[1]: libpod-0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72.scope: Deactivated successfully.
Dec 13 04:07:51 compute-0 podman[248792]: 2025-12-13 04:07:51.010947824 +0000 UTC m=+0.461531808 container died 0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-f34062d4d98653b64f987d073d35ab76889859eb2b083abed262af8cd7d3c016-merged.mount: Deactivated successfully.
Dec 13 04:07:51 compute-0 podman[248792]: 2025-12-13 04:07:51.071358397 +0000 UTC m=+0.521942401 container remove 0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:51 compute-0 systemd[1]: libpod-conmon-0ec661753b7afcb97f75c7ed2c3eacc52377d89af35924377c714c824ed74a72.scope: Deactivated successfully.
Dec 13 04:07:51 compute-0 sudo[248716]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:51 compute-0 sudo[248828]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:07:51 compute-0 sudo[248828]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:51 compute-0 sudo[248828]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:51 compute-0 sudo[248853]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:07:51 compute-0 sudo[248853]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Dec 13 04:07:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Dec 13 04:07:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Dec 13 04:07:51 compute-0 ceph-mon[75071]: osdmap e131: 3 total, 3 up, 3 in
Dec 13 04:07:51 compute-0 podman[248890]: 2025-12-13 04:07:51.584932768 +0000 UTC m=+0.045544627 container create 703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jackson, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:07:51 compute-0 systemd[1]: Started libpod-conmon-703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9.scope.
Dec 13 04:07:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:07:51 compute-0 podman[248890]: 2025-12-13 04:07:51.563857202 +0000 UTC m=+0.024469091 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:51 compute-0 podman[248890]: 2025-12-13 04:07:51.666660594 +0000 UTC m=+0.127272483 container init 703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:07:51 compute-0 podman[248890]: 2025-12-13 04:07:51.672852824 +0000 UTC m=+0.133464683 container start 703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jackson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:07:51 compute-0 podman[248890]: 2025-12-13 04:07:51.676143324 +0000 UTC m=+0.136755193 container attach 703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:07:51 compute-0 brave_jackson[248907]: 167 167
Dec 13 04:07:51 compute-0 systemd[1]: libpod-703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9.scope: Deactivated successfully.
Dec 13 04:07:51 compute-0 podman[248890]: 2025-12-13 04:07:51.678637092 +0000 UTC m=+0.139248951 container died 703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jackson, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad819fedced8b9393027fd4eda983b96f1b7cebc532077536c7bab47791a278d-merged.mount: Deactivated successfully.
Dec 13 04:07:51 compute-0 podman[248890]: 2025-12-13 04:07:51.719445858 +0000 UTC m=+0.180057727 container remove 703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:07:51 compute-0 systemd[1]: libpod-conmon-703bff840b0f1cbd5232bca8343a71cd28f1f1a7c9f959684e7022630e0346b9.scope: Deactivated successfully.
Dec 13 04:07:51 compute-0 podman[248933]: 2025-12-13 04:07:51.866983735 +0000 UTC m=+0.039331057 container create 6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_maxwell, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:07:51 compute-0 systemd[1]: Started libpod-conmon-6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977.scope.
Dec 13 04:07:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e429f9b5d635bd885e740d0bed0f4fc23a3a3bb169d2a2f58969ceafaca184/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e429f9b5d635bd885e740d0bed0f4fc23a3a3bb169d2a2f58969ceafaca184/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e429f9b5d635bd885e740d0bed0f4fc23a3a3bb169d2a2f58969ceafaca184/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e0e429f9b5d635bd885e740d0bed0f4fc23a3a3bb169d2a2f58969ceafaca184/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:07:51 compute-0 podman[248933]: 2025-12-13 04:07:51.848257472 +0000 UTC m=+0.020604834 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:07:51 compute-0 podman[248933]: 2025-12-13 04:07:51.948431823 +0000 UTC m=+0.120779165 container init 6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_maxwell, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:07:51 compute-0 podman[248933]: 2025-12-13 04:07:51.95633955 +0000 UTC m=+0.128686872 container start 6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_maxwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:07:51 compute-0 podman[248933]: 2025-12-13 04:07:51.960186255 +0000 UTC m=+0.132533577 container attach 6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_maxwell, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659195014350992 of space, bias 1.0, pg target 0.19977585043052976 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4198767981305986e-06 of space, bias 4.0, pg target 0.0017038521577567183 quantized to 16 (current 16)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:07:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:07:52 compute-0 ceph-mon[75071]: pgmap v846: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:52 compute-0 ceph-mon[75071]: osdmap e132: 3 total, 3 up, 3 in
Dec 13 04:07:52 compute-0 lvm[249027]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:07:52 compute-0 lvm[249028]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:07:52 compute-0 lvm[249028]: VG ceph_vg1 finished
Dec 13 04:07:52 compute-0 lvm[249027]: VG ceph_vg0 finished
Dec 13 04:07:52 compute-0 lvm[249030]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:07:52 compute-0 lvm[249030]: VG ceph_vg2 finished
Dec 13 04:07:52 compute-0 bold_maxwell[248949]: {}
Dec 13 04:07:52 compute-0 systemd[1]: libpod-6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977.scope: Deactivated successfully.
Dec 13 04:07:52 compute-0 systemd[1]: libpod-6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977.scope: Consumed 1.295s CPU time.
Dec 13 04:07:52 compute-0 podman[249033]: 2025-12-13 04:07:52.785789003 +0000 UTC m=+0.026186677 container died 6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0e429f9b5d635bd885e740d0bed0f4fc23a3a3bb169d2a2f58969ceafaca184-merged.mount: Deactivated successfully.
Dec 13 04:07:52 compute-0 podman[249033]: 2025-12-13 04:07:52.826030094 +0000 UTC m=+0.066427678 container remove 6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:07:52 compute-0 systemd[1]: libpod-conmon-6987f48f42dbb0d406816df76677f2f64e2573f9e73e7a8aa541a50ab5f92977.scope: Deactivated successfully.
Dec 13 04:07:52 compute-0 sudo[248853]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:07:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:07:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:52 compute-0 sudo[249048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:07:52 compute-0 sudo[249048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:07:52 compute-0 sudo[249048]: pam_unix(sudo:session): session closed for user root
Dec 13 04:07:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:07:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1365193348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:07:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:07:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1365193348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:07:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:07:53 compute-0 ceph-mon[75071]: pgmap v848: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:07:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1365193348' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:07:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1365193348' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:07:54 compute-0 podman[249073]: 2025-12-13 04:07:54.90781012 +0000 UTC m=+0.054564804 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:07:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:07:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3352081708' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:07:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:07:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3352081708' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:07:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3352081708' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:07:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3352081708' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:07:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Dec 13 04:07:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:07:56 compute-0 ceph-mon[75071]: pgmap v849: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Dec 13 04:07:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Dec 13 04:07:58 compute-0 ceph-mon[75071]: pgmap v850: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.1 KiB/s wr, 21 op/s
Dec 13 04:07:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Dec 13 04:08:00 compute-0 ceph-mon[75071]: pgmap v851: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Dec 13 04:08:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e132 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Dec 13 04:08:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Dec 13 04:08:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Dec 13 04:08:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 18 op/s
Dec 13 04:08:01 compute-0 ceph-mon[75071]: osdmap e133: 3 total, 3 up, 3 in
Dec 13 04:08:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/226870634' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/226870634' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:02 compute-0 ceph-mon[75071]: pgmap v853: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 18 op/s
Dec 13 04:08:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/226870634' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/226870634' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 18 op/s
Dec 13 04:08:03 compute-0 podman[249093]: 2025-12-13 04:08:03.913898373 +0000 UTC m=+0.057897444 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:08:04 compute-0 ceph-mon[75071]: pgmap v854: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 13 KiB/s rd, 1.3 KiB/s wr, 18 op/s
Dec 13 04:08:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Dec 13 04:08:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:05 compute-0 nova_compute[243704]: 2025-12-13 04:08:05.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:05 compute-0 nova_compute[243704]: 2025-12-13 04:08:05.909 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:05 compute-0 nova_compute[243704]: 2025-12-13 04:08:05.910 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:05 compute-0 nova_compute[243704]: 2025-12-13 04:08:05.910 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:05 compute-0 nova_compute[243704]: 2025-12-13 04:08:05.910 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:08:05 compute-0 nova_compute[243704]: 2025-12-13 04:08:05.911 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3515896696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3515896696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:08:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/260371557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.480 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.627 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.628 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5121MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.628 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.628 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:06 compute-0 ceph-mon[75071]: pgmap v855: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Dec 13 04:08:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3515896696' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3515896696' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/260371557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.705 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.706 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:08:06 compute-0 nova_compute[243704]: 2025-12-13 04:08:06.731 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:08:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/778399197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:07 compute-0 nova_compute[243704]: 2025-12-13 04:08:07.270 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:07 compute-0 nova_compute[243704]: 2025-12-13 04:08:07.275 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:08:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Dec 13 04:08:07 compute-0 nova_compute[243704]: 2025-12-13 04:08:07.295 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:08:07 compute-0 nova_compute[243704]: 2025-12-13 04:08:07.297 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:08:07 compute-0 nova_compute[243704]: 2025-12-13 04:08:07.297 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/778399197' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:07 compute-0 ceph-mon[75071]: pgmap v856: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 KiB/s wr, 35 op/s
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.298 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.298 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.298 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.313 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.314 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.890 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.890 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.891 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.891 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:08 compute-0 nova_compute[243704]: 2025-12-13 04:08:08.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:08:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Dec 13 04:08:09 compute-0 nova_compute[243704]: 2025-12-13 04:08:09.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:08:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:10 compute-0 ceph-mon[75071]: pgmap v857: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.9 KiB/s wr, 37 op/s
Dec 13 04:08:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Dec 13 04:08:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:12 compute-0 ceph-mon[75071]: pgmap v858: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 1.6 KiB/s wr, 46 op/s
Dec 13 04:08:12 compute-0 podman[249157]: 2025-12-13 04:08:12.944835032 +0000 UTC m=+0.094047641 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller)
Dec 13 04:08:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 13 04:08:14 compute-0 ceph-mon[75071]: pgmap v859: 305 pgs: 305 active+clean; 41 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 42 op/s
Dec 13 04:08:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 47 op/s
Dec 13 04:08:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Dec 13 04:08:16 compute-0 ceph-mon[75071]: pgmap v860: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 23 KiB/s wr, 47 op/s
Dec 13 04:08:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Dec 13 04:08:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Dec 13 04:08:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 27 KiB/s wr, 22 op/s
Dec 13 04:08:17 compute-0 ceph-mon[75071]: osdmap e134: 3 total, 3 up, 3 in
Dec 13 04:08:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Dec 13 04:08:18 compute-0 ceph-mon[75071]: pgmap v862: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 27 KiB/s wr, 22 op/s
Dec 13 04:08:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Dec 13 04:08:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Dec 13 04:08:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 33 KiB/s wr, 11 op/s
Dec 13 04:08:19 compute-0 ceph-mon[75071]: osdmap e135: 3 total, 3 up, 3 in
Dec 13 04:08:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Dec 13 04:08:20 compute-0 ceph-mon[75071]: pgmap v864: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 33 KiB/s wr, 11 op/s
Dec 13 04:08:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Dec 13 04:08:20 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Dec 13 04:08:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Dec 13 04:08:21 compute-0 ceph-mon[75071]: osdmap e136: 3 total, 3 up, 3 in
Dec 13 04:08:21 compute-0 ceph-mon[75071]: pgmap v866: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.248 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.249 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.273 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.418 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.418 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.428 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.429 243708 INFO nova.compute.claims [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:08:22 compute-0 nova_compute[243704]: 2025-12-13 04:08:22.536 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Dec 13 04:08:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Dec 13 04:08:22 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Dec 13 04:08:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:08:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317056019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.088 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.096 243708 DEBUG nova.compute.provider_tree [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.108 243708 DEBUG nova.scheduler.client.report [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.129 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.130 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.175 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.176 243708 DEBUG nova.network.neutron [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.196 243708 INFO nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.210 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:08:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.293 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.294 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.295 243708 INFO nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Creating image(s)
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.329 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.349 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.367 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.370 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:23 compute-0 nova_compute[243704]: 2025-12-13 04:08:23.371 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:23 compute-0 ceph-mon[75071]: osdmap e137: 3 total, 3 up, 3 in
Dec 13 04:08:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2317056019' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:08:23 compute-0 ceph-mon[75071]: pgmap v868: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Dec 13 04:08:24 compute-0 nova_compute[243704]: 2025-12-13 04:08:24.185 243708 WARNING oslo_policy.policy [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 13 04:08:24 compute-0 nova_compute[243704]: 2025-12-13 04:08:24.186 243708 WARNING oslo_policy.policy [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Dec 13 04:08:24 compute-0 nova_compute[243704]: 2025-12-13 04:08:24.188 243708 DEBUG nova.policy [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '41c24c5943904540a40a3dfbcc716adb', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '96989182ef434b49aedf94176f4ddd6f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:08:24 compute-0 nova_compute[243704]: 2025-12-13 04:08:24.393 243708 DEBUG nova.virt.libvirt.imagebackend [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Image locations are: [{'url': 'rbd://437a9f04-06b7-56e3-8a4b-f52a1199dd32/images/36cf6469-9e96-4186-bf30-37c785f25db6/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://437a9f04-06b7-56e3-8a4b-f52a1199dd32/images/36cf6469-9e96-4186-bf30-37c785f25db6/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 04:08:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Dec 13 04:08:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Dec 13 04:08:24 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Dec 13 04:08:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 4.3 KiB/s wr, 80 op/s
Dec 13 04:08:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.618 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.676 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.part --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.677 243708 DEBUG nova.virt.images [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] 36cf6469-9e96-4186-bf30-37c785f25db6 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.679 243708 DEBUG nova.privsep.utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.679 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.part /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.787 243708 DEBUG nova.network.neutron [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Successfully created port: 1cc0804c-1371-48e5-a964-354f99f7eace _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:08:25 compute-0 ceph-mon[75071]: osdmap e138: 3 total, 3 up, 3 in
Dec 13 04:08:25 compute-0 ceph-mon[75071]: pgmap v870: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 4.3 KiB/s wr, 80 op/s
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.893 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.part /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.converted" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.897 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:25 compute-0 podman[249269]: 2025-12-13 04:08:25.939225488 +0000 UTC m=+0.089068056 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.951 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0.converted --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.952 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.581s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.970 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:08:25 compute-0 nova_compute[243704]: 2025-12-13 04:08:25.973 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 143ff7a5-b045-4330-945a-cab9a1074156_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Dec 13 04:08:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Dec 13 04:08:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Dec 13 04:08:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Dec 13 04:08:27 compute-0 nova_compute[243704]: 2025-12-13 04:08:27.815 243708 DEBUG nova.network.neutron [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Successfully updated port: 1cc0804c-1371-48e5-a964-354f99f7eace _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:08:27 compute-0 nova_compute[243704]: 2025-12-13 04:08:27.935 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:08:27 compute-0 nova_compute[243704]: 2025-12-13 04:08:27.935 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquired lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:08:27 compute-0 nova_compute[243704]: 2025-12-13 04:08:27.935 243708 DEBUG nova.network.neutron [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:08:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Dec 13 04:08:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Dec 13 04:08:27 compute-0 ceph-mon[75071]: osdmap e139: 3 total, 3 up, 3 in
Dec 13 04:08:27 compute-0 ceph-mon[75071]: pgmap v872: 305 pgs: 305 active+clean; 42 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.3 KiB/s wr, 56 op/s
Dec 13 04:08:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Dec 13 04:08:28 compute-0 nova_compute[243704]: 2025-12-13 04:08:28.133 243708 DEBUG nova.network.neutron [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:08:28 compute-0 nova_compute[243704]: 2025-12-13 04:08:28.364 243708 DEBUG nova.compute.manager [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-changed-1cc0804c-1371-48e5-a964-354f99f7eace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:08:28 compute-0 nova_compute[243704]: 2025-12-13 04:08:28.364 243708 DEBUG nova.compute.manager [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Refreshing instance network info cache due to event network-changed-1cc0804c-1371-48e5-a964-354f99f7eace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:08:28 compute-0 nova_compute[243704]: 2025-12-13 04:08:28.365 243708 DEBUG oslo_concurrency.lockutils [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:08:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:08:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1301022914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:29 compute-0 ceph-mon[75071]: osdmap e140: 3 total, 3 up, 3 in
Dec 13 04:08:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1301022914' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.172 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 143ff7a5-b045-4330-945a-cab9a1074156_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.231 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] resizing rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:08:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 55 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 738 KiB/s wr, 95 op/s
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.825 243708 DEBUG nova.objects.instance [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lazy-loading 'migration_context' on Instance uuid 143ff7a5-b045-4330-945a-cab9a1074156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.841 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.841 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Ensure instance console log exists: /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.842 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.842 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.842 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:29 compute-0 nova_compute[243704]: 2025-12-13 04:08:29.946 243708 DEBUG nova.network.neutron [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updating instance_info_cache with network_info: [{"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.033 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Releasing lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.034 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Instance network_info: |[{"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.035 243708 DEBUG oslo_concurrency.lockutils [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.036 243708 DEBUG nova.network.neutron [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Refreshing network info cache for port 1cc0804c-1371-48e5-a964-354f99f7eace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.042 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Start _get_guest_xml network_info=[{"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.049 243708 WARNING nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.058 243708 DEBUG nova.virt.libvirt.host [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.059 243708 DEBUG nova.virt.libvirt.host [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.063 243708 DEBUG nova.virt.libvirt.host [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.063 243708 DEBUG nova.virt.libvirt.host [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.064 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.065 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.065 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.066 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.066 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.066 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.066 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.067 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.067 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.067 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.068 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.068 243708 DEBUG nova.virt.hardware [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.073 243708 DEBUG nova.privsep.utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.074 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Dec 13 04:08:30 compute-0 ceph-mon[75071]: pgmap v874: 305 pgs: 305 active+clean; 55 MiB data, 178 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 738 KiB/s wr, 95 op/s
Dec 13 04:08:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Dec 13 04:08:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Dec 13 04:08:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Dec 13 04:08:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Dec 13 04:08:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Dec 13 04:08:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:08:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1910670500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.616 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.940 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:08:30 compute-0 nova_compute[243704]: 2025-12-13 04:08:30.943 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:31 compute-0 ceph-mon[75071]: osdmap e141: 3 total, 3 up, 3 in
Dec 13 04:08:31 compute-0 ceph-mon[75071]: osdmap e142: 3 total, 3 up, 3 in
Dec 13 04:08:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1910670500' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 69 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Dec 13 04:08:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:08:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1979143493' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.451 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.454 243708 DEBUG nova.virt.libvirt.vif [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:08:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1936735685',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1936735685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1936735685',id=1,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHsVPv3Q3QmSERXw6wBHyojoi58ygBNRPbUX5Jiszo88WR5vuVwf9fb/eGYHEl8SzOTu3kq+/kG9FHsKCXW7n7qmr52loi4dv4wJ7B4jcrZfDznFoQokZ4oC87/EJuL0wQ==',key_name='tempest-keypair-1094499998',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96989182ef434b49aedf94176f4ddd6f',ramdisk_id='',reservation_id='r-zbfnpxg1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-709780275',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-709780275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:08:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='41c24c5943904540a40a3dfbcc716adb',uuid=143ff7a5-b045-4330-945a-cab9a1074156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.455 243708 DEBUG nova.network.os_vif_util [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Converting VIF {"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.457 243708 DEBUG nova.network.os_vif_util [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:f9:75,bridge_name='br-int',has_traffic_filtering=True,id=1cc0804c-1371-48e5-a964-354f99f7eace,network=Network(d0ec29d2-698f-48c0-8337-ad9b2cdc9d73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cc0804c-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.462 243708 DEBUG nova.objects.instance [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lazy-loading 'pci_devices' on Instance uuid 143ff7a5-b045-4330-945a-cab9a1074156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.479 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <uuid>143ff7a5-b045-4330-945a-cab9a1074156</uuid>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <name>instance-00000001</name>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <nova:name>tempest-EncryptedVolumesExtendAttachedTest-instance-1936735685</nova:name>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:08:30</nova:creationTime>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:user uuid="41c24c5943904540a40a3dfbcc716adb">tempest-EncryptedVolumesExtendAttachedTest-709780275-project-member</nova:user>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:project uuid="96989182ef434b49aedf94176f4ddd6f">tempest-EncryptedVolumesExtendAttachedTest-709780275</nova:project>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <nova:port uuid="1cc0804c-1371-48e5-a964-354f99f7eace">
Dec 13 04:08:31 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <system>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <entry name="serial">143ff7a5-b045-4330-945a-cab9a1074156</entry>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <entry name="uuid">143ff7a5-b045-4330-945a-cab9a1074156</entry>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </system>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <os>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   </os>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <features>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   </features>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/143ff7a5-b045-4330-945a-cab9a1074156_disk">
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       </source>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/143ff7a5-b045-4330-945a-cab9a1074156_disk.config">
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       </source>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:08:31 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:32:f9:75"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <target dev="tap1cc0804c-13"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/console.log" append="off"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <video>
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </video>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:08:31 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:08:31 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:08:31 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:08:31 compute-0 nova_compute[243704]: </domain>
Dec 13 04:08:31 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.481 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Preparing to wait for external event network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.482 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.482 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.482 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.483 243708 DEBUG nova.virt.libvirt.vif [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:08:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1936735685',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1936735685',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1936735685',id=1,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHsVPv3Q3QmSERXw6wBHyojoi58ygBNRPbUX5Jiszo88WR5vuVwf9fb/eGYHEl8SzOTu3kq+/kG9FHsKCXW7n7qmr52loi4dv4wJ7B4jcrZfDznFoQokZ4oC87/EJuL0wQ==',key_name='tempest-keypair-1094499998',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='96989182ef434b49aedf94176f4ddd6f',ramdisk_id='',reservation_id='r-zbfnpxg1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-709780275',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-709780275-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:08:23Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='41c24c5943904540a40a3dfbcc716adb',uuid=143ff7a5-b045-4330-945a-cab9a1074156,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.484 243708 DEBUG nova.network.os_vif_util [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Converting VIF {"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.484 243708 DEBUG nova.network.os_vif_util [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:f9:75,bridge_name='br-int',has_traffic_filtering=True,id=1cc0804c-1371-48e5-a964-354f99f7eace,network=Network(d0ec29d2-698f-48c0-8337-ad9b2cdc9d73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cc0804c-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.485 243708 DEBUG os_vif [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:f9:75,bridge_name='br-int',has_traffic_filtering=True,id=1cc0804c-1371-48e5-a964-354f99f7eace,network=Network(d0ec29d2-698f-48c0-8337-ad9b2cdc9d73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cc0804c-13') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.528 243708 DEBUG ovsdbapp.backend.ovs_idl [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.528 243708 DEBUG ovsdbapp.backend.ovs_idl [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.529 243708 DEBUG ovsdbapp.backend.ovs_idl [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.529 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.530 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [POLLOUT] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.530 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.531 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.533 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.535 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.543 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.544 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.544 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:08:31 compute-0 nova_compute[243704]: 2025-12-13 04:08:31.545 243708 INFO oslo.privsep.daemon [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpwidlswvp/privsep.sock']
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.216 243708 INFO oslo.privsep.daemon [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Spawned new privsep daemon via rootwrap
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.091 249470 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.094 249470 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.096 249470 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.096 249470 INFO oslo.privsep.daemon [-] privsep daemon running as pid 249470
Dec 13 04:08:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Dec 13 04:08:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Dec 13 04:08:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Dec 13 04:08:32 compute-0 ceph-mon[75071]: pgmap v877: 305 pgs: 305 active+clean; 69 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Dec 13 04:08:32 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1979143493' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.560 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.562 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1cc0804c-13, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.563 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1cc0804c-13, col_values=(('external_ids', {'iface-id': '1cc0804c-1371-48e5-a964-354f99f7eace', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:f9:75', 'vm-uuid': '143ff7a5-b045-4330-945a-cab9a1074156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.564 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:32 compute-0 NetworkManager[48899]: <info>  [1765598912.5666] manager: (tap1cc0804c-13): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.568 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.572 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.573 243708 INFO os_vif [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:f9:75,bridge_name='br-int',has_traffic_filtering=True,id=1cc0804c-1371-48e5-a964-354f99f7eace,network=Network(d0ec29d2-698f-48c0-8337-ad9b2cdc9d73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cc0804c-13')
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.963 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.963 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.964 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] No VIF found with MAC fa:16:3e:32:f9:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.965 243708 INFO nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Using config drive
Dec 13 04:08:32 compute-0 nova_compute[243704]: 2025-12-13 04:08:32.994 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:08:33 compute-0 nova_compute[243704]: 2025-12-13 04:08:33.000 243708 DEBUG nova.network.neutron [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updated VIF entry in instance network info cache for port 1cc0804c-1371-48e5-a964-354f99f7eace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:08:33 compute-0 nova_compute[243704]: 2025-12-13 04:08:33.000 243708 DEBUG nova.network.neutron [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updating instance_info_cache with network_info: [{"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:08:33 compute-0 nova_compute[243704]: 2025-12-13 04:08:33.023 243708 DEBUG oslo_concurrency.lockutils [req-d1ad9bb3-359b-40ab-87cb-3c5cf226bab4 req-8e6f1f48-1073-4635-900e-6430e6de250a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:08:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 69 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 104 op/s
Dec 13 04:08:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Dec 13 04:08:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Dec 13 04:08:33 compute-0 ceph-mon[75071]: osdmap e143: 3 total, 3 up, 3 in
Dec 13 04:08:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Dec 13 04:08:34 compute-0 nova_compute[243704]: 2025-12-13 04:08:34.645 243708 INFO nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Creating config drive at /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/disk.config
Dec 13 04:08:34 compute-0 nova_compute[243704]: 2025-12-13 04:08:34.651 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyhxvu9b6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:34 compute-0 nova_compute[243704]: 2025-12-13 04:08:34.785 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyhxvu9b6" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:34 compute-0 nova_compute[243704]: 2025-12-13 04:08:34.815 243708 DEBUG nova.storage.rbd_utils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] rbd image 143ff7a5-b045-4330-945a-cab9a1074156_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:08:34 compute-0 nova_compute[243704]: 2025-12-13 04:08:34.820 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/disk.config 143ff7a5-b045-4330-945a-cab9a1074156_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:08:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Dec 13 04:08:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Dec 13 04:08:34 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Dec 13 04:08:34 compute-0 ceph-mon[75071]: pgmap v879: 305 pgs: 305 active+clean; 69 MiB data, 186 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 104 op/s
Dec 13 04:08:34 compute-0 ceph-mon[75071]: osdmap e144: 3 total, 3 up, 3 in
Dec 13 04:08:34 compute-0 podman[249516]: 2025-12-13 04:08:34.955325141 +0000 UTC m=+0.088115079 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:08:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:35.082 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:35.083 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:35.083 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:35 compute-0 nova_compute[243704]: 2025-12-13 04:08:35.205 243708 DEBUG oslo_concurrency.processutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/disk.config 143ff7a5-b045-4330-945a-cab9a1074156_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.385s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:08:35 compute-0 nova_compute[243704]: 2025-12-13 04:08:35.206 243708 INFO nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Deleting local config drive /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156/disk.config because it was imported into RBD.
Dec 13 04:08:35 compute-0 systemd[1]: Starting libvirt secret daemon...
Dec 13 04:08:35 compute-0 systemd[1]: Started libvirt secret daemon.
Dec 13 04:08:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.9 MiB/s wr, 49 op/s
Dec 13 04:08:35 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec 13 04:08:35 compute-0 kernel: tap1cc0804c-13: entered promiscuous mode
Dec 13 04:08:35 compute-0 NetworkManager[48899]: <info>  [1765598915.3395] manager: (tap1cc0804c-13): new Tun device (/org/freedesktop/NetworkManager/Devices/22)
Dec 13 04:08:35 compute-0 ovn_controller[145204]: 2025-12-13T04:08:35Z|00027|binding|INFO|Claiming lport 1cc0804c-1371-48e5-a964-354f99f7eace for this chassis.
Dec 13 04:08:35 compute-0 ovn_controller[145204]: 2025-12-13T04:08:35Z|00028|binding|INFO|1cc0804c-1371-48e5-a964-354f99f7eace: Claiming fa:16:3e:32:f9:75 10.100.0.6
Dec 13 04:08:35 compute-0 nova_compute[243704]: 2025-12-13 04:08:35.341 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:35 compute-0 nova_compute[243704]: 2025-12-13 04:08:35.343 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:35 compute-0 systemd-udevd[249589]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:08:35 compute-0 NetworkManager[48899]: <info>  [1765598915.3904] device (tap1cc0804c-13): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:08:35 compute-0 NetworkManager[48899]: <info>  [1765598915.3911] device (tap1cc0804c-13): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:08:35 compute-0 systemd-machined[206767]: New machine qemu-1-instance-00000001.
Dec 13 04:08:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:35.395 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:f9:75 10.100.0.6'], port_security=['fa:16:3e:32:f9:75 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '143ff7a5-b045-4330-945a-cab9a1074156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96989182ef434b49aedf94176f4ddd6f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '29f547ce-3dec-403d-aee0-387394e47410', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4365dfe4-bb45-4bfa-b597-162b137c7810, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=1cc0804c-1371-48e5-a964-354f99f7eace) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:08:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:35.397 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 1cc0804c-1371-48e5-a964-354f99f7eace in datapath d0ec29d2-698f-48c0-8337-ad9b2cdc9d73 bound to our chassis
Dec 13 04:08:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:35.399 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0ec29d2-698f-48c0-8337-ad9b2cdc9d73
Dec 13 04:08:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:35.401 154842 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpdbejjf6c/privsep.sock']
Dec 13 04:08:35 compute-0 ovn_controller[145204]: 2025-12-13T04:08:35Z|00029|binding|INFO|Setting lport 1cc0804c-1371-48e5-a964-354f99f7eace ovn-installed in OVS
Dec 13 04:08:35 compute-0 ovn_controller[145204]: 2025-12-13T04:08:35Z|00030|binding|INFO|Setting lport 1cc0804c-1371-48e5-a964-354f99f7eace up in Southbound
Dec 13 04:08:35 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec 13 04:08:35 compute-0 nova_compute[243704]: 2025-12-13 04:08:35.426 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Dec 13 04:08:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Dec 13 04:08:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Dec 13 04:08:35 compute-0 ceph-mon[75071]: osdmap e145: 3 total, 3 up, 3 in
Dec 13 04:08:35 compute-0 ceph-mon[75071]: pgmap v882: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.9 MiB/s wr, 49 op/s
Dec 13 04:08:35 compute-0 ceph-mon[75071]: osdmap e146: 3 total, 3 up, 3 in
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.145 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765598916.1451714, 143ff7a5-b045-4330-945a-cab9a1074156 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.146 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] VM Started (Lifecycle Event)
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.177 154842 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.178 154842 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdbejjf6c/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.030 249645 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.035 249645 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.037 249645 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.038 249645 INFO oslo.privsep.daemon [-] privsep daemon running as pid 249645
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.181 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9f5bb296-9f67-4bfe-bdcb-958821b14228]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.186 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.190 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765598916.1460907, 143ff7a5-b045-4330-945a-cab9a1074156 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.190 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] VM Paused (Lifecycle Event)
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.202 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.207 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.227 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.428 243708 DEBUG nova.compute.manager [req-2e3da192-c563-413a-aca2-2a94c424a8f9 req-a4712533-b20d-4ca6-9661-9eef8e722f85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.429 243708 DEBUG oslo_concurrency.lockutils [req-2e3da192-c563-413a-aca2-2a94c424a8f9 req-a4712533-b20d-4ca6-9661-9eef8e722f85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.430 243708 DEBUG oslo_concurrency.lockutils [req-2e3da192-c563-413a-aca2-2a94c424a8f9 req-a4712533-b20d-4ca6-9661-9eef8e722f85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.430 243708 DEBUG oslo_concurrency.lockutils [req-2e3da192-c563-413a-aca2-2a94c424a8f9 req-a4712533-b20d-4ca6-9661-9eef8e722f85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.430 243708 DEBUG nova.compute.manager [req-2e3da192-c563-413a-aca2-2a94c424a8f9 req-a4712533-b20d-4ca6-9661-9eef8e722f85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Processing event network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.431 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.434 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.436 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765598916.4347467, 143ff7a5-b045-4330-945a-cab9a1074156 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.436 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] VM Resumed (Lifecycle Event)
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.445 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.449 243708 INFO nova.virt.libvirt.driver [-] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Instance spawned successfully.
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.449 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.478 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.484 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.510 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.522 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.522 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.523 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.523 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.523 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.523 243708 DEBUG nova.virt.libvirt.driver [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.782 243708 INFO nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Took 13.49 seconds to spawn the instance on the hypervisor.
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.783 243708 DEBUG nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.855 249645 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.856 249645 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:36.856 249645 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.916 243708 INFO nova.compute.manager [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Took 14.53 seconds to build instance.
Dec 13 04:08:36 compute-0 nova_compute[243704]: 2025-12-13 04:08:36.935 243708 DEBUG oslo_concurrency.lockutils [None req-3fe320ed-2101-4ba2-8fb2-61256a00b418 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Dec 13 04:08:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Dec 13 04:08:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Dec 13 04:08:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Dec 13 04:08:37 compute-0 nova_compute[243704]: 2025-12-13 04:08:37.566 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.573 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb5f026-1f54-4d89-8f87-54af412773e0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.575 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0ec29d2-61 in ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.577 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0ec29d2-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.578 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d8637221-be28-45dc-923c-d3f34d4f4064]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.582 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7bbc6490-cea0-48a3-b7fe-41185e2fbd1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.626 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c23b93-f209-4129-8f23-dec8aa97b3dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.646 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bd97c650-2d2c-419c-a1ec-0de5d19802c8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:37.649 154842 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpe2_kny0g/privsep.sock']
Dec 13 04:08:37 compute-0 ceph-mon[75071]: osdmap e147: 3 total, 3 up, 3 in
Dec 13 04:08:37 compute-0 ceph-mon[75071]: pgmap v885: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 59 op/s
Dec 13 04:08:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:38.471 154842 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec 13 04:08:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:38.473 154842 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpe2_kny0g/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec 13 04:08:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:38.330 249665 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 04:08:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:38.339 249665 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 04:08:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:38.341 249665 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 13 04:08:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:38.342 249665 INFO oslo.privsep.daemon [-] privsep daemon running as pid 249665
Dec 13 04:08:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:38.476 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[12275e15-8ef4-4545-a941-c54ad81fe423]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:38 compute-0 nova_compute[243704]: 2025-12-13 04:08:38.524 243708 DEBUG nova.compute.manager [req-d93de5ea-ca7d-4169-9b19-4abe505ad4d8 req-0a73b431-9ddb-4527-8101-aec6fa98bef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:08:38 compute-0 nova_compute[243704]: 2025-12-13 04:08:38.525 243708 DEBUG oslo_concurrency.lockutils [req-d93de5ea-ca7d-4169-9b19-4abe505ad4d8 req-0a73b431-9ddb-4527-8101-aec6fa98bef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:38 compute-0 nova_compute[243704]: 2025-12-13 04:08:38.525 243708 DEBUG oslo_concurrency.lockutils [req-d93de5ea-ca7d-4169-9b19-4abe505ad4d8 req-0a73b431-9ddb-4527-8101-aec6fa98bef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:38 compute-0 nova_compute[243704]: 2025-12-13 04:08:38.525 243708 DEBUG oslo_concurrency.lockutils [req-d93de5ea-ca7d-4169-9b19-4abe505ad4d8 req-0a73b431-9ddb-4527-8101-aec6fa98bef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:38 compute-0 nova_compute[243704]: 2025-12-13 04:08:38.526 243708 DEBUG nova.compute.manager [req-d93de5ea-ca7d-4169-9b19-4abe505ad4d8 req-0a73b431-9ddb-4527-8101-aec6fa98bef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] No waiting events found dispatching network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:08:38 compute-0 nova_compute[243704]: 2025-12-13 04:08:38.526 243708 WARNING nova.compute.manager [req-d93de5ea-ca7d-4169-9b19-4abe505ad4d8 req-0a73b431-9ddb-4527-8101-aec6fa98bef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received unexpected event network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace for instance with vm_state active and task_state None.
Dec 13 04:08:38 compute-0 nova_compute[243704]: 2025-12-13 04:08:38.952 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9533] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9538] device (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <warn>  [1765598918.9539] device (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9545] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9550] device (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <warn>  [1765598918.9551] device (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9563] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9572] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9580] device (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 04:08:38 compute-0 NetworkManager[48899]: <info>  [1765598918.9585] device (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec 13 04:08:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Dec 13 04:08:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Dec 13 04:08:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Dec 13 04:08:39 compute-0 nova_compute[243704]: 2025-12-13 04:08:39.051 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:39 compute-0 nova_compute[243704]: 2025-12-13 04:08:39.062 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:08:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3295357508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.159 249665 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.159 249665 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.159 249665 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:08:39 compute-0 nova_compute[243704]: 2025-12-13 04:08:39.294 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.294 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:08:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 41 KiB/s wr, 148 op/s
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.800 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8c0fda-52cc-4b10-80b8-6416bfc00f43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 NetworkManager[48899]: <info>  [1765598919.8186] manager: (tapd0ec29d2-60): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.817 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[04c08a9a-88b1-4b7f-a7ea-5cf7c299e6af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 systemd-udevd[249678]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.850 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[ed371ae6-6a46-41ad-87ce-28b8112371ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.853 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[a2cb2f7b-c70d-4534-93ff-015419740b14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 NetworkManager[48899]: <info>  [1765598919.8778] device (tapd0ec29d2-60): carrier: link connected
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.884 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c739d178-cfca-49b1-bdec-5805affa1129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.902 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1ac12d-344f-4d5b-bd43-7b7b9154db8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0ec29d2-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:3b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365855, 'reachable_time': 32978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249696, 'error': None, 'target': 'ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.919 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[87514e7d-5b10-404a-82ef-747b5d87e530]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3a:3b47'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 365855, 'tstamp': 365855}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249697, 'error': None, 'target': 'ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.935 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7a0a5471-b1ce-4bdd-83ad-443d64231f01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0ec29d2-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3a:3b:47'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365855, 'reachable_time': 32978, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 249698, 'error': None, 'target': 'ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:39 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:39.964 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[70c7bc59-81cd-4e0a-ab21-0fdc1a3ab57f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Dec 13 04:08:40 compute-0 ceph-mon[75071]: osdmap e148: 3 total, 3 up, 3 in
Dec 13 04:08:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3295357508' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:40 compute-0 ceph-mon[75071]: pgmap v887: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 41 KiB/s wr, 148 op/s
Dec 13 04:08:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.026 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ac3ebb0b-1b6f-4d29-a5b9-d6d43ebe601e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.028 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0ec29d2-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.029 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.029 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0ec29d2-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Dec 13 04:08:40 compute-0 kernel: tapd0ec29d2-60: entered promiscuous mode
Dec 13 04:08:40 compute-0 NetworkManager[48899]: <info>  [1765598920.0327] manager: (tapd0ec29d2-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Dec 13 04:08:40 compute-0 nova_compute[243704]: 2025-12-13 04:08:40.033 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.042 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0ec29d2-60, col_values=(('external_ids', {'iface-id': '71f86b81-c0f0-4e39-af1b-227d7cb20de9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:40 compute-0 ovn_controller[145204]: 2025-12-13T04:08:40Z|00031|binding|INFO|Releasing lport 71f86b81-c0f0-4e39-af1b-227d7cb20de9 from this chassis (sb_readonly=0)
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.054 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0ec29d2-698f-48c0-8337-ad9b2cdc9d73.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0ec29d2-698f-48c0-8337-ad9b2cdc9d73.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:08:40 compute-0 nova_compute[243704]: 2025-12-13 04:08:40.056 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.057 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[35c84061-a325-45f4-a30e-85e21d656de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.059 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/d0ec29d2-698f-48c0-8337-ad9b2cdc9d73.pid.haproxy
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID d0ec29d2-698f-48c0-8337-ad9b2cdc9d73
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.061 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'env', 'PROCESS_TAG=haproxy-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0ec29d2-698f-48c0-8337-ad9b2cdc9d73.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:08:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e149 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Dec 13 04:08:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Dec 13 04:08:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Dec 13 04:08:40 compute-0 podman[249730]: 2025-12-13 04:08:40.427012132 +0000 UTC m=+0.027154069 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:08:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:08:40
Dec 13 04:08:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:08:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:08:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'volumes', '.mgr', 'images', 'vms', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.meta']
Dec 13 04:08:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:08:40 compute-0 podman[249730]: 2025-12-13 04:08:40.590476603 +0000 UTC m=+0.190618520 container create 442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:08:40 compute-0 nova_compute[243704]: 2025-12-13 04:08:40.609 243708 DEBUG nova.compute.manager [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-changed-1cc0804c-1371-48e5-a964-354f99f7eace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:08:40 compute-0 nova_compute[243704]: 2025-12-13 04:08:40.610 243708 DEBUG nova.compute.manager [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Refreshing instance network info cache due to event network-changed-1cc0804c-1371-48e5-a964-354f99f7eace. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:08:40 compute-0 nova_compute[243704]: 2025-12-13 04:08:40.610 243708 DEBUG oslo_concurrency.lockutils [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:08:40 compute-0 nova_compute[243704]: 2025-12-13 04:08:40.610 243708 DEBUG oslo_concurrency.lockutils [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:08:40 compute-0 nova_compute[243704]: 2025-12-13 04:08:40.611 243708 DEBUG nova.network.neutron [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Refreshing network info cache for port 1cc0804c-1371-48e5-a964-354f99f7eace _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:08:40 compute-0 systemd[1]: Started libpod-conmon-442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f.scope.
Dec 13 04:08:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:08:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f908f5f281e119406a654c1f752cc85e1220b327500a28c04af8baaaf7c7272f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:40 compute-0 podman[249730]: 2025-12-13 04:08:40.713942723 +0000 UTC m=+0.314084670 container init 442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:08:40 compute-0 podman[249730]: 2025-12-13 04:08:40.721493038 +0000 UTC m=+0.321634955 container start 442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 04:08:40 compute-0 neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73[249746]: [NOTICE]   (249750) : New worker (249752) forked
Dec 13 04:08:40 compute-0 neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73[249746]: [NOTICE]   (249750) : Loading success.
Dec 13 04:08:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:40.790 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:08:41 compute-0 ceph-mon[75071]: osdmap e149: 3 total, 3 up, 3 in
Dec 13 04:08:41 compute-0 ceph-mon[75071]: osdmap e150: 3 total, 3 up, 3 in
Dec 13 04:08:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3096222134' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3096222134' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 46 KiB/s wr, 295 op/s
Dec 13 04:08:41 compute-0 nova_compute[243704]: 2025-12-13 04:08:41.461 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:41 compute-0 nova_compute[243704]: 2025-12-13 04:08:41.953 243708 DEBUG nova.network.neutron [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updated VIF entry in instance network info cache for port 1cc0804c-1371-48e5-a964-354f99f7eace. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:08:41 compute-0 nova_compute[243704]: 2025-12-13 04:08:41.953 243708 DEBUG nova.network.neutron [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updating instance_info_cache with network_info: [{"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:08:41 compute-0 nova_compute[243704]: 2025-12-13 04:08:41.987 243708 DEBUG oslo_concurrency.lockutils [req-a540b389-e1e0-4ff6-9e5b-c90793a83921 req-a37d1a28-d56e-4ec4-83eb-deda4aa32031 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:08:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3096222134' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3096222134' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:42 compute-0 ceph-mon[75071]: pgmap v890: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 46 KiB/s wr, 295 op/s
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:08:42 compute-0 nova_compute[243704]: 2025-12-13 04:08:42.570 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:08:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:08:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Dec 13 04:08:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Dec 13 04:08:43 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Dec 13 04:08:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.2 KiB/s wr, 146 op/s
Dec 13 04:08:43 compute-0 podman[249761]: 2025-12-13 04:08:43.961183014 +0000 UTC m=+0.107601350 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:08:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3254212761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3254212761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Dec 13 04:08:44 compute-0 ceph-mon[75071]: osdmap e151: 3 total, 3 up, 3 in
Dec 13 04:08:44 compute-0 ceph-mon[75071]: pgmap v892: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.2 KiB/s wr, 146 op/s
Dec 13 04:08:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3254212761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3254212761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Dec 13 04:08:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Dec 13 04:08:45 compute-0 ceph-mon[75071]: osdmap e152: 3 total, 3 up, 3 in
Dec 13 04:08:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 2.7 KiB/s wr, 156 op/s
Dec 13 04:08:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3889208811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3889208811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Dec 13 04:08:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Dec 13 04:08:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Dec 13 04:08:46 compute-0 ceph-mon[75071]: pgmap v894: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 121 KiB/s rd, 2.7 KiB/s wr, 156 op/s
Dec 13 04:08:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3889208811' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3889208811' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:46 compute-0 nova_compute[243704]: 2025-12-13 04:08:46.462 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2718916310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2718916310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:47 compute-0 ceph-mon[75071]: osdmap e153: 3 total, 3 up, 3 in
Dec 13 04:08:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2718916310' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2718916310' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 2.3 KiB/s wr, 137 op/s
Dec 13 04:08:47 compute-0 nova_compute[243704]: 2025-12-13 04:08:47.573 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:48 compute-0 ceph-mon[75071]: pgmap v896: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 2.3 KiB/s wr, 137 op/s
Dec 13 04:08:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:08:48.793 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:08:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 4.9 KiB/s wr, 177 op/s
Dec 13 04:08:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:08:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3450878444' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 13 04:08:50 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 13 04:08:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Dec 13 04:08:50 compute-0 ceph-mon[75071]: pgmap v897: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 135 KiB/s rd, 4.9 KiB/s wr, 177 op/s
Dec 13 04:08:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3450878444' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:08:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Dec 13 04:08:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Dec 13 04:08:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e154 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Dec 13 04:08:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Dec 13 04:08:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Dec 13 04:08:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 KiB/s wr, 44 op/s
Dec 13 04:08:51 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Dec 13 04:08:51 compute-0 nova_compute[243704]: 2025-12-13 04:08:51.508 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Dec 13 04:08:51 compute-0 ceph-mon[75071]: osdmap e154: 3 total, 3 up, 3 in
Dec 13 04:08:51 compute-0 ceph-mon[75071]: osdmap e155: 3 total, 3 up, 3 in
Dec 13 04:08:51 compute-0 ovn_controller[145204]: 2025-12-13T04:08:51Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:f9:75 10.100.0.6
Dec 13 04:08:52 compute-0 ovn_controller[145204]: 2025-12-13T04:08:52Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:f9:75 10.100.0.6
Dec 13 04:08:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Dec 13 04:08:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.000348906812936741 of space, bias 1.0, pg target 0.1046720438810223 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 5.235909220218628e-06 of space, bias 1.0, pg target 0.0015707727660655884 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.1555313435260335e-07 of space, bias 1.0, pg target 3.466594030578101e-05 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665893962686475 of space, bias 1.0, pg target 0.1997681888059425 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.465706819977822e-06 of space, bias 4.0, pg target 0.0017588481839733866 quantized to 16 (current 16)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:08:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:08:52 compute-0 nova_compute[243704]: 2025-12-13 04:08:52.611 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:52 compute-0 ceph-mon[75071]: pgmap v900: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 KiB/s wr, 44 op/s
Dec 13 04:08:52 compute-0 ceph-mon[75071]: osdmap e156: 3 total, 3 up, 3 in
Dec 13 04:08:53 compute-0 sudo[249791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:08:53 compute-0 sudo[249791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:08:53 compute-0 sudo[249791]: pam_unix(sudo:session): session closed for user root
Dec 13 04:08:53 compute-0 sudo[249816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:08:53 compute-0 sudo[249816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:08:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 KiB/s wr, 44 op/s
Dec 13 04:08:53 compute-0 sudo[249816]: pam_unix(sudo:session): session closed for user root
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:08:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:08:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:08:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:08:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:08:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:08:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:08:53 compute-0 sudo[249873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:08:53 compute-0 sudo[249873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:08:53 compute-0 sudo[249873]: pam_unix(sudo:session): session closed for user root
Dec 13 04:08:53 compute-0 sudo[249898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:08:53 compute-0 sudo[249898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Dec 13 04:08:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Dec 13 04:08:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Dec 13 04:08:53 compute-0 ceph-mon[75071]: pgmap v902: 305 pgs: 305 active+clean; 88 MiB data, 200 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.8 KiB/s wr, 44 op/s
Dec 13 04:08:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:08:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:08:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:08:54 compute-0 podman[249934]: 2025-12-13 04:08:54.218835701 +0000 UTC m=+0.021710552 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:08:54 compute-0 podman[249934]: 2025-12-13 04:08:54.325654459 +0000 UTC m=+0.128529300 container create 79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:08:54 compute-0 systemd[1]: Started libpod-conmon-79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6.scope.
Dec 13 04:08:54 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:08:54 compute-0 podman[249934]: 2025-12-13 04:08:54.431293474 +0000 UTC m=+0.234168335 container init 79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_chandrasekhar, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:08:54 compute-0 podman[249934]: 2025-12-13 04:08:54.441348628 +0000 UTC m=+0.244223459 container start 79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:08:54 compute-0 stupefied_chandrasekhar[249950]: 167 167
Dec 13 04:08:54 compute-0 systemd[1]: libpod-79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6.scope: Deactivated successfully.
Dec 13 04:08:54 compute-0 podman[249934]: 2025-12-13 04:08:54.507075207 +0000 UTC m=+0.309950038 container attach 79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_chandrasekhar, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:08:54 compute-0 podman[249934]: 2025-12-13 04:08:54.508384303 +0000 UTC m=+0.311259134 container died 79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_chandrasekhar, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-0bf8debda9598874c694f72121ab6d84e4bbb8e4492975e0b10eb2efc89c0e09-merged.mount: Deactivated successfully.
Dec 13 04:08:54 compute-0 podman[249934]: 2025-12-13 04:08:54.655646011 +0000 UTC m=+0.458520872 container remove 79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 04:08:54 compute-0 systemd[1]: libpod-conmon-79c7ba112cb77f583afc17b5808b5b4d9cf2567314a03efbc688d7c0c68687b6.scope: Deactivated successfully.
Dec 13 04:08:54 compute-0 podman[249976]: 2025-12-13 04:08:54.824526028 +0000 UTC m=+0.027940471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:08:54 compute-0 podman[249976]: 2025-12-13 04:08:54.919329299 +0000 UTC m=+0.122743722 container create dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_austin, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:08:54 compute-0 ceph-mon[75071]: osdmap e157: 3 total, 3 up, 3 in
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.937146) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598934937212, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2232, "num_deletes": 254, "total_data_size": 3587259, "memory_usage": 3656736, "flush_reason": "Manual Compaction"}
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598934959243, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3515364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16425, "largest_seqno": 18656, "table_properties": {"data_size": 3504971, "index_size": 6759, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 20910, "raw_average_key_size": 20, "raw_value_size": 3484216, "raw_average_value_size": 3415, "num_data_blocks": 300, "num_entries": 1020, "num_filter_entries": 1020, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765598735, "oldest_key_time": 1765598735, "file_creation_time": 1765598934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 22312 microseconds, and 8874 cpu microseconds.
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.959455) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3515364 bytes OK
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.959515) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.961750) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.961768) EVENT_LOG_v1 {"time_micros": 1765598934961763, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.961787) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3577789, prev total WAL file size 3577789, number of live WAL files 2.
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.963101) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3432KB)], [38(7847KB)]
Dec 13 04:08:54 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598934963206, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11551448, "oldest_snapshot_seqno": -1}
Dec 13 04:08:54 compute-0 systemd[1]: Started libpod-conmon-dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706.scope.
Dec 13 04:08:55 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72aa5ac373ab0fb69d6ace180c0e80abcdec178046aa1a5a7907acb4a9d13a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72aa5ac373ab0fb69d6ace180c0e80abcdec178046aa1a5a7907acb4a9d13a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72aa5ac373ab0fb69d6ace180c0e80abcdec178046aa1a5a7907acb4a9d13a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72aa5ac373ab0fb69d6ace180c0e80abcdec178046aa1a5a7907acb4a9d13a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b72aa5ac373ab0fb69d6ace180c0e80abcdec178046aa1a5a7907acb4a9d13a3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4561 keys, 9749068 bytes, temperature: kUnknown
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598935106255, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9749068, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9714281, "index_size": 22261, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11461, "raw_key_size": 110529, "raw_average_key_size": 24, "raw_value_size": 9627685, "raw_average_value_size": 2110, "num_data_blocks": 940, "num_entries": 4561, "num_filter_entries": 4561, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765598934, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.106544) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9749068 bytes
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.153064) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.7 rd, 68.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 7.7 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5082, records dropped: 521 output_compression: NoCompression
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.153105) EVENT_LOG_v1 {"time_micros": 1765598935153090, "job": 18, "event": "compaction_finished", "compaction_time_micros": 143161, "compaction_time_cpu_micros": 24227, "output_level": 6, "num_output_files": 1, "total_output_size": 9749068, "num_input_records": 5082, "num_output_records": 4561, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598935153895, "job": 18, "event": "table_file_deletion", "file_number": 40}
Dec 13 04:08:55 compute-0 podman[249976]: 2025-12-13 04:08:55.155447656 +0000 UTC m=+0.358862099 container init dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598935155538, "job": 18, "event": "table_file_deletion", "file_number": 38}
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:54.962868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.155935) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.155944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.155947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.155950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:08:55 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:08:55.155953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:08:55 compute-0 podman[249976]: 2025-12-13 04:08:55.16220276 +0000 UTC m=+0.365617173 container start dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_austin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:08:55 compute-0 podman[249976]: 2025-12-13 04:08:55.176302433 +0000 UTC m=+0.379716856 container attach dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_austin, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:08:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 843 KiB/s rd, 5.2 MiB/s wr, 210 op/s
Dec 13 04:08:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:08:55 compute-0 mystifying_austin[249992]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:08:55 compute-0 mystifying_austin[249992]: --> All data devices are unavailable
Dec 13 04:08:55 compute-0 podman[249976]: 2025-12-13 04:08:55.623059725 +0000 UTC m=+0.826474138 container died dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_austin, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:08:55 compute-0 systemd[1]: libpod-dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706.scope: Deactivated successfully.
Dec 13 04:08:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Dec 13 04:08:56 compute-0 ceph-mon[75071]: pgmap v904: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 843 KiB/s rd, 5.2 MiB/s wr, 210 op/s
Dec 13 04:08:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Dec 13 04:08:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Dec 13 04:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-b72aa5ac373ab0fb69d6ace180c0e80abcdec178046aa1a5a7907acb4a9d13a3-merged.mount: Deactivated successfully.
Dec 13 04:08:56 compute-0 podman[249976]: 2025-12-13 04:08:56.302510339 +0000 UTC m=+1.505924752 container remove dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:08:56 compute-0 sudo[249898]: pam_unix(sudo:session): session closed for user root
Dec 13 04:08:56 compute-0 podman[250024]: 2025-12-13 04:08:56.36495989 +0000 UTC m=+0.074895630 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:08:56 compute-0 systemd[1]: libpod-conmon-dd224ba907cce74d413045f75ad6f42114cef327488ea195099a7ad12bb7a706.scope: Deactivated successfully.
Dec 13 04:08:56 compute-0 sudo[250042]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:08:56 compute-0 sudo[250042]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:08:56 compute-0 sudo[250042]: pam_unix(sudo:session): session closed for user root
Dec 13 04:08:56 compute-0 sudo[250067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:08:56 compute-0 sudo[250067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:08:56 compute-0 nova_compute[243704]: 2025-12-13 04:08:56.509 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:56 compute-0 podman[250104]: 2025-12-13 04:08:56.830270196 +0000 UTC m=+0.108034793 container create a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_varahamihira, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:08:56 compute-0 podman[250104]: 2025-12-13 04:08:56.747571754 +0000 UTC m=+0.025336411 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:08:56 compute-0 systemd[1]: Started libpod-conmon-a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0.scope.
Dec 13 04:08:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:08:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 686 KiB/s rd, 4.3 MiB/s wr, 171 op/s
Dec 13 04:08:57 compute-0 ceph-mon[75071]: osdmap e158: 3 total, 3 up, 3 in
Dec 13 04:08:57 compute-0 podman[250104]: 2025-12-13 04:08:57.608657504 +0000 UTC m=+0.886422201 container init a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_varahamihira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:08:57 compute-0 nova_compute[243704]: 2025-12-13 04:08:57.657 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:08:57 compute-0 podman[250104]: 2025-12-13 04:08:57.660919936 +0000 UTC m=+0.938684533 container start a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_varahamihira, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:08:57 compute-0 admiring_varahamihira[250120]: 167 167
Dec 13 04:08:57 compute-0 systemd[1]: libpod-a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0.scope: Deactivated successfully.
Dec 13 04:08:57 compute-0 podman[250104]: 2025-12-13 04:08:57.934627606 +0000 UTC m=+1.212392223 container attach a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_varahamihira, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Dec 13 04:08:57 compute-0 podman[250104]: 2025-12-13 04:08:57.935003226 +0000 UTC m=+1.212767823 container died a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:08:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-a83e2b0faefe48aa2702daed35ebe10cbe4e8071787ab567dad30aa0d47a1841-merged.mount: Deactivated successfully.
Dec 13 04:08:58 compute-0 ceph-mon[75071]: pgmap v906: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 686 KiB/s rd, 4.3 MiB/s wr, 171 op/s
Dec 13 04:08:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 612 KiB/s rd, 3.6 MiB/s wr, 191 op/s
Dec 13 04:08:59 compute-0 podman[250104]: 2025-12-13 04:08:59.566476326 +0000 UTC m=+2.844240973 container remove a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:08:59 compute-0 systemd[1]: libpod-conmon-a8466492343fc4df53535d436f554735c6d89df19d9b2d38f0b334f6760176b0.scope: Deactivated successfully.
Dec 13 04:08:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:08:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4246989391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:08:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4246989391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:59 compute-0 podman[250144]: 2025-12-13 04:08:59.775822395 +0000 UTC m=+0.039713282 container create 114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:08:59 compute-0 systemd[1]: Started libpod-conmon-114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6.scope.
Dec 13 04:08:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f35ca31db047dbf3fcf6bf29c37d9e4a2dd9e8d059d0412acc93e1d70ef6860/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f35ca31db047dbf3fcf6bf29c37d9e4a2dd9e8d059d0412acc93e1d70ef6860/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f35ca31db047dbf3fcf6bf29c37d9e4a2dd9e8d059d0412acc93e1d70ef6860/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f35ca31db047dbf3fcf6bf29c37d9e4a2dd9e8d059d0412acc93e1d70ef6860/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:08:59 compute-0 podman[250144]: 2025-12-13 04:08:59.759418878 +0000 UTC m=+0.023309785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:08:59 compute-0 podman[250144]: 2025-12-13 04:08:59.854839245 +0000 UTC m=+0.118730152 container init 114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:08:59 compute-0 ceph-mon[75071]: pgmap v907: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 612 KiB/s rd, 3.6 MiB/s wr, 191 op/s
Dec 13 04:08:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4246989391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:08:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4246989391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:08:59 compute-0 podman[250144]: 2025-12-13 04:08:59.861781195 +0000 UTC m=+0.125672082 container start 114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:08:59 compute-0 podman[250144]: 2025-12-13 04:08:59.866887944 +0000 UTC m=+0.130778831 container attach 114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:09:00 compute-0 happy_brown[250160]: {
Dec 13 04:09:00 compute-0 happy_brown[250160]:     "0": [
Dec 13 04:09:00 compute-0 happy_brown[250160]:         {
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "devices": [
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "/dev/loop3"
Dec 13 04:09:00 compute-0 happy_brown[250160]:             ],
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_name": "ceph_lv0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_size": "21470642176",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "name": "ceph_lv0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "tags": {
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cluster_name": "ceph",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.crush_device_class": "",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.encrypted": "0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.objectstore": "bluestore",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osd_id": "0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.type": "block",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.vdo": "0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.with_tpm": "0"
Dec 13 04:09:00 compute-0 happy_brown[250160]:             },
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "type": "block",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "vg_name": "ceph_vg0"
Dec 13 04:09:00 compute-0 happy_brown[250160]:         }
Dec 13 04:09:00 compute-0 happy_brown[250160]:     ],
Dec 13 04:09:00 compute-0 happy_brown[250160]:     "1": [
Dec 13 04:09:00 compute-0 happy_brown[250160]:         {
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "devices": [
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "/dev/loop4"
Dec 13 04:09:00 compute-0 happy_brown[250160]:             ],
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_name": "ceph_lv1",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_size": "21470642176",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "name": "ceph_lv1",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "tags": {
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cluster_name": "ceph",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.crush_device_class": "",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.encrypted": "0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.objectstore": "bluestore",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osd_id": "1",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.type": "block",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.vdo": "0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.with_tpm": "0"
Dec 13 04:09:00 compute-0 happy_brown[250160]:             },
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "type": "block",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "vg_name": "ceph_vg1"
Dec 13 04:09:00 compute-0 happy_brown[250160]:         }
Dec 13 04:09:00 compute-0 happy_brown[250160]:     ],
Dec 13 04:09:00 compute-0 happy_brown[250160]:     "2": [
Dec 13 04:09:00 compute-0 happy_brown[250160]:         {
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "devices": [
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "/dev/loop5"
Dec 13 04:09:00 compute-0 happy_brown[250160]:             ],
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_name": "ceph_lv2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_size": "21470642176",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "name": "ceph_lv2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "tags": {
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.cluster_name": "ceph",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.crush_device_class": "",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.encrypted": "0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.objectstore": "bluestore",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osd_id": "2",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.type": "block",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.vdo": "0",
Dec 13 04:09:00 compute-0 happy_brown[250160]:                 "ceph.with_tpm": "0"
Dec 13 04:09:00 compute-0 happy_brown[250160]:             },
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "type": "block",
Dec 13 04:09:00 compute-0 happy_brown[250160]:             "vg_name": "ceph_vg2"
Dec 13 04:09:00 compute-0 happy_brown[250160]:         }
Dec 13 04:09:00 compute-0 happy_brown[250160]:     ]
Dec 13 04:09:00 compute-0 happy_brown[250160]: }
Dec 13 04:09:00 compute-0 systemd[1]: libpod-114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6.scope: Deactivated successfully.
Dec 13 04:09:00 compute-0 podman[250144]: 2025-12-13 04:09:00.159834718 +0000 UTC m=+0.423725605 container died 114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5f35ca31db047dbf3fcf6bf29c37d9e4a2dd9e8d059d0412acc93e1d70ef6860-merged.mount: Deactivated successfully.
Dec 13 04:09:00 compute-0 podman[250144]: 2025-12-13 04:09:00.469485387 +0000 UTC m=+0.733376274 container remove 114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:09:00 compute-0 systemd[1]: libpod-conmon-114ee1a9c9cc18eef8db79257aa980ee26431cca153eba090d6817a31bf8acc6.scope: Deactivated successfully.
Dec 13 04:09:00 compute-0 sudo[250067]: pam_unix(sudo:session): session closed for user root
Dec 13 04:09:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Dec 13 04:09:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Dec 13 04:09:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Dec 13 04:09:00 compute-0 sudo[250180]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:09:00 compute-0 sudo[250180]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:09:00 compute-0 sudo[250180]: pam_unix(sudo:session): session closed for user root
Dec 13 04:09:00 compute-0 sudo[250205]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:09:00 compute-0 sudo[250205]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:09:00 compute-0 podman[250244]: 2025-12-13 04:09:00.880481444 +0000 UTC m=+0.039196448 container create a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chaum, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:09:00 compute-0 systemd[1]: Started libpod-conmon-a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a.scope.
Dec 13 04:09:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:09:00 compute-0 podman[250244]: 2025-12-13 04:09:00.947612032 +0000 UTC m=+0.106326936 container init a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chaum, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:09:00 compute-0 podman[250244]: 2025-12-13 04:09:00.954784407 +0000 UTC m=+0.113499311 container start a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chaum, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:09:00 compute-0 podman[250244]: 2025-12-13 04:09:00.958153429 +0000 UTC m=+0.116868323 container attach a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chaum, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:09:00 compute-0 podman[250244]: 2025-12-13 04:09:00.862360221 +0000 UTC m=+0.021075145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:00 compute-0 flamboyant_chaum[250260]: 167 167
Dec 13 04:09:00 compute-0 systemd[1]: libpod-a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a.scope: Deactivated successfully.
Dec 13 04:09:00 compute-0 podman[250244]: 2025-12-13 04:09:00.960847412 +0000 UTC m=+0.119562316 container died a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chaum, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:09:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-a5eab577331dab4bb9b177283fdc126b6164d1e40d59adc114c7763c115d117d-merged.mount: Deactivated successfully.
Dec 13 04:09:00 compute-0 podman[250244]: 2025-12-13 04:09:00.999356081 +0000 UTC m=+0.158070985 container remove a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_chaum, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:09:01 compute-0 systemd[1]: libpod-conmon-a2f2403d754ed061b9434120a71c2c8a725af862949b8db0519364ab27a4553a.scope: Deactivated successfully.
Dec 13 04:09:01 compute-0 podman[250285]: 2025-12-13 04:09:01.151920043 +0000 UTC m=+0.035722083 container create 7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_johnson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:09:01 compute-0 systemd[1]: Started libpod-conmon-7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89.scope.
Dec 13 04:09:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2485fd50803a9305860537ef8fcc3c59e595ce0a7a6be200a13b1717ab29ecbc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2485fd50803a9305860537ef8fcc3c59e595ce0a7a6be200a13b1717ab29ecbc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2485fd50803a9305860537ef8fcc3c59e595ce0a7a6be200a13b1717ab29ecbc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2485fd50803a9305860537ef8fcc3c59e595ce0a7a6be200a13b1717ab29ecbc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:09:01 compute-0 podman[250285]: 2025-12-13 04:09:01.137299285 +0000 UTC m=+0.021101345 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:09:01 compute-0 podman[250285]: 2025-12-13 04:09:01.245948133 +0000 UTC m=+0.129750193 container init 7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:09:01 compute-0 podman[250285]: 2025-12-13 04:09:01.251478664 +0000 UTC m=+0.135280704 container start 7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_johnson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True)
Dec 13 04:09:01 compute-0 podman[250285]: 2025-12-13 04:09:01.254798174 +0000 UTC m=+0.138600234 container attach 7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_johnson, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:09:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 1.7 MiB/s wr, 73 op/s
Dec 13 04:09:01 compute-0 nova_compute[243704]: 2025-12-13 04:09:01.511 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:01 compute-0 ceph-mon[75071]: osdmap e159: 3 total, 3 up, 3 in
Dec 13 04:09:01 compute-0 lvm[250378]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:09:01 compute-0 lvm[250378]: VG ceph_vg0 finished
Dec 13 04:09:01 compute-0 lvm[250381]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:09:01 compute-0 lvm[250381]: VG ceph_vg1 finished
Dec 13 04:09:02 compute-0 lvm[250383]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:09:02 compute-0 lvm[250383]: VG ceph_vg2 finished
Dec 13 04:09:02 compute-0 adoring_johnson[250302]: {}
Dec 13 04:09:02 compute-0 systemd[1]: libpod-7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89.scope: Deactivated successfully.
Dec 13 04:09:02 compute-0 systemd[1]: libpod-7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89.scope: Consumed 1.358s CPU time.
Dec 13 04:09:02 compute-0 podman[250285]: 2025-12-13 04:09:02.124882908 +0000 UTC m=+1.008684968 container died 7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:09:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-2485fd50803a9305860537ef8fcc3c59e595ce0a7a6be200a13b1717ab29ecbc-merged.mount: Deactivated successfully.
Dec 13 04:09:02 compute-0 podman[250285]: 2025-12-13 04:09:02.199839508 +0000 UTC m=+1.083641538 container remove 7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_johnson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:09:02 compute-0 systemd[1]: libpod-conmon-7f9d002d72537ee3a5eea2efd906641fe38bf0709857ec01337e9a27f2b0db89.scope: Deactivated successfully.
Dec 13 04:09:02 compute-0 sudo[250205]: pam_unix(sudo:session): session closed for user root
Dec 13 04:09:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:09:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:09:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:09:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:09:02 compute-0 sudo[250397]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:09:02 compute-0 sudo[250397]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:09:02 compute-0 sudo[250397]: pam_unix(sudo:session): session closed for user root
Dec 13 04:09:02 compute-0 ceph-mon[75071]: pgmap v909: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 145 KiB/s rd, 1.7 MiB/s wr, 73 op/s
Dec 13 04:09:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:09:02 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:09:02 compute-0 nova_compute[243704]: 2025-12-13 04:09:02.660 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 20 KiB/s wr, 45 op/s
Dec 13 04:09:04 compute-0 ceph-mon[75071]: pgmap v910: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 20 KiB/s wr, 45 op/s
Dec 13 04:09:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 18 KiB/s wr, 41 op/s
Dec 13 04:09:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4075745370' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4075745370' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e159 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4075745370' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4075745370' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:05 compute-0 podman[250422]: 2025-12-13 04:09:05.905849749 +0000 UTC m=+0.053137738 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:09:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4264484692' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4264484692' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:06 compute-0 nova_compute[243704]: 2025-12-13 04:09:06.514 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Dec 13 04:09:06 compute-0 ceph-mon[75071]: pgmap v911: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 18 KiB/s wr, 41 op/s
Dec 13 04:09:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4264484692' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4264484692' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Dec 13 04:09:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Dec 13 04:09:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 127 B/s wr, 4 op/s
Dec 13 04:09:07 compute-0 ceph-mon[75071]: osdmap e160: 3 total, 3 up, 3 in
Dec 13 04:09:07 compute-0 nova_compute[243704]: 2025-12-13 04:09:07.663 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:07 compute-0 nova_compute[243704]: 2025-12-13 04:09:07.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:07 compute-0 nova_compute[243704]: 2025-12-13 04:09:07.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:09:07 compute-0 nova_compute[243704]: 2025-12-13 04:09:07.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:09:08 compute-0 nova_compute[243704]: 2025-12-13 04:09:08.355 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:09:08 compute-0 nova_compute[243704]: 2025-12-13 04:09:08.356 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:09:08 compute-0 nova_compute[243704]: 2025-12-13 04:09:08.356 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:09:08 compute-0 nova_compute[243704]: 2025-12-13 04:09:08.356 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 143ff7a5-b045-4330-945a-cab9a1074156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:09:08 compute-0 ceph-mon[75071]: pgmap v913: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 127 B/s wr, 4 op/s
Dec 13 04:09:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 13 04:09:09 compute-0 ceph-mon[75071]: pgmap v914: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 KiB/s wr, 30 op/s
Dec 13 04:09:10 compute-0 ovn_controller[145204]: 2025-12-13T04:09:10Z|00032|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.507 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updating instance_info_cache with network_info: [{"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.524 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-143ff7a5-b045-4330-945a-cab9a1074156" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.525 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.525 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.526 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.526 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.526 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.527 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.527 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.527 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Dec 13 04:09:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Dec 13 04:09:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.556 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.557 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.557 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.557 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:09:10 compute-0 nova_compute[243704]: 2025-12-13 04:09:10.558 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/181683796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.161 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.235 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.236 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000001 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:09:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.397 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.398 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4618MB free_disk=59.94271211512387GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.398 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.399 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.476 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 143ff7a5-b045-4330-945a-cab9a1074156 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.476 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.476 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.515 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.535 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Dec 13 04:09:11 compute-0 ceph-mon[75071]: osdmap e161: 3 total, 3 up, 3 in
Dec 13 04:09:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/181683796' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.606 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.607 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.626 243708 DEBUG nova.objects.instance [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lazy-loading 'flavor' on Instance uuid 143ff7a5-b045-4330-945a-cab9a1074156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.661 243708 INFO nova.virt.libvirt.driver [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Ignoring supplied device name: /dev/vdb
Dec 13 04:09:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.703 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.920 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.920 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:11 compute-0 nova_compute[243704]: 2025-12-13 04:09:11.921 243708 INFO nova.compute.manager [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Attaching volume eaf4ef51-7928-40d8-8afc-17d213082723 to /dev/vdb
Dec 13 04:09:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3143863979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.110 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.116 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.141 243708 ERROR nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [req-9a07fef1-f3ef-471d-b1ca-094ebf882e6b] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 36c11063-1199-4cbe-b01b-7185aae56a2a.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-9a07fef1-f3ef-471d-b1ca-094ebf882e6b"}]}
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.160 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing inventories for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.163 243708 DEBUG os_brick.utils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.165 243708 INFO oslo.privsep.daemon [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'os_brick.privileged.default', '--privsep_sock_path', '/tmp/tmplemu9_h4/privsep.sock']
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.188 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating ProviderTree inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.189 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.208 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing aggregate associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.226 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing trait associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.265 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:12 compute-0 ceph-mon[75071]: pgmap v916: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Dec 13 04:09:12 compute-0 ceph-mon[75071]: osdmap e162: 3 total, 3 up, 3 in
Dec 13 04:09:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3143863979' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.666 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2087613302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.812 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.818 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.855 243708 INFO oslo.privsep.daemon [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Spawned new privsep daemon via rootwrap
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.721 250512 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.725 250512 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.727 250512 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.727 250512 INFO oslo.privsep.daemon [-] privsep daemon running as pid 250512
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.858 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[c1eff31a-7a51-4f20-b27a-8d0b99593ff2]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.889 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updated inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.889 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.889 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.958 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.962 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.962 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.564s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.969 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.969 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[f05862d8-db92-46d0-9392-ec7f858689b3]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.971 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.981 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.982 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[d429ef16-4647-4768-aa3c-d483a4542f85]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.984 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.994 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.994 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[5283125b-58b8-4c6b-881e-fd423e3d4372]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.996 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[cde4d569-e908-4d59-8322-5dc775441522]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:12 compute-0 nova_compute[243704]: 2025-12-13 04:09:12.997 243708 DEBUG oslo_concurrency.processutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.015 243708 DEBUG oslo_concurrency.processutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.018 243708 DEBUG os_brick.initiator.connectors.lightos [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.019 243708 DEBUG os_brick.initiator.connectors.lightos [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.020 243708 DEBUG os_brick.initiator.connectors.lightos [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.020 243708 DEBUG os_brick.utils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] <== get_connector_properties: return (855ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.020 243708 DEBUG nova.virt.block_device [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updating existing volume attachment record: 1025e2b4-29d3-4997-a2ac-b3b573466d95 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:09:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 39 op/s
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.314 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:13 compute-0 nova_compute[243704]: 2025-12-13 04:09:13.315 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:09:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3097672790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3097672790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:09:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3214212669' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:09:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2087613302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3097672790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3097672790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.829 243708 DEBUG os_brick.encryptors [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Using volume encryption metadata '{'encryption_key_id': '9a1a842c-948c-4f8d-8f4a-bbafc251168f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eaf4ef51-7928-40d8-8afc-17d213082723', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '143ff7a5-b045-4330-945a-cab9a1074156', 'attached_at': '', 'detached_at': '', 'volume_id': 'eaf4ef51-7928-40d8-8afc-17d213082723', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.831 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "cache_volume_driver" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.832 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "cache_volume_driver" acquired by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.832 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "cache_volume_driver" "released" by "nova.virt.libvirt.driver.LibvirtDriver._get_volume_driver.<locals>._cache_volume_driver" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.839 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.863 243708 DEBUG barbicanclient.v1.secrets [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.864 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.896 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.897 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.920 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.921 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:14 compute-0 podman[250523]: 2025-12-13 04:09:14.938371069 +0000 UTC m=+0.081597532 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.943 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.944 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.966 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.966 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.990 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:14 compute-0 nova_compute[243704]: 2025-12-13 04:09:14.990 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.011 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.012 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.031 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.032 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.052 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.053 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 ceph-mon[75071]: pgmap v918: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 39 op/s
Dec 13 04:09:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3214212669' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.072 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.073 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.102 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.102 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.136 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.136 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.4 KiB/s wr, 67 op/s
Dec 13 04:09:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.953 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.954 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.976 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:15 compute-0 nova_compute[243704]: 2025-12-13 04:09:15.977 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.034 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.035 243708 INFO barbicanclient.base [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Calculated Secrets uuid ref: secrets/9a1a842c-948c-4f8d-8f4a-bbafc251168f
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.057 243708 DEBUG barbicanclient.client [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.057 243708 DEBUG nova.virt.libvirt.host [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:09:16 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:09:16 compute-0 nova_compute[243704]:     <volume>eaf4ef51-7928-40d8-8afc-17d213082723</volume>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:09:16 compute-0 nova_compute[243704]: </secret>
Dec 13 04:09:16 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.070 243708 DEBUG nova.objects.instance [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lazy-loading 'flavor' on Instance uuid 143ff7a5-b045-4330-945a-cab9a1074156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:09:16 compute-0 ceph-mon[75071]: pgmap v919: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 4.4 KiB/s wr, 67 op/s
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.094 243708 DEBUG nova.virt.libvirt.driver [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Attempting to attach volume eaf4ef51-7928-40d8-8afc-17d213082723 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.096 243708 DEBUG nova.virt.libvirt.guest [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:09:16 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723">
Dec 13 04:09:16 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   </source>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:09:16 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   <serial>eaf4ef51-7928-40d8-8afc-17d213082723</serial>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:09:16 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="e3e9a3c6-2778-4ae7-ae6c-0a1504dadf10"/>
Dec 13 04:09:16 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:09:16 compute-0 nova_compute[243704]: </disk>
Dec 13 04:09:16 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:09:16 compute-0 nova_compute[243704]: 2025-12-13 04:09:16.518 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 KiB/s wr, 34 op/s
Dec 13 04:09:17 compute-0 nova_compute[243704]: 2025-12-13 04:09:17.670 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:18 compute-0 ceph-mon[75071]: pgmap v920: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.6 KiB/s wr, 34 op/s
Dec 13 04:09:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4114725645' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4114725645' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:18 compute-0 nova_compute[243704]: 2025-12-13 04:09:18.717 243708 DEBUG nova.virt.libvirt.driver [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:09:18 compute-0 nova_compute[243704]: 2025-12-13 04:09:18.717 243708 DEBUG nova.virt.libvirt.driver [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:09:18 compute-0 nova_compute[243704]: 2025-12-13 04:09:18.718 243708 DEBUG nova.virt.libvirt.driver [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:09:18 compute-0 nova_compute[243704]: 2025-12-13 04:09:18.718 243708 DEBUG nova.virt.libvirt.driver [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] No VIF found with MAC fa:16:3e:32:f9:75, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:09:19 compute-0 nova_compute[243704]: 2025-12-13 04:09:19.021 243708 DEBUG oslo_concurrency.lockutils [None req-86a50bbb-6f14-42ea-a8fc-e5d149573d2c 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 7.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2294331062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2294331062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.8 KiB/s wr, 36 op/s
Dec 13 04:09:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4114725645' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4114725645' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2294331062' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2294331062' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:20 compute-0 ceph-mon[75071]: pgmap v921: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.8 KiB/s wr, 36 op/s
Dec 13 04:09:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Dec 13 04:09:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Dec 13 04:09:20 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Dec 13 04:09:20 compute-0 nova_compute[243704]: 2025-12-13 04:09:20.782 243708 DEBUG nova.compute.manager [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event volume-extended-eaf4ef51-7928-40d8-8afc-17d213082723 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:09:20 compute-0 nova_compute[243704]: 2025-12-13 04:09:20.797 243708 DEBUG nova.compute.manager [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Handling volume-extended event for volume eaf4ef51-7928-40d8-8afc-17d213082723 extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Dec 13 04:09:20 compute-0 nova_compute[243704]: 2025-12-13 04:09:20.814 243708 INFO nova.compute.manager [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Cinder extended volume eaf4ef51-7928-40d8-8afc-17d213082723; extending it to detect new size
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.126 243708 DEBUG os_brick.encryptors [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] Using volume encryption metadata '{'encryption_key_id': '9a1a842c-948c-4f8d-8f4a-bbafc251168f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eaf4ef51-7928-40d8-8afc-17d213082723', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '143ff7a5-b045-4330-945a-cab9a1074156', 'attached_at': '', 'detached_at': '', 'volume_id': 'eaf4ef51-7928-40d8-8afc-17d213082723', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.127 243708 INFO oslo.privsep.daemon [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpcefi4f_u/privsep.sock']
Dec 13 04:09:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.7 KiB/s wr, 45 op/s
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.519 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.793 243708 INFO oslo.privsep.daemon [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] Spawned new privsep daemon via rootwrap
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.663 250574 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.668 250574 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.671 250574 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec 13 04:09:21 compute-0 nova_compute[243704]: 2025-12-13 04:09:21.671 250574 INFO oslo.privsep.daemon [-] privsep daemon running as pid 250574
Dec 13 04:09:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Dec 13 04:09:21 compute-0 ceph-mon[75071]: osdmap e163: 3 total, 3 up, 3 in
Dec 13 04:09:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Dec 13 04:09:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Dec 13 04:09:22 compute-0 systemd[1]: Created slice Slice /system/systemd-coredump.
Dec 13 04:09:22 compute-0 systemd[1]: Started Process Core Dump (PID 250595/UID 0).
Dec 13 04:09:22 compute-0 nova_compute[243704]: 2025-12-13 04:09:22.673 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:22 compute-0 ceph-mon[75071]: pgmap v923: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.7 KiB/s wr, 45 op/s
Dec 13 04:09:22 compute-0 ceph-mon[75071]: osdmap e164: 3 total, 3 up, 3 in
Dec 13 04:09:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 639 B/s wr, 20 op/s
Dec 13 04:09:23 compute-0 systemd-coredump[250596]: Process 250576 (qemu-img) of user 0 dumped core.
                                                    
                                                    Stack trace of thread 250586:
                                                    #0  0x00007f0dd665b03c __pthread_kill_implementation (libc.so.6 + 0x8d03c)
                                                    #1  0x00007f0dd660db86 raise (libc.so.6 + 0x3fb86)
                                                    #2  0x00007f0dd65f7873 abort (libc.so.6 + 0x29873)
                                                    #3  0x00007f0dcf744f02 n/a (/usr/lib64/ceph/libceph-common.so.2 + 0x16df02)
                                                    #4  0x00000000000003a7 n/a (n/a + 0x0)
                                                    ELF object binary architecture: AMD x86-64
Dec 13 04:09:23 compute-0 systemd[1]: systemd-coredump@0-250595-0.service: Deactivated successfully.
Dec 13 04:09:23 compute-0 systemd[1]: systemd-coredump@0-250595-0.service: Consumed 1.485s CPU time.
Dec 13 04:09:23 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Unknown error when attempting to find the payload_offset for LUKSv1 encrypted disk rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack.: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack : Unexpected error while running command.
Dec 13 04:09:23 compute-0 nova_compute[243704]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack --force-share --output=json
Dec 13 04:09:23 compute-0 nova_compute[243704]: Exit code: -6
Dec 13 04:09:23 compute-0 nova_compute[243704]: Stdout: ''
Dec 13 04:09:23 compute-0 nova_compute[243704]: Stderr: "Thread::try_create(): pthread_create failed with error 11/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: In function 'void Thread::create(const char*, size_t)' thread 7f0dbf7fe640 time 2025-12-13T04:09:22.066131+0000\n/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: 165: FAILED ceph_assert(ret == 0)\n ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)\n 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11e) [0x7f0dcf744ea8]\n 2: /usr/lib64/ceph/libceph-common.so.2(+0x16e067) [0x7f0dcf745067]\n 3: (Thread::create(char const*, unsigned long)+0xbc) [0x7f0dcf83270c]\n 4: /lib64/librbd.so.1(+0x51126b) [0x7f0dd473b26b]\n 5: /lib64/librbd.so.1(+0x13e7a6) [0x7f0dd43687a6]\n 6: /lib64/librbd.so.1(+0x2182d3) [0x7f0dd44422d3]\n 7: /lib64/librbd.so.1(+0x218f46) [0x7f0dd4442f46]\n 8: /lib64/librbd.so.1(+0x2192a7) [0x7f0dd44432a7]\n 9: /lib64/librados.so.2(+0xad0ac) [0x7f0dd41410ac]\n 10: /lib64/librados.so.2(+0xac585) [0x7f0dd4140585]\n 11: /lib64/librados.so.2(+0x127498) [0x7f0dd41bb498]\n 12: /lib64/librados.so.2(+0xc64e4) [0x7f0dd415a4e4]\n 13: /lib64/libstdc++.so.6(+0xdbae4) [0x7f0dceec6ae4]\n 14: /lib64/libc.so.6(+0x8b2fa) [0x7f0dd66592fa]\n 15: /lib64/libc.so.6(+0x110400) [0x7f0dd66de400]\n"
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Traceback (most recent call last):
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]     info = images.privileged_qemu_img_info(path)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]     return self.channel.remote_call(name, args, kwargs,
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156]     raise exc_type(*result[2])
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156] nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack : Unexpected error while running command.
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack --force-share --output=json
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Exit code: -6
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Stdout: ''
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Stderr: "Thread::try_create(): pthread_create failed with error 11/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: In function 'void Thread::create(const char*, size_t)' thread 7f0dbf7fe640 time 2025-12-13T04:09:22.066131+0000\n/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: 165: FAILED ceph_assert(ret == 0)\n ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)\n 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11e) [0x7f0dcf744ea8]\n 2: /usr/lib64/ceph/libceph-common.so.2(+0x16e067) [0x7f0dcf745067]\n 3: (Thread::create(char const*, unsigned long)+0xbc) [0x7f0dcf83270c]\n 4: /lib64/librbd.so.1(+0x51126b) [0x7f0dd473b26b]\n 5: /lib64/librbd.so.1(+0x13e7a6) [0x7f0dd43687a6]\n 6: /lib64/librbd.so.1(+0x2182d3) [0x7f0dd44422d3]\n 7: /lib64/librbd.so.1(+0x218f46) [0x7f0dd4442f46]\n 8: /lib64/librbd.so.1(+0x2192a7) [0x7f0dd44432a7]\n 9: /lib64/librados.so.2(+0xad0ac) [0x7f0dd41410ac]\n 10: /lib64/librados.so.2(+0xac585) [0x7f0dd4140585]\n 11: /lib64/librados.so.2(+0x127498) [0x7f0dd41bb498]\n 12: /lib64/librados.so.2(+0xc64e4) [0x7f0dd415a4e4]\n 13: /lib64/libstdc++.so.6(+0xdbae4) [0x7f0dceec6ae4]\n 14: /lib64/libc.so.6(+0x8b2fa) [0x7f0dd66592fa]\n 15: /lib64/libc.so.6(+0x110400) [0x7f0dd66de400]\n"
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.667 243708 ERROR nova.virt.libvirt.driver [instance: 143ff7a5-b045-4330-945a-cab9a1074156] 
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.672 243708 WARNING nova.compute.manager [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Extend volume failed, volume_id=eaf4ef51-7928-40d8-8afc-17d213082723, reason: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack : Unexpected error while running command.
Dec 13 04:09:23 compute-0 nova_compute[243704]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack --force-share --output=json
Dec 13 04:09:23 compute-0 nova_compute[243704]: Exit code: -6
Dec 13 04:09:23 compute-0 nova_compute[243704]: Stdout: ''
Dec 13 04:09:23 compute-0 nova_compute[243704]: Stderr: "Thread::try_create(): pthread_create failed with error 11/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: In function 'void Thread::create(const char*, size_t)' thread 7f0dbf7fe640 time 2025-12-13T04:09:22.066131+0000\n/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: 165: FAILED ceph_assert(ret == 0)\n ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)\n 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11e) [0x7f0dcf744ea8]\n 2: /usr/lib64/ceph/libceph-common.so.2(+0x16e067) [0x7f0dcf745067]\n 3: (Thread::create(char const*, unsigned long)+0xbc) [0x7f0dcf83270c]\n 4: /lib64/librbd.so.1(+0x51126b) [0x7f0dd473b26b]\n 5: /lib64/librbd.so.1(+0x13e7a6) [0x7f0dd43687a6]\n 6: /lib64/librbd.so.1(+0x2182d3) [0x7f0dd44422d3]\n 7: /lib64/librbd.so.1(+0x218f46) [0x7f0dd4442f46]\n 8: /lib64/librbd.so.1(+0x2192a7) [0x7f0dd44432a7]\n 9: /lib64/librados.so.2(+0xad0ac) [0x7f0dd41410ac]\n 10: /lib64/librados.so.2(+0xac585) [0x7f0dd4140585]\n 11: /lib64/librados.so.2(+0x127498) [0x7f0dd41bb498]\n 12: /lib64/librados.so.2(+0xc64e4) [0x7f0dd415a4e4]\n 13: /lib64/libstdc++.so.6(+0xdbae4) [0x7f0dceec6ae4]\n 14: /lib64/libc.so.6(+0x8b2fa) [0x7f0dd66592fa]\n 15: /lib64/libc.so.6(+0x110400) [0x7f0dd66de400]\n": nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack : Unexpected error while running command.
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server [req-b5179672-a60b-4d11-b1bb-9f3b9a7fabfc req-aa7fda98-9024-48c3-b522-ecd6a6b0c776 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] Exception during message handling: nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack : Unexpected error while running command.
Dec 13 04:09:23 compute-0 nova_compute[243704]: Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack --force-share --output=json
Dec 13 04:09:23 compute-0 nova_compute[243704]: Exit code: -6
Dec 13 04:09:23 compute-0 nova_compute[243704]: Stdout: ''
Dec 13 04:09:23 compute-0 nova_compute[243704]: Stderr: "Thread::try_create(): pthread_create failed with error 11/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: In function 'void Thread::create(const char*, size_t)' thread 7f0dbf7fe640 time 2025-12-13T04:09:22.066131+0000\n/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: 165: FAILED ceph_assert(ret == 0)\n ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)\n 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11e) [0x7f0dcf744ea8]\n 2: /usr/lib64/ceph/libceph-common.so.2(+0x16e067) [0x7f0dcf745067]\n 3: (Thread::create(char const*, unsigned long)+0xbc) [0x7f0dcf83270c]\n 4: /lib64/librbd.so.1(+0x51126b) [0x7f0dd473b26b]\n 5: /lib64/librbd.so.1(+0x13e7a6) [0x7f0dd43687a6]\n 6: /lib64/librbd.so.1(+0x2182d3) [0x7f0dd44422d3]\n 7: /lib64/librbd.so.1(+0x218f46) [0x7f0dd4442f46]\n 8: /lib64/librbd.so.1(+0x2192a7) [0x7f0dd44432a7]\n 9: /lib64/librados.so.2(+0xad0ac) [0x7f0dd41410ac]\n 10: /lib64/librados.so.2(+0xac585) [0x7f0dd4140585]\n 11: /lib64/librados.so.2(+0x127498) [0x7f0dd41bb498]\n 12: /lib64/librados.so.2(+0xc64e4) [0x7f0dd415a4e4]\n 13: /lib64/libstdc++.so.6(+0xdbae4) [0x7f0dceec6ae4]\n 14: /lib64/libc.so.6(+0x8b2fa) [0x7f0dd66592fa]\n 15: /lib64/libc.so.6(+0x110400) [0x7f0dd66de400]\n"
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     raise self.value
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11073, in external_instance_event
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     self.extend_volume(context, instance, event.tag)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 1439, in decorated_function
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     raise self.value
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10930, in extend_volume
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     self.driver.extend_volume(context, connection_info, instance,
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2865, in extend_volume
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     self._resize_attached_encrypted_volume(
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2804, in _resize_attached_encrypted_volume
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     LOG.exception('Unknown error when attempting to find the '
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     self.force_reraise()
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     raise self.value
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 2788, in _resize_attached_encrypted_volume
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     info = images.privileged_qemu_img_info(path)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/virt/images.py", line 57, in privileged_qemu_img_info
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     info = nova.privsep.qemu.privileged_qemu_img_info(path, format=format)
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/priv_context.py", line 271, in _wrap
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     return self.channel.remote_call(name, args, kwargs,
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_privsep/daemon.py", line 215, in remote_call
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server     raise exc_type(*result[2])
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server nova.exception.InvalidDiskInfo: Disk info file is invalid: qemu-img failed to execute on rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack : Unexpected error while running command.
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server Command: /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info rbd:volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723:id=openstack --force-share --output=json
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server Exit code: -6
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server Stdout: ''
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server Stderr: "Thread::try_create(): pthread_create failed with error 11/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: In function 'void Thread::create(const char*, size_t)' thread 7f0dbf7fe640 time 2025-12-13T04:09:22.066131+0000\n/builddir/build/BUILD/ceph-18.2.7/src/common/Thread.cc: 165: FAILED ceph_assert(ret == 0)\n ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)\n 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x11e) [0x7f0dcf744ea8]\n 2: /usr/lib64/ceph/libceph-common.so.2(+0x16e067) [0x7f0dcf745067]\n 3: (Thread::create(char const*, unsigned long)+0xbc) [0x7f0dcf83270c]\n 4: /lib64/librbd.so.1(+0x51126b) [0x7f0dd473b26b]\n 5: /lib64/librbd.so.1(+0x13e7a6) [0x7f0dd43687a6]\n 6: /lib64/librbd.so.1(+0x2182d3) [0x7f0dd44422d3]\n 7: /lib64/librbd.so.1(+0x218f46) [0x7f0dd4442f46]\n 8: /lib64/librbd.so.1(+0x2192a7) [0x7f0dd44432a7]\n 9: /lib64/librados.so.2(+0xad0ac) [0x7f0dd41410ac]\n 10: /lib64/librados.so.2(+0xac585) [0x7f0dd4140585]\n 11: /lib64/librados.so.2(+0x127498) [0x7f0dd41bb498]\n 12: /lib64/librados.so.2(+0xc64e4) [0x7f0dd415a4e4]\n 13: /lib64/libstdc++.so.6(+0xdbae4) [0x7f0dceec6ae4]\n 14: /lib64/libc.so.6(+0x8b2fa) [0x7f0dd66592fa]\n 15: /lib64/libc.so.6(+0x110400) [0x7f0dd66de400]\n"
Dec 13 04:09:23 compute-0 nova_compute[243704]: 2025-12-13 04:09:23.773 243708 ERROR oslo_messaging.rpc.server 
Dec 13 04:09:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Dec 13 04:09:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Dec 13 04:09:23 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Dec 13 04:09:23 compute-0 ceph-mon[75071]: pgmap v925: 305 pgs: 305 active+clean; 121 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 639 B/s wr, 20 op/s
Dec 13 04:09:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Dec 13 04:09:24 compute-0 ceph-mon[75071]: osdmap e165: 3 total, 3 up, 3 in
Dec 13 04:09:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Dec 13 04:09:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Dec 13 04:09:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 10 KiB/s wr, 50 op/s
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.348 243708 DEBUG oslo_concurrency.lockutils [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.349 243708 DEBUG oslo_concurrency.lockutils [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.365 243708 INFO nova.compute.manager [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Detaching volume eaf4ef51-7928-40d8-8afc-17d213082723
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.495 243708 INFO nova.virt.block_device [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Attempting to driver detach volume eaf4ef51-7928-40d8-8afc-17d213082723 from mountpoint /dev/vdb
Dec 13 04:09:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e166 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.544236) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598965544366, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 645, "num_deletes": 255, "total_data_size": 644928, "memory_usage": 656656, "flush_reason": "Manual Compaction"}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598965550591, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 517006, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18657, "largest_seqno": 19301, "table_properties": {"data_size": 513734, "index_size": 1116, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8485, "raw_average_key_size": 20, "raw_value_size": 506917, "raw_average_value_size": 1230, "num_data_blocks": 48, "num_entries": 412, "num_filter_entries": 412, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765598936, "oldest_key_time": 1765598936, "file_creation_time": 1765598965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 6431 microseconds, and 2844 cpu microseconds.
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.550679) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 517006 bytes OK
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.550700) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.552787) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.552825) EVENT_LOG_v1 {"time_micros": 1765598965552806, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.552849) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 641394, prev total WAL file size 641394, number of live WAL files 2.
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.553379) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353032' seq:72057594037927935, type:22 .. '6D67727374617400373534' seq:0, type:0; will stop at (end)
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(504KB)], [41(9520KB)]
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598965553456, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 10266074, "oldest_snapshot_seqno": -1}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4459 keys, 7027711 bytes, temperature: kUnknown
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598965600797, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 7027711, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6997664, "index_size": 17777, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11205, "raw_key_size": 108892, "raw_average_key_size": 24, "raw_value_size": 6916811, "raw_average_value_size": 1551, "num_data_blocks": 743, "num_entries": 4459, "num_filter_entries": 4459, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765598965, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.601138) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 7027711 bytes
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.602680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.3 rd, 148.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.3 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(33.4) write-amplify(13.6) OK, records in: 4973, records dropped: 514 output_compression: NoCompression
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.602713) EVENT_LOG_v1 {"time_micros": 1765598965602698, "job": 20, "event": "compaction_finished", "compaction_time_micros": 47454, "compaction_time_cpu_micros": 19371, "output_level": 6, "num_output_files": 1, "total_output_size": 7027711, "num_input_records": 4973, "num_output_records": 4459, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598965602902, "job": 20, "event": "table_file_deletion", "file_number": 43}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765598965604625, "job": 20, "event": "table_file_deletion", "file_number": 41}
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.553308) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.604738) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.604746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.604749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.604752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:09:25 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:09:25.604755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.626 243708 DEBUG os_brick.encryptors [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Using volume encryption metadata '{'encryption_key_id': '9a1a842c-948c-4f8d-8f4a-bbafc251168f', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'eaf4ef51-7928-40d8-8afc-17d213082723', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '143ff7a5-b045-4330-945a-cab9a1074156', 'attached_at': '', 'detached_at': '', 'volume_id': 'eaf4ef51-7928-40d8-8afc-17d213082723', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.636 243708 DEBUG nova.virt.libvirt.driver [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Attempting to detach device vdb from instance 143ff7a5-b045-4330-945a-cab9a1074156 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.637 243708 DEBUG nova.virt.libvirt.guest [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723">
Dec 13 04:09:25 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   </source>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <serial>eaf4ef51-7928-40d8-8afc-17d213082723</serial>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:09:25 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="e3e9a3c6-2778-4ae7-ae6c-0a1504dadf10"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:09:25 compute-0 nova_compute[243704]: </disk>
Dec 13 04:09:25 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.645 243708 INFO nova.virt.libvirt.driver [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Successfully detached device vdb from instance 143ff7a5-b045-4330-945a-cab9a1074156 from the persistent domain config.
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.646 243708 DEBUG nova.virt.libvirt.driver [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 143ff7a5-b045-4330-945a-cab9a1074156 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.647 243708 DEBUG nova.virt.libvirt.guest [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-eaf4ef51-7928-40d8-8afc-17d213082723">
Dec 13 04:09:25 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   </source>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <serial>eaf4ef51-7928-40d8-8afc-17d213082723</serial>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:09:25 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="e3e9a3c6-2778-4ae7-ae6c-0a1504dadf10"/>
Dec 13 04:09:25 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:09:25 compute-0 nova_compute[243704]: </disk>
Dec 13 04:09:25 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.757 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765598965.7565963, 143ff7a5-b045-4330-945a-cab9a1074156 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.758 243708 DEBUG nova.virt.libvirt.driver [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 143ff7a5-b045-4330-945a-cab9a1074156 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:09:25 compute-0 nova_compute[243704]: 2025-12-13 04:09:25.760 243708 INFO nova.virt.libvirt.driver [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Successfully detached device vdb from instance 143ff7a5-b045-4330-945a-cab9a1074156 from the live domain config.
Dec 13 04:09:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Dec 13 04:09:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Dec 13 04:09:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Dec 13 04:09:26 compute-0 ceph-mon[75071]: osdmap e166: 3 total, 3 up, 3 in
Dec 13 04:09:26 compute-0 ceph-mon[75071]: pgmap v928: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 10 KiB/s wr, 50 op/s
Dec 13 04:09:26 compute-0 nova_compute[243704]: 2025-12-13 04:09:26.143 243708 DEBUG nova.objects.instance [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lazy-loading 'flavor' on Instance uuid 143ff7a5-b045-4330-945a-cab9a1074156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:09:26 compute-0 nova_compute[243704]: 2025-12-13 04:09:26.182 243708 DEBUG oslo_concurrency.lockutils [None req-4d20c8b9-bc34-460a-9256-7f27fc72e5c8 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:26 compute-0 nova_compute[243704]: 2025-12-13 04:09:26.522 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:26 compute-0 podman[250604]: 2025-12-13 04:09:26.904978125 +0000 UTC m=+0.054631228 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:09:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 9.1 KiB/s wr, 43 op/s
Dec 13 04:09:27 compute-0 nova_compute[243704]: 2025-12-13 04:09:27.677 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:27 compute-0 ceph-mon[75071]: osdmap e167: 3 total, 3 up, 3 in
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.395 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.396 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.396 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.396 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.397 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.398 243708 INFO nova.compute.manager [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Terminating instance
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.398 243708 DEBUG nova.compute.manager [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:09:28 compute-0 kernel: tap1cc0804c-13 (unregistering): left promiscuous mode
Dec 13 04:09:28 compute-0 NetworkManager[48899]: <info>  [1765598968.4457] device (tap1cc0804c-13): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.455 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:28 compute-0 ovn_controller[145204]: 2025-12-13T04:09:28Z|00033|binding|INFO|Releasing lport 1cc0804c-1371-48e5-a964-354f99f7eace from this chassis (sb_readonly=0)
Dec 13 04:09:28 compute-0 ovn_controller[145204]: 2025-12-13T04:09:28Z|00034|binding|INFO|Setting lport 1cc0804c-1371-48e5-a964-354f99f7eace down in Southbound
Dec 13 04:09:28 compute-0 ovn_controller[145204]: 2025-12-13T04:09:28Z|00035|binding|INFO|Removing iface tap1cc0804c-13 ovn-installed in OVS
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.458 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.466 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:f9:75 10.100.0.6'], port_security=['fa:16:3e:32:f9:75 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '143ff7a5-b045-4330-945a-cab9a1074156', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '96989182ef434b49aedf94176f4ddd6f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '29f547ce-3dec-403d-aee0-387394e47410', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4365dfe4-bb45-4bfa-b597-162b137c7810, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=1cc0804c-1371-48e5-a964-354f99f7eace) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.470 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 1cc0804c-1371-48e5-a964-354f99f7eace in datapath d0ec29d2-698f-48c0-8337-ad9b2cdc9d73 unbound from our chassis
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.472 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0ec29d2-698f-48c0-8337-ad9b2cdc9d73, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.474 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a36f222b-38a0-41d1-9c34-6144165452a7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.474 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73 namespace which is not needed anymore
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.485 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:28 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec 13 04:09:28 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 19.643s CPU time.
Dec 13 04:09:28 compute-0 systemd-machined[206767]: Machine qemu-1-instance-00000001 terminated.
Dec 13 04:09:28 compute-0 neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73[249746]: [NOTICE]   (249750) : haproxy version is 2.8.14-c23fe91
Dec 13 04:09:28 compute-0 neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73[249746]: [NOTICE]   (249750) : path to executable is /usr/sbin/haproxy
Dec 13 04:09:28 compute-0 neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73[249746]: [WARNING]  (249750) : Exiting Master process...
Dec 13 04:09:28 compute-0 neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73[249746]: [ALERT]    (249750) : Current worker (249752) exited with code 143 (Terminated)
Dec 13 04:09:28 compute-0 neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73[249746]: [WARNING]  (249750) : All workers exited. Exiting... (0)
Dec 13 04:09:28 compute-0 systemd[1]: libpod-442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f.scope: Deactivated successfully.
Dec 13 04:09:28 compute-0 podman[250648]: 2025-12-13 04:09:28.627501134 +0000 UTC m=+0.053172089 container died 442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.633 243708 INFO nova.virt.libvirt.driver [-] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Instance destroyed successfully.
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.634 243708 DEBUG nova.objects.instance [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lazy-loading 'resources' on Instance uuid 143ff7a5-b045-4330-945a-cab9a1074156 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f-userdata-shm.mount: Deactivated successfully.
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.651 243708 DEBUG nova.virt.libvirt.vif [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:08:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-EncryptedVolumesExtendAttachedTest-instance-1936735685',display_name='tempest-EncryptedVolumesExtendAttachedTest-instance-1936735685',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-encryptedvolumesextendattachedtest-instance-1936735685',id=1,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHsVPv3Q3QmSERXw6wBHyojoi58ygBNRPbUX5Jiszo88WR5vuVwf9fb/eGYHEl8SzOTu3kq+/kG9FHsKCXW7n7qmr52loi4dv4wJ7B4jcrZfDznFoQokZ4oC87/EJuL0wQ==',key_name='tempest-keypair-1094499998',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:08:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='96989182ef434b49aedf94176f4ddd6f',ramdisk_id='',reservation_id='r-zbfnpxg1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-EncryptedVolumesExtendAttachedTest-709780275',owner_user_name='tempest-EncryptedVolumesExtendAttachedTest-709780275-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:08:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='41c24c5943904540a40a3dfbcc716adb',uuid=143ff7a5-b045-4330-945a-cab9a1074156,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.652 243708 DEBUG nova.network.os_vif_util [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Converting VIF {"id": "1cc0804c-1371-48e5-a964-354f99f7eace", "address": "fa:16:3e:32:f9:75", "network": {"id": "d0ec29d2-698f-48c0-8337-ad9b2cdc9d73", "bridge": "br-int", "label": "tempest-EncryptedVolumesExtendAttachedTest-246120551-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "96989182ef434b49aedf94176f4ddd6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1cc0804c-13", "ovs_interfaceid": "1cc0804c-1371-48e5-a964-354f99f7eace", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.653 243708 DEBUG nova.network.os_vif_util [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:f9:75,bridge_name='br-int',has_traffic_filtering=True,id=1cc0804c-1371-48e5-a964-354f99f7eace,network=Network(d0ec29d2-698f-48c0-8337-ad9b2cdc9d73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cc0804c-13') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.653 243708 DEBUG os_vif [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:f9:75,bridge_name='br-int',has_traffic_filtering=True,id=1cc0804c-1371-48e5-a964-354f99f7eace,network=Network(d0ec29d2-698f-48c0-8337-ad9b2cdc9d73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cc0804c-13') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.656 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.656 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1cc0804c-13, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.660 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.661 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:09:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-f908f5f281e119406a654c1f752cc85e1220b327500a28c04af8baaaf7c7272f-merged.mount: Deactivated successfully.
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.665 243708 INFO os_vif [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:f9:75,bridge_name='br-int',has_traffic_filtering=True,id=1cc0804c-1371-48e5-a964-354f99f7eace,network=Network(d0ec29d2-698f-48c0-8337-ad9b2cdc9d73),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1cc0804c-13')
Dec 13 04:09:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767351350' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2767351350' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:28 compute-0 podman[250648]: 2025-12-13 04:09:28.6754922 +0000 UTC m=+0.101163155 container cleanup 442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:09:28 compute-0 systemd[1]: libpod-conmon-442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f.scope: Deactivated successfully.
Dec 13 04:09:28 compute-0 podman[250706]: 2025-12-13 04:09:28.747541791 +0000 UTC m=+0.041464499 container remove 442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.754 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1c075cbd-827b-4bd5-b556-ce86c5946581]: (4, ('Sat Dec 13 04:09:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73 (442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f)\n442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f\nSat Dec 13 04:09:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73 (442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f)\n442f0b6ef3f7ef3db33cf1122a8d4ebaaaad084006317dafbee027ac64caa68f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.757 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[40ae62a4-7fcb-433c-a0b4-e5ecc31511cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.758 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0ec29d2-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:09:28 compute-0 kernel: tapd0ec29d2-60: left promiscuous mode
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.761 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.774 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.778 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4196c93b-a03b-4a0b-bc12-23c934658bb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.794 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2079637f-f465-4b76-8b4d-88ff4783a176]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.795 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7a650c57-0ae7-430d-a781-14787f3b7473]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.800 243708 DEBUG nova.compute.manager [req-0bcdb750-da23-4b70-9ed0-029b7a0c3168 req-587d34d2-2e1b-4287-8db4-8da267ae7937 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-vif-unplugged-1cc0804c-1371-48e5-a964-354f99f7eace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.801 243708 DEBUG oslo_concurrency.lockutils [req-0bcdb750-da23-4b70-9ed0-029b7a0c3168 req-587d34d2-2e1b-4287-8db4-8da267ae7937 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.802 243708 DEBUG oslo_concurrency.lockutils [req-0bcdb750-da23-4b70-9ed0-029b7a0c3168 req-587d34d2-2e1b-4287-8db4-8da267ae7937 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.802 243708 DEBUG oslo_concurrency.lockutils [req-0bcdb750-da23-4b70-9ed0-029b7a0c3168 req-587d34d2-2e1b-4287-8db4-8da267ae7937 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.802 243708 DEBUG nova.compute.manager [req-0bcdb750-da23-4b70-9ed0-029b7a0c3168 req-587d34d2-2e1b-4287-8db4-8da267ae7937 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] No waiting events found dispatching network-vif-unplugged-1cc0804c-1371-48e5-a964-354f99f7eace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:09:28 compute-0 nova_compute[243704]: 2025-12-13 04:09:28.803 243708 DEBUG nova.compute.manager [req-0bcdb750-da23-4b70-9ed0-029b7a0c3168 req-587d34d2-2e1b-4287-8db4-8da267ae7937 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-vif-unplugged-1cc0804c-1371-48e5-a964-354f99f7eace for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.811 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f35d8a3e-e48a-4e1c-b2b5-33ffcae03e80]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 365847, 'reachable_time': 42060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250725, 'error': None, 'target': 'ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 systemd[1]: run-netns-ovnmeta\x2dd0ec29d2\x2d698f\x2d48c0\x2d8337\x2dad9b2cdc9d73.mount: Deactivated successfully.
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.825 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0ec29d2-698f-48c0-8337-ad9b2cdc9d73 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:09:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:28.826 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[07113700-427f-478f-ab59-c8b8f5931a54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:09:28 compute-0 ceph-mon[75071]: pgmap v930: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 9.1 KiB/s wr, 43 op/s
Dec 13 04:09:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2767351350' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2767351350' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.112 243708 INFO nova.virt.libvirt.driver [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Deleting instance files /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156_del
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.112 243708 INFO nova.virt.libvirt.driver [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Deletion of /var/lib/nova/instances/143ff7a5-b045-4330-945a-cab9a1074156_del complete
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.215 243708 DEBUG nova.virt.libvirt.host [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.216 243708 INFO nova.virt.libvirt.host [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] UEFI support detected
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.219 243708 INFO nova.compute.manager [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Took 0.82 seconds to destroy the instance on the hypervisor.
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.219 243708 DEBUG oslo.service.loopingcall [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.219 243708 DEBUG nova.compute.manager [-] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:09:29 compute-0 nova_compute[243704]: 2025-12-13 04:09:29.220 243708 DEBUG nova.network.neutron [-] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:09:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 13 KiB/s wr, 117 op/s
Dec 13 04:09:30 compute-0 ceph-mon[75071]: pgmap v931: 305 pgs: 305 active+clean; 121 MiB data, 279 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 13 KiB/s wr, 117 op/s
Dec 13 04:09:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.863 243708 DEBUG nova.network.neutron [-] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.883 243708 INFO nova.compute.manager [-] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Took 1.66 seconds to deallocate network for instance.
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.992 243708 DEBUG nova.compute.manager [req-0c2d36e7-afa2-4787-bfa8-5b52eb6115e3 req-bfd6d2d3-ac8f-4d6f-b521-5a4c82fea34e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.994 243708 DEBUG oslo_concurrency.lockutils [req-0c2d36e7-afa2-4787-bfa8-5b52eb6115e3 req-bfd6d2d3-ac8f-4d6f-b521-5a4c82fea34e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "143ff7a5-b045-4330-945a-cab9a1074156-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.994 243708 DEBUG oslo_concurrency.lockutils [req-0c2d36e7-afa2-4787-bfa8-5b52eb6115e3 req-bfd6d2d3-ac8f-4d6f-b521-5a4c82fea34e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.994 243708 DEBUG oslo_concurrency.lockutils [req-0c2d36e7-afa2-4787-bfa8-5b52eb6115e3 req-bfd6d2d3-ac8f-4d6f-b521-5a4c82fea34e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.994 243708 DEBUG nova.compute.manager [req-0c2d36e7-afa2-4787-bfa8-5b52eb6115e3 req-bfd6d2d3-ac8f-4d6f-b521-5a4c82fea34e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] No waiting events found dispatching network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:09:30 compute-0 nova_compute[243704]: 2025-12-13 04:09:30.995 243708 WARNING nova.compute.manager [req-0c2d36e7-afa2-4787-bfa8-5b52eb6115e3 req-bfd6d2d3-ac8f-4d6f-b521-5a4c82fea34e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received unexpected event network-vif-plugged-1cc0804c-1371-48e5-a964-354f99f7eace for instance with vm_state active and task_state deleting.
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.046 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.047 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.109 243708 DEBUG oslo_concurrency.processutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:09:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 95 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.4 KiB/s wr, 79 op/s
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.523 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:09:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/452654964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.689 243708 DEBUG oslo_concurrency.processutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.697 243708 DEBUG nova.compute.provider_tree [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.711 243708 DEBUG nova.scheduler.client.report [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.747 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.700s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.799 243708 INFO nova.scheduler.client.report [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Deleted allocations for instance 143ff7a5-b045-4330-945a-cab9a1074156
Dec 13 04:09:31 compute-0 nova_compute[243704]: 2025-12-13 04:09:31.886 243708 DEBUG oslo_concurrency.lockutils [None req-4485dc09-2830-4772-bd75-abceccc81530 41c24c5943904540a40a3dfbcc716adb 96989182ef434b49aedf94176f4ddd6f - - default default] Lock "143ff7a5-b045-4330-945a-cab9a1074156" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Dec 13 04:09:32 compute-0 ceph-mon[75071]: pgmap v932: 305 pgs: 305 active+clean; 95 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 5.4 KiB/s wr, 79 op/s
Dec 13 04:09:32 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/452654964' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:09:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Dec 13 04:09:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Dec 13 04:09:33 compute-0 nova_compute[243704]: 2025-12-13 04:09:33.249 243708 DEBUG nova.compute.manager [req-7363bf6a-a1a7-4499-98de-c1e3c8649a10 req-57f75ec5-7562-4dc8-9def-d35ca4addcc5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Received event network-vif-deleted-1cc0804c-1371-48e5-a964-354f99f7eace external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:09:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 95 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 72 op/s
Dec 13 04:09:33 compute-0 ceph-mon[75071]: osdmap e168: 3 total, 3 up, 3 in
Dec 13 04:09:33 compute-0 nova_compute[243704]: 2025-12-13 04:09:33.661 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197056332' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4197056332' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:34 compute-0 ceph-mon[75071]: pgmap v934: 305 pgs: 305 active+clean; 95 MiB data, 258 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 3.7 KiB/s wr, 72 op/s
Dec 13 04:09:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4197056332' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4197056332' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:35.084 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:09:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:35.085 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:09:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:35.085 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:09:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 42 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 6.1 KiB/s wr, 98 op/s
Dec 13 04:09:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Dec 13 04:09:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Dec 13 04:09:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Dec 13 04:09:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Dec 13 04:09:36 compute-0 ceph-mon[75071]: pgmap v935: 305 pgs: 305 active+clean; 42 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 70 KiB/s rd, 6.1 KiB/s wr, 98 op/s
Dec 13 04:09:36 compute-0 ceph-mon[75071]: osdmap e169: 3 total, 3 up, 3 in
Dec 13 04:09:36 compute-0 nova_compute[243704]: 2025-12-13 04:09:36.554 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Dec 13 04:09:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Dec 13 04:09:36 compute-0 podman[250752]: 2025-12-13 04:09:36.910301805 +0000 UTC m=+0.059339606 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:09:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.5 KiB/s wr, 55 op/s
Dec 13 04:09:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1252197997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1252197997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:37 compute-0 ceph-mon[75071]: osdmap e170: 3 total, 3 up, 3 in
Dec 13 04:09:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1252197997' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1252197997' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:38 compute-0 nova_compute[243704]: 2025-12-13 04:09:38.417 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1442316908' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1442316908' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:38 compute-0 nova_compute[243704]: 2025-12-13 04:09:38.564 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:38 compute-0 ceph-mon[75071]: pgmap v938: 305 pgs: 305 active+clean; 42 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 4.5 KiB/s wr, 55 op/s
Dec 13 04:09:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1442316908' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1442316908' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:38 compute-0 nova_compute[243704]: 2025-12-13 04:09:38.663 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 8.3 KiB/s wr, 163 op/s
Dec 13 04:09:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:40.394 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:09:40 compute-0 nova_compute[243704]: 2025-12-13 04:09:40.395 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:40 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:40.395 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:09:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:09:40
Dec 13 04:09:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:09:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:09:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['.mgr', 'images', 'default.rgw.control', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'vms']
Dec 13 04:09:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:09:40 compute-0 ceph-mon[75071]: pgmap v939: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 8.3 KiB/s wr, 163 op/s
Dec 13 04:09:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 7.4 KiB/s wr, 165 op/s
Dec 13 04:09:41 compute-0 nova_compute[243704]: 2025-12-13 04:09:41.557 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:09:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:09:42 compute-0 ceph-mon[75071]: pgmap v940: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 122 KiB/s rd, 7.4 KiB/s wr, 165 op/s
Dec 13 04:09:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 4.0 KiB/s wr, 123 op/s
Dec 13 04:09:43 compute-0 nova_compute[243704]: 2025-12-13 04:09:43.633 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765598968.6320105, 143ff7a5-b045-4330-945a-cab9a1074156 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:09:43 compute-0 nova_compute[243704]: 2025-12-13 04:09:43.633 243708 INFO nova.compute.manager [-] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] VM Stopped (Lifecycle Event)
Dec 13 04:09:43 compute-0 nova_compute[243704]: 2025-12-13 04:09:43.648 243708 DEBUG nova.compute.manager [None req-fb1c42f2-de27-4c03-8693-a3460d26a5a6 - - - - - -] [instance: 143ff7a5-b045-4330-945a-cab9a1074156] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:09:43 compute-0 nova_compute[243704]: 2025-12-13 04:09:43.666 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353040317' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2353040317' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:44 compute-0 ceph-mon[75071]: pgmap v941: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 95 KiB/s rd, 4.0 KiB/s wr, 123 op/s
Dec 13 04:09:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2353040317' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2353040317' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2469295140' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2469295140' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.4 KiB/s wr, 104 op/s
Dec 13 04:09:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:09:45.398 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:09:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e170 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Dec 13 04:09:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Dec 13 04:09:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Dec 13 04:09:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2469295140' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2469295140' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:45 compute-0 ceph-mon[75071]: osdmap e171: 3 total, 3 up, 3 in
Dec 13 04:09:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2125296366' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2125296366' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:45 compute-0 podman[250774]: 2025-12-13 04:09:45.943213679 +0000 UTC m=+0.091955824 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 13 04:09:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145001252' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/145001252' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:46 compute-0 nova_compute[243704]: 2025-12-13 04:09:46.558 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:46 compute-0 ceph-mon[75071]: pgmap v942: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 4.4 KiB/s wr, 104 op/s
Dec 13 04:09:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2125296366' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2125296366' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/145001252' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/145001252' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.3 KiB/s wr, 102 op/s
Dec 13 04:09:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3652193459' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3652193459' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:48 compute-0 ceph-mon[75071]: pgmap v944: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 4.3 KiB/s wr, 102 op/s
Dec 13 04:09:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3652193459' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3652193459' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:48 compute-0 nova_compute[243704]: 2025-12-13 04:09:48.670 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 KiB/s wr, 82 op/s
Dec 13 04:09:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:50 compute-0 ceph-mon[75071]: pgmap v945: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.6 KiB/s wr, 82 op/s
Dec 13 04:09:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 KiB/s wr, 76 op/s
Dec 13 04:09:51 compute-0 nova_compute[243704]: 2025-12-13 04:09:51.559 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.1909285148714501e-07 of space, bias 1.0, pg target 3.5727855446143506e-05 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 1.9367220749991353e-06 of space, bias 1.0, pg target 0.0005810166224997406 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.58958253526995e-07 of space, bias 1.0, pg target 7.76874760580985e-05 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658777079327913 of space, bias 1.0, pg target 0.1997633123798374 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0653772321212818e-06 of space, bias 4.0, pg target 0.0012784526785455381 quantized to 16 (current 16)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:09:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:09:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Dec 13 04:09:52 compute-0 ceph-mon[75071]: pgmap v946: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.5 KiB/s wr, 76 op/s
Dec 13 04:09:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Dec 13 04:09:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Dec 13 04:09:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 KiB/s wr, 91 op/s
Dec 13 04:09:53 compute-0 nova_compute[243704]: 2025-12-13 04:09:53.672 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Dec 13 04:09:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Dec 13 04:09:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Dec 13 04:09:53 compute-0 ceph-mon[75071]: osdmap e172: 3 total, 3 up, 3 in
Dec 13 04:09:53 compute-0 ceph-mon[75071]: pgmap v948: 305 pgs: 305 active+clean; 41 MiB data, 232 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.7 KiB/s wr, 91 op/s
Dec 13 04:09:54 compute-0 ceph-mon[75071]: osdmap e173: 3 total, 3 up, 3 in
Dec 13 04:09:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 157 op/s
Dec 13 04:09:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:09:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3353314775' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3353314775' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:55 compute-0 ceph-mon[75071]: pgmap v950: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 157 op/s
Dec 13 04:09:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3353314775' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3353314775' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2652112193' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2652112193' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:56 compute-0 nova_compute[243704]: 2025-12-13 04:09:56.561 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2652112193' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2652112193' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/397162029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/397162029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 82 op/s
Dec 13 04:09:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Dec 13 04:09:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/397162029' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/397162029' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:57 compute-0 ceph-mon[75071]: pgmap v951: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.7 MiB/s wr, 82 op/s
Dec 13 04:09:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Dec 13 04:09:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Dec 13 04:09:57 compute-0 podman[250802]: 2025-12-13 04:09:57.90818373 +0000 UTC m=+0.058728439 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Dec 13 04:09:58 compute-0 nova_compute[243704]: 2025-12-13 04:09:58.675 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:09:58 compute-0 ceph-mon[75071]: osdmap e174: 3 total, 3 up, 3 in
Dec 13 04:09:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:09:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/516727171' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:09:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/516727171' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 3.3 MiB/s wr, 230 op/s
Dec 13 04:09:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Dec 13 04:09:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Dec 13 04:09:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/516727171' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:09:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/516727171' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:09:59 compute-0 ceph-mon[75071]: pgmap v953: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 165 KiB/s rd, 3.3 MiB/s wr, 230 op/s
Dec 13 04:09:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Dec 13 04:10:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:00 compute-0 ceph-mon[75071]: osdmap e175: 3 total, 3 up, 3 in
Dec 13 04:10:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 954 KiB/s wr, 168 op/s
Dec 13 04:10:01 compute-0 nova_compute[243704]: 2025-12-13 04:10:01.853 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:01 compute-0 ceph-mon[75071]: pgmap v955: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 954 KiB/s wr, 168 op/s
Dec 13 04:10:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439044466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1439044466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:02 compute-0 sudo[250822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:10:02 compute-0 sudo[250822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:02 compute-0 sudo[250822]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:02 compute-0 sudo[250847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:10:02 compute-0 sudo[250847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1439044466' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1439044466' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:02 compute-0 sudo[250847]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:10:03 compute-0 sudo[250904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:10:03 compute-0 sudo[250904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:03 compute-0 sudo[250904]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:03 compute-0 sudo[250929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:10:03 compute-0 sudo[250929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.2 KiB/s wr, 124 op/s
Dec 13 04:10:03 compute-0 podman[250965]: 2025-12-13 04:10:03.518798603 +0000 UTC m=+0.057901906 container create 2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:10:03 compute-0 systemd[1]: Started libpod-conmon-2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955.scope.
Dec 13 04:10:03 compute-0 podman[250965]: 2025-12-13 04:10:03.487346567 +0000 UTC m=+0.026449890 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:10:03 compute-0 podman[250965]: 2025-12-13 04:10:03.646072258 +0000 UTC m=+0.185175561 container init 2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:10:03 compute-0 podman[250965]: 2025-12-13 04:10:03.65386557 +0000 UTC m=+0.192968833 container start 2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:10:03 compute-0 podman[250965]: 2025-12-13 04:10:03.659967736 +0000 UTC m=+0.199071189 container attach 2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:10:03 compute-0 serene_babbage[250981]: 167 167
Dec 13 04:10:03 compute-0 systemd[1]: libpod-2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955.scope: Deactivated successfully.
Dec 13 04:10:03 compute-0 podman[250965]: 2025-12-13 04:10:03.666762971 +0000 UTC m=+0.205866234 container died 2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:03 compute-0 nova_compute[243704]: 2025-12-13 04:10:03.678 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-22b2753d167c33a688c10626d2ca519172033e4ffdd66707b284761c9f87d67b-merged.mount: Deactivated successfully.
Dec 13 04:10:03 compute-0 podman[250965]: 2025-12-13 04:10:03.7148497 +0000 UTC m=+0.253952963 container remove 2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_babbage, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:10:03 compute-0 systemd[1]: libpod-conmon-2b7e404ccf4d7a60c717366f1ba4bc27a3976cc79f591cf1a08bcbabbcc90955.scope: Deactivated successfully.
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094883094' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2094883094' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:03 compute-0 podman[251005]: 2025-12-13 04:10:03.898346475 +0000 UTC m=+0.048345767 container create a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:10:03 compute-0 systemd[1]: Started libpod-conmon-a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70.scope.
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: pgmap v956: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.2 KiB/s wr, 124 op/s
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2094883094' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2094883094' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:03 compute-0 podman[251005]: 2025-12-13 04:10:03.875963406 +0000 UTC m=+0.025962728 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91eb12af4ca9b985b3aa51b531030fea4471f948bb6d15d3df0dbd263a63369/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91eb12af4ca9b985b3aa51b531030fea4471f948bb6d15d3df0dbd263a63369/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91eb12af4ca9b985b3aa51b531030fea4471f948bb6d15d3df0dbd263a63369/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91eb12af4ca9b985b3aa51b531030fea4471f948bb6d15d3df0dbd263a63369/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f91eb12af4ca9b985b3aa51b531030fea4471f948bb6d15d3df0dbd263a63369/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:04 compute-0 podman[251005]: 2025-12-13 04:10:04.020183351 +0000 UTC m=+0.170182663 container init a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_grothendieck, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:10:04 compute-0 podman[251005]: 2025-12-13 04:10:04.028409365 +0000 UTC m=+0.178408667 container start a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:10:04 compute-0 podman[251005]: 2025-12-13 04:10:04.034989985 +0000 UTC m=+0.184989267 container attach a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:10:04 compute-0 eager_grothendieck[251021]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:10:04 compute-0 eager_grothendieck[251021]: --> All data devices are unavailable
Dec 13 04:10:04 compute-0 systemd[1]: libpod-a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70.scope: Deactivated successfully.
Dec 13 04:10:04 compute-0 podman[251005]: 2025-12-13 04:10:04.497684129 +0000 UTC m=+0.647683421 container died a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-f91eb12af4ca9b985b3aa51b531030fea4471f948bb6d15d3df0dbd263a63369-merged.mount: Deactivated successfully.
Dec 13 04:10:04 compute-0 podman[251005]: 2025-12-13 04:10:04.556796088 +0000 UTC m=+0.706795380 container remove a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:10:04 compute-0 systemd[1]: libpod-conmon-a61bab7577c5bba3e658f61d0ffab4308d1163019e81b2775a530c99d3232e70.scope: Deactivated successfully.
Dec 13 04:10:04 compute-0 sudo[250929]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:04 compute-0 sudo[251055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:10:04 compute-0 sudo[251055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:04 compute-0 sudo[251055]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:04 compute-0 sudo[251080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:10:04 compute-0 sudo[251080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:05 compute-0 podman[251117]: 2025-12-13 04:10:05.045007688 +0000 UTC m=+0.042890289 container create e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:10:05 compute-0 systemd[1]: Started libpod-conmon-e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff.scope.
Dec 13 04:10:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:10:05 compute-0 podman[251117]: 2025-12-13 04:10:05.024392407 +0000 UTC m=+0.022275068 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:05 compute-0 podman[251117]: 2025-12-13 04:10:05.121924822 +0000 UTC m=+0.119807443 container init e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cerf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:10:05 compute-0 podman[251117]: 2025-12-13 04:10:05.128792688 +0000 UTC m=+0.126675299 container start e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cerf, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:10:05 compute-0 festive_cerf[251133]: 167 167
Dec 13 04:10:05 compute-0 podman[251117]: 2025-12-13 04:10:05.132095128 +0000 UTC m=+0.129977739 container attach e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:10:05 compute-0 systemd[1]: libpod-e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff.scope: Deactivated successfully.
Dec 13 04:10:05 compute-0 podman[251117]: 2025-12-13 04:10:05.146469609 +0000 UTC m=+0.144352230 container died e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cerf, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-736fd9e737ff43f4f5d9e1c5a8d12ab2adc29a18c99056a23f69436c4cf6a7d6-merged.mount: Deactivated successfully.
Dec 13 04:10:05 compute-0 podman[251117]: 2025-12-13 04:10:05.184237978 +0000 UTC m=+0.182120579 container remove e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_cerf, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:05 compute-0 systemd[1]: libpod-conmon-e70c8cde21e081d5caba574323a2c710ad82adcda8c444f533bedb379a4cd3ff.scope: Deactivated successfully.
Dec 13 04:10:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 92 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 399 KiB/s wr, 261 op/s
Dec 13 04:10:05 compute-0 podman[251157]: 2025-12-13 04:10:05.379906094 +0000 UTC m=+0.056061987 container create fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:10:05 compute-0 systemd[1]: Started libpod-conmon-fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad.scope.
Dec 13 04:10:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23916bf6ee3dc14ab3b76faa3f30e2cb7d2fe0affe819b91d1b2b5f27f6826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23916bf6ee3dc14ab3b76faa3f30e2cb7d2fe0affe819b91d1b2b5f27f6826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23916bf6ee3dc14ab3b76faa3f30e2cb7d2fe0affe819b91d1b2b5f27f6826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd23916bf6ee3dc14ab3b76faa3f30e2cb7d2fe0affe819b91d1b2b5f27f6826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:05 compute-0 podman[251157]: 2025-12-13 04:10:05.359881828 +0000 UTC m=+0.036037771 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:05 compute-0 podman[251157]: 2025-12-13 04:10:05.459914021 +0000 UTC m=+0.136069934 container init fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:10:05 compute-0 podman[251157]: 2025-12-13 04:10:05.467755075 +0000 UTC m=+0.143910988 container start fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:10:05 compute-0 podman[251157]: 2025-12-13 04:10:05.47198568 +0000 UTC m=+0.148141573 container attach fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:10:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3793898044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3793898044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e175 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Dec 13 04:10:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Dec 13 04:10:05 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]: {
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:     "0": [
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:         {
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "devices": [
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "/dev/loop3"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             ],
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_name": "ceph_lv0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_size": "21470642176",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "name": "ceph_lv0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "tags": {
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cluster_name": "ceph",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.crush_device_class": "",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.encrypted": "0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.objectstore": "bluestore",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osd_id": "0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.type": "block",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.vdo": "0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.with_tpm": "0"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             },
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "type": "block",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "vg_name": "ceph_vg0"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:         }
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:     ],
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:     "1": [
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:         {
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "devices": [
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "/dev/loop4"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             ],
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_name": "ceph_lv1",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_size": "21470642176",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "name": "ceph_lv1",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "tags": {
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cluster_name": "ceph",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.crush_device_class": "",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.encrypted": "0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.objectstore": "bluestore",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osd_id": "1",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.type": "block",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.vdo": "0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.with_tpm": "0"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             },
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "type": "block",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "vg_name": "ceph_vg1"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:         }
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:     ],
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:     "2": [
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:         {
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "devices": [
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "/dev/loop5"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             ],
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_name": "ceph_lv2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_size": "21470642176",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "name": "ceph_lv2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "tags": {
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.cluster_name": "ceph",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.crush_device_class": "",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.encrypted": "0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.objectstore": "bluestore",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osd_id": "2",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.type": "block",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.vdo": "0",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:                 "ceph.with_tpm": "0"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             },
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "type": "block",
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:             "vg_name": "ceph_vg2"
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:         }
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]:     ]
Dec 13 04:10:05 compute-0 jolly_mclaren[251174]: }
Dec 13 04:10:05 compute-0 systemd[1]: libpod-fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad.scope: Deactivated successfully.
Dec 13 04:10:05 compute-0 podman[251157]: 2025-12-13 04:10:05.768680646 +0000 UTC m=+0.444836539 container died fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:10:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd23916bf6ee3dc14ab3b76faa3f30e2cb7d2fe0affe819b91d1b2b5f27f6826-merged.mount: Deactivated successfully.
Dec 13 04:10:05 compute-0 podman[251157]: 2025-12-13 04:10:05.817517976 +0000 UTC m=+0.493673869 container remove fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_mclaren, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:10:05 compute-0 systemd[1]: libpod-conmon-fbf905574c3fa9f556a3be7ef0ea407cf2cfd50dab9beb8afc2be8556fcf7dad.scope: Deactivated successfully.
Dec 13 04:10:05 compute-0 sudo[251080]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:05 compute-0 sudo[251195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:10:05 compute-0 sudo[251195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:05 compute-0 sudo[251195]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:06 compute-0 sudo[251220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:10:06 compute-0 sudo[251220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:06 compute-0 podman[251257]: 2025-12-13 04:10:06.319201472 +0000 UTC m=+0.041931913 container create 4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swirles, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:10:06 compute-0 systemd[1]: Started libpod-conmon-4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140.scope.
Dec 13 04:10:06 compute-0 ceph-mon[75071]: pgmap v957: 305 pgs: 305 active+clean; 92 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 399 KiB/s wr, 261 op/s
Dec 13 04:10:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3793898044' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3793898044' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:06 compute-0 podman[251257]: 2025-12-13 04:10:06.300515113 +0000 UTC m=+0.023245564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:06 compute-0 ceph-mon[75071]: osdmap e176: 3 total, 3 up, 3 in
Dec 13 04:10:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:10:06 compute-0 podman[251257]: 2025-12-13 04:10:06.419953704 +0000 UTC m=+0.142684165 container init 4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swirles, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:10:06 compute-0 podman[251257]: 2025-12-13 04:10:06.429165685 +0000 UTC m=+0.151896116 container start 4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swirles, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Dec 13 04:10:06 compute-0 podman[251257]: 2025-12-13 04:10:06.433322138 +0000 UTC m=+0.156052609 container attach 4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:10:06 compute-0 nervous_swirles[251273]: 167 167
Dec 13 04:10:06 compute-0 systemd[1]: libpod-4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140.scope: Deactivated successfully.
Dec 13 04:10:06 compute-0 podman[251257]: 2025-12-13 04:10:06.437666596 +0000 UTC m=+0.160397047 container died 4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swirles, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:10:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-330cc8b6c2127052aa148755f10e3e5dd4fcb4e1488811750d051e5aa4f27d6e-merged.mount: Deactivated successfully.
Dec 13 04:10:06 compute-0 podman[251257]: 2025-12-13 04:10:06.477787958 +0000 UTC m=+0.200518389 container remove 4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_swirles, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:10:06 compute-0 systemd[1]: libpod-conmon-4bdf0a01d6546850a5a8a348fcab6dab4aa1d52234cf8eff4873c35771404140.scope: Deactivated successfully.
Dec 13 04:10:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Dec 13 04:10:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Dec 13 04:10:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Dec 13 04:10:06 compute-0 podman[251295]: 2025-12-13 04:10:06.64681259 +0000 UTC m=+0.046285252 container create db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_murdock, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:10:06 compute-0 systemd[1]: Started libpod-conmon-db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f.scope.
Dec 13 04:10:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bb1048f7ae07a9c8319ec1ce5224b8eea7f8ce641c1c404457a060ef6a098e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bb1048f7ae07a9c8319ec1ce5224b8eea7f8ce641c1c404457a060ef6a098e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bb1048f7ae07a9c8319ec1ce5224b8eea7f8ce641c1c404457a060ef6a098e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bb1048f7ae07a9c8319ec1ce5224b8eea7f8ce641c1c404457a060ef6a098e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:06 compute-0 podman[251295]: 2025-12-13 04:10:06.716657371 +0000 UTC m=+0.116130043 container init db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:10:06 compute-0 podman[251295]: 2025-12-13 04:10:06.722533061 +0000 UTC m=+0.122005723 container start db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:10:06 compute-0 podman[251295]: 2025-12-13 04:10:06.628510211 +0000 UTC m=+0.027982923 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:10:06 compute-0 podman[251295]: 2025-12-13 04:10:06.72655193 +0000 UTC m=+0.126024592 container attach db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:10:06 compute-0 nova_compute[243704]: 2025-12-13 04:10:06.854 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 92 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 422 KiB/s wr, 147 op/s
Dec 13 04:10:07 compute-0 lvm[251390]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:10:07 compute-0 lvm[251395]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:10:07 compute-0 lvm[251390]: VG ceph_vg0 finished
Dec 13 04:10:07 compute-0 lvm[251395]: VG ceph_vg1 finished
Dec 13 04:10:07 compute-0 lvm[251400]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:10:07 compute-0 lvm[251400]: VG ceph_vg2 finished
Dec 13 04:10:07 compute-0 lvm[251410]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:10:07 compute-0 lvm[251410]: VG ceph_vg0 finished
Dec 13 04:10:07 compute-0 lvm[251416]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:10:07 compute-0 lvm[251416]: VG ceph_vg2 finished
Dec 13 04:10:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Dec 13 04:10:07 compute-0 confident_murdock[251311]: {}
Dec 13 04:10:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Dec 13 04:10:07 compute-0 podman[251386]: 2025-12-13 04:10:07.57038839 +0000 UTC m=+0.098532774 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:10:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Dec 13 04:10:07 compute-0 ceph-mon[75071]: osdmap e177: 3 total, 3 up, 3 in
Dec 13 04:10:07 compute-0 lvm[251417]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:10:07 compute-0 lvm[251417]: VG ceph_vg2 finished
Dec 13 04:10:07 compute-0 systemd[1]: libpod-db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f.scope: Deactivated successfully.
Dec 13 04:10:07 compute-0 systemd[1]: libpod-db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f.scope: Consumed 1.418s CPU time.
Dec 13 04:10:07 compute-0 podman[251295]: 2025-12-13 04:10:07.60823627 +0000 UTC m=+1.007708942 container died db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_murdock, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:10:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5bb1048f7ae07a9c8319ec1ce5224b8eea7f8ce641c1c404457a060ef6a098e-merged.mount: Deactivated successfully.
Dec 13 04:10:07 compute-0 podman[251295]: 2025-12-13 04:10:07.663598497 +0000 UTC m=+1.063071179 container remove db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_murdock, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:10:07 compute-0 systemd[1]: libpod-conmon-db0786458ab4457dc6d90b453125afcad929486730373383c03d0f4fb116591f.scope: Deactivated successfully.
Dec 13 04:10:07 compute-0 sudo[251220]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:10:07 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:10:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:10:07 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:10:07 compute-0 sudo[251431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:10:07 compute-0 sudo[251431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:10:07 compute-0 sudo[251431]: pam_unix(sudo:session): session closed for user root
Dec 13 04:10:08 compute-0 ceph-mon[75071]: pgmap v960: 305 pgs: 305 active+clean; 92 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 422 KiB/s wr, 147 op/s
Dec 13 04:10:08 compute-0 ceph-mon[75071]: osdmap e178: 3 total, 3 up, 3 in
Dec 13 04:10:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:10:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.685 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Dec 13 04:10:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Dec 13 04:10:08 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.897 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.897 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.897 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.919 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.920 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.920 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.920 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:10:08 compute-0 nova_compute[243704]: 2025-12-13 04:10:08.920 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 92 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 4.6 MiB/s wr, 134 op/s
Dec 13 04:10:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:10:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3033594996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.472 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.552s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.662 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.664 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4667MB free_disk=59.988274165429175GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.664 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.664 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.719 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.720 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:10:09 compute-0 nova_compute[243704]: 2025-12-13 04:10:09.735 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Dec 13 04:10:09 compute-0 ceph-mon[75071]: osdmap e179: 3 total, 3 up, 3 in
Dec 13 04:10:09 compute-0 ceph-mon[75071]: pgmap v963: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 92 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 93 KiB/s rd, 4.6 MiB/s wr, 134 op/s
Dec 13 04:10:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3033594996' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Dec 13 04:10:09 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Dec 13 04:10:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:10:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3825311396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:10 compute-0 nova_compute[243704]: 2025-12-13 04:10:10.259 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:10 compute-0 nova_compute[243704]: 2025-12-13 04:10:10.265 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:10:10 compute-0 nova_compute[243704]: 2025-12-13 04:10:10.282 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:10:10 compute-0 nova_compute[243704]: 2025-12-13 04:10:10.351 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:10:10 compute-0 nova_compute[243704]: 2025-12-13 04:10:10.352 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.687s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:10 compute-0 ceph-mon[75071]: osdmap e180: 3 total, 3 up, 3 in
Dec 13 04:10:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3825311396' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2336495437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2336495437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.331 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.332 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.332 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.333 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.333 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:10:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Dec 13 04:10:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2336495437' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2336495437' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:11 compute-0 ceph-mon[75071]: pgmap v965: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 3.8 MiB/s wr, 123 op/s
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.856 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:11 compute-0 nova_compute[243704]: 2025-12-13 04:10:11.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:12 compute-0 nova_compute[243704]: 2025-12-13 04:10:12.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:10:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.0 MiB/s wr, 98 op/s
Dec 13 04:10:13 compute-0 nova_compute[243704]: 2025-12-13 04:10:13.689 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:14 compute-0 ceph-mon[75071]: pgmap v966: 305 pgs: 8 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 295 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 3.0 MiB/s wr, 98 op/s
Dec 13 04:10:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 2.4 MiB/s wr, 133 op/s
Dec 13 04:10:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Dec 13 04:10:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Dec 13 04:10:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Dec 13 04:10:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e181 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Dec 13 04:10:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Dec 13 04:10:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Dec 13 04:10:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310392124' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1310392124' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:16 compute-0 ceph-mon[75071]: pgmap v967: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 2.4 MiB/s wr, 133 op/s
Dec 13 04:10:16 compute-0 ceph-mon[75071]: osdmap e181: 3 total, 3 up, 3 in
Dec 13 04:10:16 compute-0 ceph-mon[75071]: osdmap e182: 3 total, 3 up, 3 in
Dec 13 04:10:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1310392124' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1310392124' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Dec 13 04:10:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Dec 13 04:10:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Dec 13 04:10:16 compute-0 nova_compute[243704]: 2025-12-13 04:10:16.858 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:16 compute-0 podman[251501]: 2025-12-13 04:10:16.932297569 +0000 UTC m=+0.080901236 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 13 04:10:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.8 KiB/s wr, 74 op/s
Dec 13 04:10:17 compute-0 ceph-mon[75071]: osdmap e183: 3 total, 3 up, 3 in
Dec 13 04:10:18 compute-0 nova_compute[243704]: 2025-12-13 04:10:18.693 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:18 compute-0 ceph-mon[75071]: pgmap v971: 305 pgs: 305 active+clean; 88 MiB data, 253 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.8 KiB/s wr, 74 op/s
Dec 13 04:10:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635184360' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3635184360' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.3 KiB/s wr, 114 op/s
Dec 13 04:10:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3635184360' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3635184360' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:19 compute-0 ceph-mon[75071]: pgmap v972: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 4.3 KiB/s wr, 114 op/s
Dec 13 04:10:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.570808) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599020570864, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 973, "num_deletes": 260, "total_data_size": 1177115, "memory_usage": 1205920, "flush_reason": "Manual Compaction"}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599020582944, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1163578, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19302, "largest_seqno": 20274, "table_properties": {"data_size": 1158565, "index_size": 2473, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11111, "raw_average_key_size": 19, "raw_value_size": 1148293, "raw_average_value_size": 2028, "num_data_blocks": 109, "num_entries": 566, "num_filter_entries": 566, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765598966, "oldest_key_time": 1765598966, "file_creation_time": 1765599020, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 12191 microseconds, and 3729 cpu microseconds.
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.583002) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1163578 bytes OK
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.583024) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.584939) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.584956) EVENT_LOG_v1 {"time_micros": 1765599020584952, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.585009) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1172240, prev total WAL file size 1172240, number of live WAL files 2.
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.585514) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1136KB)], [44(6862KB)]
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599020585592, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 8191289, "oldest_snapshot_seqno": -1}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4491 keys, 8070040 bytes, temperature: kUnknown
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599020628652, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 8070040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8037962, "index_size": 19744, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11269, "raw_key_size": 111043, "raw_average_key_size": 24, "raw_value_size": 7954723, "raw_average_value_size": 1771, "num_data_blocks": 823, "num_entries": 4491, "num_filter_entries": 4491, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599020, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.629282) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 8070040 bytes
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.639234) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 189.8 rd, 187.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 6.7 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(14.0) write-amplify(6.9) OK, records in: 5025, records dropped: 534 output_compression: NoCompression
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.639272) EVENT_LOG_v1 {"time_micros": 1765599020639258, "job": 22, "event": "compaction_finished", "compaction_time_micros": 43156, "compaction_time_cpu_micros": 18900, "output_level": 6, "num_output_files": 1, "total_output_size": 8070040, "num_input_records": 5025, "num_output_records": 4491, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599020639912, "job": 22, "event": "table_file_deletion", "file_number": 46}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599020641605, "job": 22, "event": "table_file_deletion", "file_number": 44}
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.585440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.641689) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.641703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.641704) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.641706) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:10:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:10:20.641707) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:10:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 86 op/s
Dec 13 04:10:21 compute-0 nova_compute[243704]: 2025-12-13 04:10:21.861 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:22 compute-0 ceph-mon[75071]: pgmap v973: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 3.2 KiB/s wr, 86 op/s
Dec 13 04:10:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 65 op/s
Dec 13 04:10:23 compute-0 nova_compute[243704]: 2025-12-13 04:10:23.696 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:25 compute-0 ceph-mon[75071]: pgmap v974: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 49 KiB/s rd, 2.4 KiB/s wr, 65 op/s
Dec 13 04:10:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.8 KiB/s wr, 64 op/s
Dec 13 04:10:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Dec 13 04:10:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068549324' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3068549324' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Dec 13 04:10:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Dec 13 04:10:26 compute-0 ceph-mon[75071]: pgmap v975: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.8 KiB/s wr, 64 op/s
Dec 13 04:10:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3068549324' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3068549324' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:26 compute-0 ceph-mon[75071]: osdmap e184: 3 total, 3 up, 3 in
Dec 13 04:10:26 compute-0 nova_compute[243704]: 2025-12-13 04:10:26.880 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.7 KiB/s wr, 62 op/s
Dec 13 04:10:27 compute-0 nova_compute[243704]: 2025-12-13 04:10:27.869 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:27 compute-0 nova_compute[243704]: 2025-12-13 04:10:27.870 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:27 compute-0 nova_compute[243704]: 2025-12-13 04:10:27.897 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:10:27 compute-0 nova_compute[243704]: 2025-12-13 04:10:27.987 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:27 compute-0 nova_compute[243704]: 2025-12-13 04:10:27.988 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:27 compute-0 nova_compute[243704]: 2025-12-13 04:10:27.995 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:10:27 compute-0 nova_compute[243704]: 2025-12-13 04:10:27.996 243708 INFO nova.compute.claims [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.079 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:28 compute-0 ceph-mon[75071]: pgmap v977: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 2.7 KiB/s wr, 62 op/s
Dec 13 04:10:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:10:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2294891014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.636 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.646 243708 DEBUG nova.compute.provider_tree [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.667 243708 DEBUG nova.scheduler.client.report [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.700 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.724 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.726 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.885 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.886 243708 DEBUG nova.network.neutron [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:10:28 compute-0 podman[251549]: 2025-12-13 04:10:28.921888164 +0000 UTC m=+0.069549779 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.942 243708 INFO nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:10:28 compute-0 nova_compute[243704]: 2025-12-13 04:10:28.988 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.189 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.190 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.191 243708 INFO nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Creating image(s)
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.219 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.245 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.270 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.275 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.301 243708 DEBUG nova.policy [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'de1aafde9d2140d980c61f6583078e45', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c3d14ae134004022846080df2141ba48', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:10:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.5 KiB/s wr, 44 op/s
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.355 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.356 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.357 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.358 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.379 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.383 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2294891014' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:10:29 compute-0 nova_compute[243704]: 2025-12-13 04:10:29.936 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.011 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] resizing rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.083 243708 DEBUG nova.objects.instance [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lazy-loading 'migration_context' on Instance uuid 4deaddf6-080d-4685-aeeb-41d5dff923fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.093 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.094 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Ensure instance console log exists: /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.094 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.094 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.095 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:30 compute-0 nova_compute[243704]: 2025-12-13 04:10:30.553 243708 DEBUG nova.network.neutron [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Successfully created port: 074f5411-8798-446e-b452-7d76b42c954d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:10:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e184 do_prune osdmap full prune enabled
Dec 13 04:10:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e185 e185: 3 total, 3 up, 3 in
Dec 13 04:10:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e185: 3 total, 3 up, 3 in
Dec 13 04:10:30 compute-0 ceph-mon[75071]: pgmap v978: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.5 KiB/s wr, 44 op/s
Dec 13 04:10:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e185 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.271 243708 DEBUG nova.network.neutron [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Successfully updated port: 074f5411-8798-446e-b452-7d76b42c954d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.284 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.285 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquired lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.285 243708 DEBUG nova.network.neutron [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:10:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v980: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.9 KiB/s wr, 24 op/s
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.464 243708 DEBUG nova.compute.manager [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-changed-074f5411-8798-446e-b452-7d76b42c954d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.464 243708 DEBUG nova.compute.manager [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Refreshing instance network info cache due to event network-changed-074f5411-8798-446e-b452-7d76b42c954d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.465 243708 DEBUG oslo_concurrency.lockutils [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:10:31 compute-0 ceph-mon[75071]: osdmap e185: 3 total, 3 up, 3 in
Dec 13 04:10:31 compute-0 ceph-mon[75071]: pgmap v980: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.9 KiB/s wr, 24 op/s
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.882 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:31 compute-0 nova_compute[243704]: 2025-12-13 04:10:31.937 243708 DEBUG nova.network.neutron [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.595 243708 DEBUG nova.network.neutron [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Updating instance_info_cache with network_info: [{"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.610 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Releasing lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.610 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Instance network_info: |[{"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.611 243708 DEBUG oslo_concurrency.lockutils [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.611 243708 DEBUG nova.network.neutron [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Refreshing network info cache for port 074f5411-8798-446e-b452-7d76b42c954d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.614 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Start _get_guest_xml network_info=[{"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.619 243708 WARNING nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.628 243708 DEBUG nova.virt.libvirt.host [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.628 243708 DEBUG nova.virt.libvirt.host [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.632 243708 DEBUG nova.virt.libvirt.host [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.633 243708 DEBUG nova.virt.libvirt.host [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.634 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.634 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.634 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.635 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.635 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.635 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.635 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.636 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.636 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.636 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.636 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.637 243708 DEBUG nova.virt.hardware [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:10:32 compute-0 nova_compute[243704]: 2025-12-13 04:10:32.640 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e185 do_prune osdmap full prune enabled
Dec 13 04:10:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e186 e186: 3 total, 3 up, 3 in
Dec 13 04:10:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e186: 3 total, 3 up, 3 in
Dec 13 04:10:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:10:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1285027007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.221 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.250 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.255 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 968 B/s wr, 12 op/s
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.567 243708 DEBUG nova.network.neutron [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Updated VIF entry in instance network info cache for port 074f5411-8798-446e-b452-7d76b42c954d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.568 243708 DEBUG nova.network.neutron [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Updating instance_info_cache with network_info: [{"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.586 243708 DEBUG oslo_concurrency.lockutils [req-0e4a073d-a619-4d56-bb46-e225a219be44 req-4bcbbe51-0e01-4b5a-98e4-8f3e39be74f4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:10:33 compute-0 ceph-mon[75071]: osdmap e186: 3 total, 3 up, 3 in
Dec 13 04:10:33 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1285027007' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:33 compute-0 ceph-mon[75071]: pgmap v982: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 968 B/s wr, 12 op/s
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.748 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:10:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2689232088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.791 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.793 243708 DEBUG nova.virt.libvirt.vif [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-964029391',display_name='tempest-VolumesExtendAttachedTest-instance-964029391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-964029391',id=2,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI5UexJ8AwCWnAIh+sfHwZbK+1SEG8ijIqvyI44vqlKd3ExkvLU7c7ZGfD9nIM/8cm/LYKl3LRJusT1xZJ25hV98ScoKWAcBeBs2cLBKuv0K7VrP3NVZPgMp6dG5vMNLag==',key_name='tempest-keypair-1992151067',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3d14ae134004022846080df2141ba48',ramdisk_id='',reservation_id='r-i02zeskj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-2109239140',owner_user_name='tempest-VolumesExtendAttachedTest-2109239140-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:10:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='de1aafde9d2140d980c61f6583078e45',uuid=4deaddf6-080d-4685-aeeb-41d5dff923fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.795 243708 DEBUG nova.network.os_vif_util [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Converting VIF {"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.797 243708 DEBUG nova.network.os_vif_util [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:07:6b,bridge_name='br-int',has_traffic_filtering=True,id=074f5411-8798-446e-b452-7d76b42c954d,network=Network(366fdcef-6d1d-4ac6-b80f-1662f1648a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074f5411-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.798 243708 DEBUG nova.objects.instance [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4deaddf6-080d-4685-aeeb-41d5dff923fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.810 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <uuid>4deaddf6-080d-4685-aeeb-41d5dff923fd</uuid>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <name>instance-00000002</name>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesExtendAttachedTest-instance-964029391</nova:name>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:10:32</nova:creationTime>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:user uuid="de1aafde9d2140d980c61f6583078e45">tempest-VolumesExtendAttachedTest-2109239140-project-member</nova:user>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:project uuid="c3d14ae134004022846080df2141ba48">tempest-VolumesExtendAttachedTest-2109239140</nova:project>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <nova:port uuid="074f5411-8798-446e-b452-7d76b42c954d">
Dec 13 04:10:33 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <system>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <entry name="serial">4deaddf6-080d-4685-aeeb-41d5dff923fd</entry>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <entry name="uuid">4deaddf6-080d-4685-aeeb-41d5dff923fd</entry>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </system>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <os>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   </os>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <features>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   </features>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/4deaddf6-080d-4685-aeeb-41d5dff923fd_disk">
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       </source>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/4deaddf6-080d-4685-aeeb-41d5dff923fd_disk.config">
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       </source>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:10:33 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:cb:07:6b"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <target dev="tap074f5411-87"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/console.log" append="off"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <video>
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </video>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:10:33 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:10:33 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:10:33 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:10:33 compute-0 nova_compute[243704]: </domain>
Dec 13 04:10:33 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.812 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Preparing to wait for external event network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.812 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.813 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.813 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.814 243708 DEBUG nova.virt.libvirt.vif [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-964029391',display_name='tempest-VolumesExtendAttachedTest-instance-964029391',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-964029391',id=2,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI5UexJ8AwCWnAIh+sfHwZbK+1SEG8ijIqvyI44vqlKd3ExkvLU7c7ZGfD9nIM/8cm/LYKl3LRJusT1xZJ25hV98ScoKWAcBeBs2cLBKuv0K7VrP3NVZPgMp6dG5vMNLag==',key_name='tempest-keypair-1992151067',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c3d14ae134004022846080df2141ba48',ramdisk_id='',reservation_id='r-i02zeskj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesExtendAttachedTest-2109239140',owner_user_name='tempest-VolumesExtendAttachedTest-2109239140-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:10:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='de1aafde9d2140d980c61f6583078e45',uuid=4deaddf6-080d-4685-aeeb-41d5dff923fd,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.814 243708 DEBUG nova.network.os_vif_util [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Converting VIF {"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.815 243708 DEBUG nova.network.os_vif_util [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cb:07:6b,bridge_name='br-int',has_traffic_filtering=True,id=074f5411-8798-446e-b452-7d76b42c954d,network=Network(366fdcef-6d1d-4ac6-b80f-1662f1648a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074f5411-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.815 243708 DEBUG os_vif [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:07:6b,bridge_name='br-int',has_traffic_filtering=True,id=074f5411-8798-446e-b452-7d76b42c954d,network=Network(366fdcef-6d1d-4ac6-b80f-1662f1648a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074f5411-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.816 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.816 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.817 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.821 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.821 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap074f5411-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.822 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap074f5411-87, col_values=(('external_ids', {'iface-id': '074f5411-8798-446e-b452-7d76b42c954d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cb:07:6b', 'vm-uuid': '4deaddf6-080d-4685-aeeb-41d5dff923fd'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.823 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:33 compute-0 NetworkManager[48899]: <info>  [1765599033.8253] manager: (tap074f5411-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.826 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.831 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.831 243708 INFO os_vif [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cb:07:6b,bridge_name='br-int',has_traffic_filtering=True,id=074f5411-8798-446e-b452-7d76b42c954d,network=Network(366fdcef-6d1d-4ac6-b80f-1662f1648a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074f5411-87')
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.879 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.880 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.880 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] No VIF found with MAC fa:16:3e:cb:07:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.880 243708 INFO nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Using config drive
Dec 13 04:10:33 compute-0 nova_compute[243704]: 2025-12-13 04:10:33.904 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.273 243708 INFO nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Creating config drive at /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/disk.config
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.280 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr4wa777s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.412 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr4wa777s" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.456 243708 DEBUG nova.storage.rbd_utils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] rbd image 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.461 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/disk.config 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.602 243708 DEBUG oslo_concurrency.processutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/disk.config 4deaddf6-080d-4685-aeeb-41d5dff923fd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.603 243708 INFO nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Deleting local config drive /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd/disk.config because it was imported into RBD.
Dec 13 04:10:34 compute-0 kernel: tap074f5411-87: entered promiscuous mode
Dec 13 04:10:34 compute-0 NetworkManager[48899]: <info>  [1765599034.6679] manager: (tap074f5411-87): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec 13 04:10:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1172560170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.668 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:34 compute-0 ovn_controller[145204]: 2025-12-13T04:10:34Z|00036|binding|INFO|Claiming lport 074f5411-8798-446e-b452-7d76b42c954d for this chassis.
Dec 13 04:10:34 compute-0 ovn_controller[145204]: 2025-12-13T04:10:34Z|00037|binding|INFO|074f5411-8798-446e-b452-7d76b42c954d: Claiming fa:16:3e:cb:07:6b 10.100.0.14
Dec 13 04:10:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1172560170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.676 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.684 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:07:6b 10.100.0.14'], port_security=['fa:16:3e:cb:07:6b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4deaddf6-080d-4685-aeeb-41d5dff923fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3d14ae134004022846080df2141ba48', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e90564da-fa63-4583-afc4-6c2804c97930', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e823bbb9-8968-4b84-bd4b-51ac190b120c, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=074f5411-8798-446e-b452-7d76b42c954d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.686 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 074f5411-8798-446e-b452-7d76b42c954d in datapath 366fdcef-6d1d-4ac6-b80f-1662f1648a35 bound to our chassis
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.688 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 366fdcef-6d1d-4ac6-b80f-1662f1648a35
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.704 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[834582a9-6300-476d-8773-ce8821c0fa11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.706 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap366fdcef-61 in ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:10:34 compute-0 systemd-udevd[251870]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:10:34 compute-0 systemd-machined[206767]: New machine qemu-2-instance-00000002.
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.709 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap366fdcef-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.709 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[878f4f97-1c70-4106-8802-4aa72e498e21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.710 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0a3147-e898-4bc3-bbef-ed67b004c2af]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 NetworkManager[48899]: <info>  [1765599034.7257] device (tap074f5411-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:10:34 compute-0 NetworkManager[48899]: <info>  [1765599034.7268] device (tap074f5411-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.726 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[f1714ba3-d0d4-473c-bbd2-8375dfcd2b71]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec 13 04:10:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2689232088' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1172560170' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1172560170' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.759 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[15962d37-6558-4357-83d7-adaa4ea7c489]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 ovn_controller[145204]: 2025-12-13T04:10:34Z|00038|binding|INFO|Setting lport 074f5411-8798-446e-b452-7d76b42c954d ovn-installed in OVS
Dec 13 04:10:34 compute-0 ovn_controller[145204]: 2025-12-13T04:10:34Z|00039|binding|INFO|Setting lport 074f5411-8798-446e-b452-7d76b42c954d up in Southbound
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.841 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.846 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[73b5d20f-91b4-4ca2-8402-172ab511ebac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 NetworkManager[48899]: <info>  [1765599034.8549] manager: (tap366fdcef-60): new Veth device (/org/freedesktop/NetworkManager/Devices/31)
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.854 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[221a5907-51d1-4126-ac85-b8c5fd912de6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 systemd-udevd[251873]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:10:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2357099794' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2357099794' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.888 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[d9792419-88b2-434c-8c4c-0a6b8a071f2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.892 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[f94a1392-7c19-4eb1-9418-e6972df99039]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 NetworkManager[48899]: <info>  [1765599034.9214] device (tap366fdcef-60): carrier: link connected
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.929 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[7d097619-4eb5-4819-b285-6225f19a94b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.945 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0930ca48-0a46-4954-86c3-0c360c7bfd3d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap366fdcef-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:01:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377360, 'reachable_time': 30241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251903, 'error': None, 'target': 'ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.965 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6c32d2aa-a415-4171-8e5a-425612249a97]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0a:12b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 377360, 'tstamp': 377360}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251904, 'error': None, 'target': 'ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:34.984 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[318d7844-7e88-44fc-9564-1a12bab044d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap366fdcef-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0a:01:2b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377360, 'reachable_time': 30241, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251905, 'error': None, 'target': 'ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.989 243708 DEBUG nova.compute.manager [req-f9bd3404-48f7-4751-acf6-470b0b612361 req-bc7bd1b2-2a91-44df-9acd-5930c08080d1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.990 243708 DEBUG oslo_concurrency.lockutils [req-f9bd3404-48f7-4751-acf6-470b0b612361 req-bc7bd1b2-2a91-44df-9acd-5930c08080d1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.990 243708 DEBUG oslo_concurrency.lockutils [req-f9bd3404-48f7-4751-acf6-470b0b612361 req-bc7bd1b2-2a91-44df-9acd-5930c08080d1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.990 243708 DEBUG oslo_concurrency.lockutils [req-f9bd3404-48f7-4751-acf6-470b0b612361 req-bc7bd1b2-2a91-44df-9acd-5930c08080d1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:34 compute-0 nova_compute[243704]: 2025-12-13 04:10:34.990 243708 DEBUG nova.compute.manager [req-f9bd3404-48f7-4751-acf6-470b0b612361 req-bc7bd1b2-2a91-44df-9acd-5930c08080d1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Processing event network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.025 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2275fa21-f8cf-4a4c-a288-7a46c61ef0e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.085 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.086 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.086 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.101 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2dae99-d77c-4101-ae5f-5dcd634df3a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.103 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap366fdcef-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.103 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.103 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap366fdcef-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:10:35 compute-0 kernel: tap366fdcef-60: entered promiscuous mode
Dec 13 04:10:35 compute-0 NetworkManager[48899]: <info>  [1765599035.1057] manager: (tap366fdcef-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.105 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.107 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap366fdcef-60, col_values=(('external_ids', {'iface-id': 'b9695540-be9c-4102-a527-35b3961a1395'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:10:35 compute-0 ovn_controller[145204]: 2025-12-13T04:10:35Z|00040|binding|INFO|Releasing lport b9695540-be9c-4102-a527-35b3961a1395 from this chassis (sb_readonly=0)
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.124 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.125 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/366fdcef-6d1d-4ac6-b80f-1662f1648a35.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/366fdcef-6d1d-4ac6-b80f-1662f1648a35.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.126 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2eaea998-1ee5-4ac1-815e-d15657433d3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.127 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-366fdcef-6d1d-4ac6-b80f-1662f1648a35
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/366fdcef-6d1d-4ac6-b80f-1662f1648a35.pid.haproxy
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 366fdcef-6d1d-4ac6-b80f-1662f1648a35
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:10:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:35.128 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'env', 'PROCESS_TAG=haproxy-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/366fdcef-6d1d-4ac6-b80f-1662f1648a35.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:10:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.437 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599035.436624, 4deaddf6-080d-4685-aeeb-41d5dff923fd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.438 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] VM Started (Lifecycle Event)
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.442 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.447 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.452 243708 INFO nova.virt.libvirt.driver [-] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Instance spawned successfully.
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.453 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.472 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.478 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.481 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.481 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.482 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.482 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.483 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.483 243708 DEBUG nova.virt.libvirt.driver [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.520 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.521 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599035.4371443, 4deaddf6-080d-4685-aeeb-41d5dff923fd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.521 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] VM Paused (Lifecycle Event)
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.568 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:10:35 compute-0 podman[251978]: 2025-12-13 04:10:35.570617746 +0000 UTC m=+0.065389906 container create c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.582 243708 INFO nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Took 6.39 seconds to spawn the instance on the hypervisor.
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.582 243708 DEBUG nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.583 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599035.4467108, 4deaddf6-080d-4685-aeeb-41d5dff923fd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.583 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] VM Resumed (Lifecycle Event)
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.617 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.628 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:10:35 compute-0 podman[251978]: 2025-12-13 04:10:35.542507973 +0000 UTC m=+0.037280123 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:10:35 compute-0 systemd[1]: Started libpod-conmon-c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29.scope.
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.652 243708 INFO nova.compute.manager [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Took 7.70 seconds to build instance.
Dec 13 04:10:35 compute-0 nova_compute[243704]: 2025-12-13 04:10:35.666 243708 DEBUG oslo_concurrency.lockutils [None req-38f63cdb-7527-4d3a-a4d3-5bfc0802010c de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.796s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:10:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5946d45f21b2437f8970261c8054185a9ddc31abc7223949e909277b550eba17/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:10:35 compute-0 podman[251978]: 2025-12-13 04:10:35.702870336 +0000 UTC m=+0.197642486 container init c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 04:10:35 compute-0 podman[251978]: 2025-12-13 04:10:35.712857547 +0000 UTC m=+0.207629677 container start c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 04:10:35 compute-0 neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35[251993]: [NOTICE]   (251997) : New worker (251999) forked
Dec 13 04:10:35 compute-0 neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35[251993]: [NOTICE]   (251997) : Loading success.
Dec 13 04:10:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2357099794' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2357099794' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:35 compute-0 ceph-mon[75071]: pgmap v983: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.7 MiB/s wr, 76 op/s
Dec 13 04:10:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e186 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e186 do_prune osdmap full prune enabled
Dec 13 04:10:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e187 e187: 3 total, 3 up, 3 in
Dec 13 04:10:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e187: 3 total, 3 up, 3 in
Dec 13 04:10:36 compute-0 ceph-mon[75071]: osdmap e187: 3 total, 3 up, 3 in
Dec 13 04:10:36 compute-0 nova_compute[243704]: 2025-12-13 04:10:36.885 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.152 243708 DEBUG nova.compute.manager [req-ce9c3264-fd30-4578-ae2e-ff002339f7f2 req-c5e30fb2-4b77-4038-bb33-adddf6499837 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.153 243708 DEBUG oslo_concurrency.lockutils [req-ce9c3264-fd30-4578-ae2e-ff002339f7f2 req-c5e30fb2-4b77-4038-bb33-adddf6499837 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.153 243708 DEBUG oslo_concurrency.lockutils [req-ce9c3264-fd30-4578-ae2e-ff002339f7f2 req-c5e30fb2-4b77-4038-bb33-adddf6499837 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.153 243708 DEBUG oslo_concurrency.lockutils [req-ce9c3264-fd30-4578-ae2e-ff002339f7f2 req-c5e30fb2-4b77-4038-bb33-adddf6499837 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.154 243708 DEBUG nova.compute.manager [req-ce9c3264-fd30-4578-ae2e-ff002339f7f2 req-c5e30fb2-4b77-4038-bb33-adddf6499837 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] No waiting events found dispatching network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.154 243708 WARNING nova.compute.manager [req-ce9c3264-fd30-4578-ae2e-ff002339f7f2 req-c5e30fb2-4b77-4038-bb33-adddf6499837 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received unexpected event network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d for instance with vm_state active and task_state None.
Dec 13 04:10:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1756908945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1756908945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.2 MiB/s wr, 79 op/s
Dec 13 04:10:37 compute-0 NetworkManager[48899]: <info>  [1765599037.8298] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Dec 13 04:10:37 compute-0 NetworkManager[48899]: <info>  [1765599037.8310] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.840 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1756908945' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1756908945' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:37 compute-0 ceph-mon[75071]: pgmap v985: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 3.2 MiB/s wr, 79 op/s
Dec 13 04:10:37 compute-0 ovn_controller[145204]: 2025-12-13T04:10:37Z|00041|binding|INFO|Releasing lport b9695540-be9c-4102-a527-35b3961a1395 from this chassis (sb_readonly=0)
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.924 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:37 compute-0 nova_compute[243704]: 2025-12-13 04:10:37.931 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:37 compute-0 podman[252008]: 2025-12-13 04:10:37.941990194 +0000 UTC m=+0.071155172 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:10:38 compute-0 nova_compute[243704]: 2025-12-13 04:10:38.197 243708 DEBUG nova.compute.manager [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-changed-074f5411-8798-446e-b452-7d76b42c954d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:10:38 compute-0 nova_compute[243704]: 2025-12-13 04:10:38.198 243708 DEBUG nova.compute.manager [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Refreshing instance network info cache due to event network-changed-074f5411-8798-446e-b452-7d76b42c954d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:10:38 compute-0 nova_compute[243704]: 2025-12-13 04:10:38.198 243708 DEBUG oslo_concurrency.lockutils [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:10:38 compute-0 nova_compute[243704]: 2025-12-13 04:10:38.198 243708 DEBUG oslo_concurrency.lockutils [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:10:38 compute-0 nova_compute[243704]: 2025-12-13 04:10:38.198 243708 DEBUG nova.network.neutron [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Refreshing network info cache for port 074f5411-8798-446e-b452-7d76b42c954d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:10:38 compute-0 nova_compute[243704]: 2025-12-13 04:10:38.824 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Dec 13 04:10:39 compute-0 nova_compute[243704]: 2025-12-13 04:10:39.986 243708 DEBUG nova.network.neutron [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Updated VIF entry in instance network info cache for port 074f5411-8798-446e-b452-7d76b42c954d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:10:39 compute-0 nova_compute[243704]: 2025-12-13 04:10:39.987 243708 DEBUG nova.network.neutron [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Updating instance_info_cache with network_info: [{"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:10:40 compute-0 nova_compute[243704]: 2025-12-13 04:10:40.233 243708 DEBUG oslo_concurrency.lockutils [req-5fd98bdb-1917-4420-a446-f6e0eaf39b0c req-77e1fd51-c67d-43ec-9506-1233685fc30d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-4deaddf6-080d-4685-aeeb-41d5dff923fd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:10:40 compute-0 ceph-mon[75071]: pgmap v986: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 210 op/s
Dec 13 04:10:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:10:40
Dec 13 04:10:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:10:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:10:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', 'default.rgw.meta', '.rgw.root', '.mgr', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'default.rgw.control']
Dec 13 04:10:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:10:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e187 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:41.077 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:10:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:41.078 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:10:41 compute-0 nova_compute[243704]: 2025-12-13 04:10:41.079 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v987: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 221 op/s
Dec 13 04:10:41 compute-0 nova_compute[243704]: 2025-12-13 04:10:41.887 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:10:42 compute-0 ceph-mon[75071]: pgmap v987: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.5 MiB/s wr, 221 op/s
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:10:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:10:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Dec 13 04:10:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e187 do_prune osdmap full prune enabled
Dec 13 04:10:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e188 e188: 3 total, 3 up, 3 in
Dec 13 04:10:43 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e188: 3 total, 3 up, 3 in
Dec 13 04:10:43 compute-0 nova_compute[243704]: 2025-12-13 04:10:43.829 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:44 compute-0 ceph-mon[75071]: pgmap v988: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 190 op/s
Dec 13 04:10:44 compute-0 ceph-mon[75071]: osdmap e188: 3 total, 3 up, 3 in
Dec 13 04:10:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2102794604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2102794604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v990: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 164 op/s
Dec 13 04:10:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2102794604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2102794604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/471560491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/471560491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:46 compute-0 ceph-mon[75071]: pgmap v990: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 21 KiB/s wr, 164 op/s
Dec 13 04:10:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/471560491' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/471560491' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3703454977' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3703454977' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:46 compute-0 nova_compute[243704]: 2025-12-13 04:10:46.936 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:10:47.080 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:10:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 155 op/s
Dec 13 04:10:47 compute-0 ovn_controller[145204]: 2025-12-13T04:10:47Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cb:07:6b 10.100.0.14
Dec 13 04:10:47 compute-0 ovn_controller[145204]: 2025-12-13T04:10:47Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cb:07:6b 10.100.0.14
Dec 13 04:10:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3703454977' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3703454977' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4086524109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4086524109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:47 compute-0 podman[252028]: 2025-12-13 04:10:47.938977421 +0000 UTC m=+0.087855936 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:10:48 compute-0 ceph-mon[75071]: pgmap v991: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 20 KiB/s wr, 155 op/s
Dec 13 04:10:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4086524109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4086524109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:48 compute-0 nova_compute[243704]: 2025-12-13 04:10:48.832 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 843 KiB/s rd, 1.8 MiB/s wr, 124 op/s
Dec 13 04:10:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1974620361' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1974620361' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1974620361' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1974620361' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:50 compute-0 ceph-mon[75071]: pgmap v992: 305 pgs: 305 active+clean; 153 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 843 KiB/s rd, 1.8 MiB/s wr, 124 op/s
Dec 13 04:10:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e188 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 161 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 138 op/s
Dec 13 04:10:51 compute-0 nova_compute[243704]: 2025-12-13 04:10:51.939 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007548053694507251 of space, bias 1.0, pg target 0.22644161083521752 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003510305655920902 of space, bias 1.0, pg target 0.10530916967762706 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5689640948104386e-07 of space, bias 1.0, pg target 4.706892284431316e-05 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658724294072399 of space, bias 1.0, pg target 0.19976172882217197 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2281576450847435e-06 of space, bias 4.0, pg target 0.0014737891741016921 quantized to 16 (current 16)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:10:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:10:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e188 do_prune osdmap full prune enabled
Dec 13 04:10:52 compute-0 ceph-mon[75071]: pgmap v993: 305 pgs: 305 active+clean; 161 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 385 KiB/s rd, 2.5 MiB/s wr, 138 op/s
Dec 13 04:10:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 e189: 3 total, 3 up, 3 in
Dec 13 04:10:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e189: 3 total, 3 up, 3 in
Dec 13 04:10:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 161 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 140 op/s
Dec 13 04:10:53 compute-0 ceph-mon[75071]: osdmap e189: 3 total, 3 up, 3 in
Dec 13 04:10:53 compute-0 nova_compute[243704]: 2025-12-13 04:10:53.835 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:53 compute-0 nova_compute[243704]: 2025-12-13 04:10:53.891 243708 DEBUG oslo_concurrency.lockutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:53 compute-0 nova_compute[243704]: 2025-12-13 04:10:53.892 243708 DEBUG oslo_concurrency.lockutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:53 compute-0 nova_compute[243704]: 2025-12-13 04:10:53.906 243708 DEBUG nova.objects.instance [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lazy-loading 'flavor' on Instance uuid 4deaddf6-080d-4685-aeeb-41d5dff923fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:10:53 compute-0 nova_compute[243704]: 2025-12-13 04:10:53.924 243708 INFO nova.virt.libvirt.driver [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Ignoring supplied device name: /dev/vdb
Dec 13 04:10:53 compute-0 nova_compute[243704]: 2025-12-13 04:10:53.939 243708 DEBUG oslo_concurrency.lockutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.156 243708 DEBUG oslo_concurrency.lockutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.157 243708 DEBUG oslo_concurrency.lockutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.157 243708 INFO nova.compute.manager [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Attaching volume edac7b06-4371-4cf7-8269-9cad8ff0d24e to /dev/vdb
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.280 243708 DEBUG os_brick.utils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.281 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.291 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.291 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[713c5965-0bf0-4ab0-8407-e47c5f93144f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.292 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.300 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.300 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[a0599528-2c0e-46d5-b1c7-aeb1781b1f7d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.303 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.311 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.311 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[1c6b12bf-27d6-4cbe-879d-2460b4b967dc]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.312 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[a287767d-0017-464d-815f-aa42e9f01184]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.313 243708 DEBUG oslo_concurrency.processutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.333 243708 DEBUG oslo_concurrency.processutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.335 243708 DEBUG os_brick.initiator.connectors.lightos [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.335 243708 DEBUG os_brick.initiator.connectors.lightos [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.335 243708 DEBUG os_brick.initiator.connectors.lightos [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.336 243708 DEBUG os_brick.utils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:10:54 compute-0 nova_compute[243704]: 2025-12-13 04:10:54.336 243708 DEBUG nova.virt.block_device [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Updating existing volume attachment record: f1e6504c-0f9e-449a-b5bd-cfa3ea0e6b0f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:10:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2830653063' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2830653063' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:54 compute-0 ceph-mon[75071]: pgmap v995: 305 pgs: 305 active+clean; 161 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 392 KiB/s rd, 2.6 MiB/s wr, 140 op/s
Dec 13 04:10:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2830653063' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2830653063' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:10:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2705257743' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.147 243708 DEBUG nova.objects.instance [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lazy-loading 'flavor' on Instance uuid 4deaddf6-080d-4685-aeeb-41d5dff923fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.171 243708 DEBUG nova.virt.libvirt.driver [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Attempting to attach volume edac7b06-4371-4cf7-8269-9cad8ff0d24e with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.174 243708 DEBUG nova.virt.libvirt.guest [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:10:55 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:10:55 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-edac7b06-4371-4cf7-8269-9cad8ff0d24e">
Dec 13 04:10:55 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:10:55 compute-0 nova_compute[243704]:   </source>
Dec 13 04:10:55 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:10:55 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:10:55 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:10:55 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:10:55 compute-0 nova_compute[243704]:   <serial>edac7b06-4371-4cf7-8269-9cad8ff0d24e</serial>
Dec 13 04:10:55 compute-0 nova_compute[243704]: </disk>
Dec 13 04:10:55 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.274 243708 DEBUG nova.virt.libvirt.driver [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.274 243708 DEBUG nova.virt.libvirt.driver [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.275 243708 DEBUG nova.virt.libvirt.driver [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.275 243708 DEBUG nova.virt.libvirt.driver [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] No VIF found with MAC fa:16:3e:cb:07:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:10:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3368620471' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3368620471' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 2.6 MiB/s wr, 167 op/s
Dec 13 04:10:55 compute-0 nova_compute[243704]: 2025-12-13 04:10:55.435 243708 DEBUG oslo_concurrency.lockutils [None req-51c07a87-551a-4e66-a9f0-92d1565abfd9 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.279s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2705257743' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:10:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3368620471' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3368620471' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:55 compute-0 ceph-mon[75071]: pgmap v996: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 2.6 MiB/s wr, 167 op/s
Dec 13 04:10:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:10:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3800543656' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3800543656' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3800543656' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3800543656' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:56 compute-0 nova_compute[243704]: 2025-12-13 04:10:56.941 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:57 compute-0 nova_compute[243704]: 2025-12-13 04:10:57.295 243708 DEBUG nova.compute.manager [req-8ccc60f1-73f4-4eb2-b6c7-f5a7500e75a0 req-65d2feae-76bb-41cc-9728-249a7b8b2707 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event volume-extended-edac7b06-4371-4cf7-8269-9cad8ff0d24e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:10:57 compute-0 nova_compute[243704]: 2025-12-13 04:10:57.308 243708 DEBUG nova.compute.manager [req-8ccc60f1-73f4-4eb2-b6c7-f5a7500e75a0 req-65d2feae-76bb-41cc-9728-249a7b8b2707 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Handling volume-extended event for volume edac7b06-4371-4cf7-8269-9cad8ff0d24e extend_volume /usr/lib/python3.9/site-packages/nova/compute/manager.py:10896
Dec 13 04:10:57 compute-0 nova_compute[243704]: 2025-12-13 04:10:57.320 243708 INFO nova.compute.manager [req-8ccc60f1-73f4-4eb2-b6c7-f5a7500e75a0 req-65d2feae-76bb-41cc-9728-249a7b8b2707 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Cinder extended volume edac7b06-4371-4cf7-8269-9cad8ff0d24e; extending it to detect new size
Dec 13 04:10:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 2.6 MiB/s wr, 167 op/s
Dec 13 04:10:57 compute-0 nova_compute[243704]: 2025-12-13 04:10:57.443 243708 DEBUG nova.virt.libvirt.driver [req-8ccc60f1-73f4-4eb2-b6c7-f5a7500e75a0 req-65d2feae-76bb-41cc-9728-249a7b8b2707 90dfd00980dc437489f6bbdcfd7d1f95 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Resizing target device vdb to 2147483648 _resize_attached_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2756
Dec 13 04:10:57 compute-0 ceph-mon[75071]: pgmap v997: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 455 KiB/s rd, 2.6 MiB/s wr, 167 op/s
Dec 13 04:10:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:10:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/557795208' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:10:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/557795208' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.645 243708 DEBUG oslo_concurrency.lockutils [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.646 243708 DEBUG oslo_concurrency.lockutils [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.657 243708 INFO nova.compute.manager [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Detaching volume edac7b06-4371-4cf7-8269-9cad8ff0d24e
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.784 243708 INFO nova.virt.block_device [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Attempting to driver detach volume edac7b06-4371-4cf7-8269-9cad8ff0d24e from mountpoint /dev/vdb
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.793 243708 DEBUG nova.virt.libvirt.driver [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Attempting to detach device vdb from instance 4deaddf6-080d-4685-aeeb-41d5dff923fd from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.793 243708 DEBUG nova.virt.libvirt.guest [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-edac7b06-4371-4cf7-8269-9cad8ff0d24e">
Dec 13 04:10:58 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   </source>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <serial>edac7b06-4371-4cf7-8269-9cad8ff0d24e</serial>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]: </disk>
Dec 13 04:10:58 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.802 243708 INFO nova.virt.libvirt.driver [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Successfully detached device vdb from instance 4deaddf6-080d-4685-aeeb-41d5dff923fd from the persistent domain config.
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.802 243708 DEBUG nova.virt.libvirt.driver [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 4deaddf6-080d-4685-aeeb-41d5dff923fd from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.803 243708 DEBUG nova.virt.libvirt.guest [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-edac7b06-4371-4cf7-8269-9cad8ff0d24e">
Dec 13 04:10:58 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   </source>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <serial>edac7b06-4371-4cf7-8269-9cad8ff0d24e</serial>
Dec 13 04:10:58 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:10:58 compute-0 nova_compute[243704]: </disk>
Dec 13 04:10:58 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.837 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.859 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599058.8586228, 4deaddf6-080d-4685-aeeb-41d5dff923fd => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.860 243708 DEBUG nova.virt.libvirt.driver [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 4deaddf6-080d-4685-aeeb-41d5dff923fd _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:10:58 compute-0 nova_compute[243704]: 2025-12-13 04:10:58.862 243708 INFO nova.virt.libvirt.driver [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Successfully detached device vdb from instance 4deaddf6-080d-4685-aeeb-41d5dff923fd from the live domain config.
Dec 13 04:10:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/557795208' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:10:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/557795208' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:10:59 compute-0 nova_compute[243704]: 2025-12-13 04:10:59.192 243708 DEBUG nova.objects.instance [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lazy-loading 'flavor' on Instance uuid 4deaddf6-080d-4685-aeeb-41d5dff923fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:10:59 compute-0 nova_compute[243704]: 2025-12-13 04:10:59.228 243708 DEBUG oslo_concurrency.lockutils [None req-d361761d-71c8-423e-b1a7-d573f178a918 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:10:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v998: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 229 KiB/s rd, 778 KiB/s wr, 130 op/s
Dec 13 04:10:59 compute-0 podman[252084]: 2025-12-13 04:10:59.91181749 +0000 UTC m=+0.061003217 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:10:59 compute-0 ceph-mon[75071]: pgmap v998: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 229 KiB/s rd, 778 KiB/s wr, 130 op/s
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.092 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.093 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.093 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.093 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.094 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.095 243708 INFO nova.compute.manager [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Terminating instance
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.096 243708 DEBUG nova.compute.manager [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:11:00 compute-0 kernel: tap074f5411-87 (unregistering): left promiscuous mode
Dec 13 04:11:00 compute-0 NetworkManager[48899]: <info>  [1765599060.1810] device (tap074f5411-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:11:00 compute-0 ovn_controller[145204]: 2025-12-13T04:11:00Z|00042|binding|INFO|Releasing lport 074f5411-8798-446e-b452-7d76b42c954d from this chassis (sb_readonly=0)
Dec 13 04:11:00 compute-0 ovn_controller[145204]: 2025-12-13T04:11:00Z|00043|binding|INFO|Setting lport 074f5411-8798-446e-b452-7d76b42c954d down in Southbound
Dec 13 04:11:00 compute-0 ovn_controller[145204]: 2025-12-13T04:11:00Z|00044|binding|INFO|Removing iface tap074f5411-87 ovn-installed in OVS
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.189 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.191 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.197 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cb:07:6b 10.100.0.14'], port_security=['fa:16:3e:cb:07:6b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4deaddf6-080d-4685-aeeb-41d5dff923fd', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c3d14ae134004022846080df2141ba48', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e90564da-fa63-4583-afc4-6c2804c97930', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.241'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e823bbb9-8968-4b84-bd4b-51ac190b120c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=074f5411-8798-446e-b452-7d76b42c954d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.198 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 074f5411-8798-446e-b452-7d76b42c954d in datapath 366fdcef-6d1d-4ac6-b80f-1662f1648a35 unbound from our chassis
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.200 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 366fdcef-6d1d-4ac6-b80f-1662f1648a35, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.201 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e448a7c9-f3d1-44f7-ab36-7a1ce9831f67]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.202 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35 namespace which is not needed anymore
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.211 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec 13 04:11:00 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 14.384s CPU time.
Dec 13 04:11:00 compute-0 systemd-machined[206767]: Machine qemu-2-instance-00000002 terminated.
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.312 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.316 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.326 243708 INFO nova.virt.libvirt.driver [-] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Instance destroyed successfully.
Dec 13 04:11:00 compute-0 neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35[251993]: [NOTICE]   (251997) : haproxy version is 2.8.14-c23fe91
Dec 13 04:11:00 compute-0 neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35[251993]: [NOTICE]   (251997) : path to executable is /usr/sbin/haproxy
Dec 13 04:11:00 compute-0 neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35[251993]: [WARNING]  (251997) : Exiting Master process...
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.328 243708 DEBUG nova.objects.instance [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lazy-loading 'resources' on Instance uuid 4deaddf6-080d-4685-aeeb-41d5dff923fd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:11:00 compute-0 neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35[251993]: [ALERT]    (251997) : Current worker (251999) exited with code 143 (Terminated)
Dec 13 04:11:00 compute-0 neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35[251993]: [WARNING]  (251997) : All workers exited. Exiting... (0)
Dec 13 04:11:00 compute-0 systemd[1]: libpod-c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29.scope: Deactivated successfully.
Dec 13 04:11:00 compute-0 podman[252128]: 2025-12-13 04:11:00.337938807 +0000 UTC m=+0.047227243 container died c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.339 243708 DEBUG nova.virt.libvirt.vif [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:10:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesExtendAttachedTest-instance-964029391',display_name='tempest-VolumesExtendAttachedTest-instance-964029391',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesextendattachedtest-instance-964029391',id=2,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBI5UexJ8AwCWnAIh+sfHwZbK+1SEG8ijIqvyI44vqlKd3ExkvLU7c7ZGfD9nIM/8cm/LYKl3LRJusT1xZJ25hV98ScoKWAcBeBs2cLBKuv0K7VrP3NVZPgMp6dG5vMNLag==',key_name='tempest-keypair-1992151067',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:10:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c3d14ae134004022846080df2141ba48',ramdisk_id='',reservation_id='r-i02zeskj',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesExtendAttachedTest-2109239140',owner_user_name='tempest-VolumesExtendAttachedTest-2109239140-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:10:35Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='de1aafde9d2140d980c61f6583078e45',uuid=4deaddf6-080d-4685-aeeb-41d5dff923fd,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.340 243708 DEBUG nova.network.os_vif_util [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Converting VIF {"id": "074f5411-8798-446e-b452-7d76b42c954d", "address": "fa:16:3e:cb:07:6b", "network": {"id": "366fdcef-6d1d-4ac6-b80f-1662f1648a35", "bridge": "br-int", "label": "tempest-VolumesExtendAttachedTest-942700958-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.241", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c3d14ae134004022846080df2141ba48", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap074f5411-87", "ovs_interfaceid": "074f5411-8798-446e-b452-7d76b42c954d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.341 243708 DEBUG nova.network.os_vif_util [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cb:07:6b,bridge_name='br-int',has_traffic_filtering=True,id=074f5411-8798-446e-b452-7d76b42c954d,network=Network(366fdcef-6d1d-4ac6-b80f-1662f1648a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074f5411-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.341 243708 DEBUG os_vif [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:07:6b,bridge_name='br-int',has_traffic_filtering=True,id=074f5411-8798-446e-b452-7d76b42c954d,network=Network(366fdcef-6d1d-4ac6-b80f-1662f1648a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074f5411-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.342 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.343 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap074f5411-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.345 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.347 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.349 243708 INFO os_vif [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cb:07:6b,bridge_name='br-int',has_traffic_filtering=True,id=074f5411-8798-446e-b452-7d76b42c954d,network=Network(366fdcef-6d1d-4ac6-b80f-1662f1648a35),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap074f5411-87')
Dec 13 04:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29-userdata-shm.mount: Deactivated successfully.
Dec 13 04:11:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5946d45f21b2437f8970261c8054185a9ddc31abc7223949e909277b550eba17-merged.mount: Deactivated successfully.
Dec 13 04:11:00 compute-0 podman[252128]: 2025-12-13 04:11:00.38707344 +0000 UTC m=+0.096361896 container cleanup c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 04:11:00 compute-0 systemd[1]: libpod-conmon-c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29.scope: Deactivated successfully.
Dec 13 04:11:00 compute-0 podman[252184]: 2025-12-13 04:11:00.487174937 +0000 UTC m=+0.058925431 container remove c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.493 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0926cea9-1c2e-4593-9572-5ed421d6213e]: (4, ('Sat Dec 13 04:11:00 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35 (c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29)\nc9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29\nSat Dec 13 04:11:00 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35 (c9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29)\nc9eceb078538fd6aae216d539cbab053ed13288d5fa85de45776887c71e8ec29\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.495 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[28653c29-94a1-4721-a3c1-82d2408e1c8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.496 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap366fdcef-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.497 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 kernel: tap366fdcef-60: left promiscuous mode
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.501 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2f8e1d92-89f8-4b14-aa15-657cf754a531]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.517 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.520 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[47e2a15f-a0a9-4261-b125-c9289127ffbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.521 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8b0367ea-cc0d-4c83-a514-040cc98038a0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.537 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[51c09f34-1f24-4d96-b80d-4185f712885f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 377351, 'reachable_time': 19266, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252199, 'error': None, 'target': 'ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.540 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-366fdcef-6d1d-4ac6-b80f-1662f1648a35 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:11:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:00.540 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5de738-7d58-4cf0-8ecf-26712347e4ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:00 compute-0 systemd[1]: run-netns-ovnmeta\x2d366fdcef\x2d6d1d\x2d4ac6\x2db80f\x2d1662f1648a35.mount: Deactivated successfully.
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.677 243708 INFO nova.virt.libvirt.driver [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Deleting instance files /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd_del
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.678 243708 INFO nova.virt.libvirt.driver [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Deletion of /var/lib/nova/instances/4deaddf6-080d-4685-aeeb-41d5dff923fd_del complete
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.723 243708 INFO nova.compute.manager [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Took 0.63 seconds to destroy the instance on the hypervisor.
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.724 243708 DEBUG oslo.service.loopingcall [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.724 243708 DEBUG nova.compute.manager [-] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.724 243708 DEBUG nova.network.neutron [-] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.802 243708 DEBUG nova.compute.manager [req-da7381b1-4e74-4ea8-8495-05b257df0efb req-732aa212-73ae-43f6-9956-a0b58ef7e800 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-vif-unplugged-074f5411-8798-446e-b452-7d76b42c954d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.802 243708 DEBUG oslo_concurrency.lockutils [req-da7381b1-4e74-4ea8-8495-05b257df0efb req-732aa212-73ae-43f6-9956-a0b58ef7e800 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.805 243708 DEBUG oslo_concurrency.lockutils [req-da7381b1-4e74-4ea8-8495-05b257df0efb req-732aa212-73ae-43f6-9956-a0b58ef7e800 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.805 243708 DEBUG oslo_concurrency.lockutils [req-da7381b1-4e74-4ea8-8495-05b257df0efb req-732aa212-73ae-43f6-9956-a0b58ef7e800 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.805 243708 DEBUG nova.compute.manager [req-da7381b1-4e74-4ea8-8495-05b257df0efb req-732aa212-73ae-43f6-9956-a0b58ef7e800 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] No waiting events found dispatching network-vif-unplugged-074f5411-8798-446e-b452-7d76b42c954d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:11:00 compute-0 nova_compute[243704]: 2025-12-13 04:11:00.805 243708 DEBUG nova.compute.manager [req-da7381b1-4e74-4ea8-8495-05b257df0efb req-732aa212-73ae-43f6-9956-a0b58ef7e800 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-vif-unplugged-074f5411-8798-446e-b452-7d76b42c954d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:11:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 63 KiB/s wr, 106 op/s
Dec 13 04:11:01 compute-0 nova_compute[243704]: 2025-12-13 04:11:01.588 243708 DEBUG nova.network.neutron [-] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:11:01 compute-0 nova_compute[243704]: 2025-12-13 04:11:01.613 243708 INFO nova.compute.manager [-] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Took 0.89 seconds to deallocate network for instance.
Dec 13 04:11:01 compute-0 nova_compute[243704]: 2025-12-13 04:11:01.669 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:01 compute-0 nova_compute[243704]: 2025-12-13 04:11:01.669 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:01 compute-0 nova_compute[243704]: 2025-12-13 04:11:01.694 243708 DEBUG nova.compute.manager [req-6d59eb2c-2299-45e4-a77f-e7769643da22 req-debb5484-8be3-445f-a1f2-01d7b4097791 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-vif-deleted-074f5411-8798-446e-b452-7d76b42c954d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:11:01 compute-0 nova_compute[243704]: 2025-12-13 04:11:01.744 243708 DEBUG oslo_concurrency.processutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:01 compute-0 nova_compute[243704]: 2025-12-13 04:11:01.944 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:11:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807697305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.304 243708 DEBUG oslo_concurrency.processutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.310 243708 DEBUG nova.compute.provider_tree [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.324 243708 DEBUG nova.scheduler.client.report [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.351 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.386 243708 INFO nova.scheduler.client.report [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Deleted allocations for instance 4deaddf6-080d-4685-aeeb-41d5dff923fd
Dec 13 04:11:02 compute-0 ceph-mon[75071]: pgmap v999: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 63 KiB/s wr, 106 op/s
Dec 13 04:11:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/807697305' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.450 243708 DEBUG oslo_concurrency.lockutils [None req-da662e16-b49f-4cec-a004-c846a289ff25 de1aafde9d2140d980c61f6583078e45 c3d14ae134004022846080df2141ba48 - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.357s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.899 243708 DEBUG nova.compute.manager [req-2a9ea612-4633-45f3-ace8-2c3b183138ff req-1f392cb6-9b2a-4c73-9ff1-360f9cb13ef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received event network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.899 243708 DEBUG oslo_concurrency.lockutils [req-2a9ea612-4633-45f3-ace8-2c3b183138ff req-1f392cb6-9b2a-4c73-9ff1-360f9cb13ef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.899 243708 DEBUG oslo_concurrency.lockutils [req-2a9ea612-4633-45f3-ace8-2c3b183138ff req-1f392cb6-9b2a-4c73-9ff1-360f9cb13ef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.899 243708 DEBUG oslo_concurrency.lockutils [req-2a9ea612-4633-45f3-ace8-2c3b183138ff req-1f392cb6-9b2a-4c73-9ff1-360f9cb13ef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4deaddf6-080d-4685-aeeb-41d5dff923fd-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.900 243708 DEBUG nova.compute.manager [req-2a9ea612-4633-45f3-ace8-2c3b183138ff req-1f392cb6-9b2a-4c73-9ff1-360f9cb13ef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] No waiting events found dispatching network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:11:02 compute-0 nova_compute[243704]: 2025-12-13 04:11:02.900 243708 WARNING nova.compute.manager [req-2a9ea612-4633-45f3-ace8-2c3b183138ff req-1f392cb6-9b2a-4c73-9ff1-360f9cb13ef7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Received unexpected event network-vif-plugged-074f5411-8798-446e-b452-7d76b42c954d for instance with vm_state deleted and task_state None.
Dec 13 04:11:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 59 KiB/s wr, 99 op/s
Dec 13 04:11:04 compute-0 ceph-mon[75071]: pgmap v1000: 305 pgs: 305 active+clean; 167 MiB data, 300 MiB used, 60 GiB / 60 GiB avail; 119 KiB/s rd, 59 KiB/s wr, 99 op/s
Dec 13 04:11:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1621150869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1621150869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:05 compute-0 nova_compute[243704]: 2025-12-13 04:11:05.345 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 54 KiB/s wr, 124 op/s
Dec 13 04:11:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1621150869' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1621150869' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:06 compute-0 ceph-mon[75071]: pgmap v1001: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 54 KiB/s wr, 124 op/s
Dec 13 04:11:06 compute-0 nova_compute[243704]: 2025-12-13 04:11:06.947 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.2 KiB/s wr, 84 op/s
Dec 13 04:11:07 compute-0 sudo[252223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:11:07 compute-0 sudo[252223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:07 compute-0 sudo[252223]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:07 compute-0 sudo[252248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:11:07 compute-0 sudo[252248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:08 compute-0 sudo[252248]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:08 compute-0 ceph-mon[75071]: pgmap v1002: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.2 KiB/s wr, 84 op/s
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:11:08 compute-0 sudo[252304]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:11:08 compute-0 sudo[252304]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:08 compute-0 sudo[252304]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:08 compute-0 sudo[252335]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:11:08 compute-0 sudo[252335]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:08 compute-0 nova_compute[243704]: 2025-12-13 04:11:08.715 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:08 compute-0 podman[252328]: 2025-12-13 04:11:08.72014206 +0000 UTC m=+0.076577109 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598578898' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2598578898' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:08 compute-0 nova_compute[243704]: 2025-12-13 04:11:08.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:08 compute-0 nova_compute[243704]: 2025-12-13 04:11:08.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:11:08 compute-0 nova_compute[243704]: 2025-12-13 04:11:08.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:11:08 compute-0 nova_compute[243704]: 2025-12-13 04:11:08.889 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:11:08 compute-0 nova_compute[243704]: 2025-12-13 04:11:08.890 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:09 compute-0 podman[252387]: 2025-12-13 04:11:09.015861187 +0000 UTC m=+0.046468292 container create 682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cohen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.060 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:09 compute-0 systemd[1]: Started libpod-conmon-682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376.scope.
Dec 13 04:11:09 compute-0 podman[252387]: 2025-12-13 04:11:08.996391389 +0000 UTC m=+0.026998534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:11:09 compute-0 podman[252387]: 2025-12-13 04:11:09.109412846 +0000 UTC m=+0.140019971 container init 682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cohen, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:09 compute-0 podman[252387]: 2025-12-13 04:11:09.117984539 +0000 UTC m=+0.148591654 container start 682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:11:09 compute-0 podman[252387]: 2025-12-13 04:11:09.121955036 +0000 UTC m=+0.152562161 container attach 682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cohen, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:11:09 compute-0 charming_cohen[252404]: 167 167
Dec 13 04:11:09 compute-0 systemd[1]: libpod-682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376.scope: Deactivated successfully.
Dec 13 04:11:09 compute-0 podman[252387]: 2025-12-13 04:11:09.125319619 +0000 UTC m=+0.155926754 container died 682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-39b9ecaeb9bfe81805c75c5ae0e8887d005beb26b49359fa2f6a4793878ff9b7-merged.mount: Deactivated successfully.
Dec 13 04:11:09 compute-0 podman[252387]: 2025-12-13 04:11:09.177216307 +0000 UTC m=+0.207823462 container remove 682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:11:09 compute-0 systemd[1]: libpod-conmon-682025df1ff3e063efa342ea4c87fc9ae2d66cef33b59b0d7502b5d7d3a45376.scope: Deactivated successfully.
Dec 13 04:11:09 compute-0 podman[252429]: 2025-12-13 04:11:09.34496836 +0000 UTC m=+0.037576090 container create 7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_antonelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:11:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 115 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 144 op/s
Dec 13 04:11:09 compute-0 systemd[1]: Started libpod-conmon-7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b.scope.
Dec 13 04:11:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e94c922f4371d1eb20aa05ca5e8c47b7b5a78eca6dcb4b3a6ae912006b38ad3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e94c922f4371d1eb20aa05ca5e8c47b7b5a78eca6dcb4b3a6ae912006b38ad3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e94c922f4371d1eb20aa05ca5e8c47b7b5a78eca6dcb4b3a6ae912006b38ad3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e94c922f4371d1eb20aa05ca5e8c47b7b5a78eca6dcb4b3a6ae912006b38ad3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e94c922f4371d1eb20aa05ca5e8c47b7b5a78eca6dcb4b3a6ae912006b38ad3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:09 compute-0 podman[252429]: 2025-12-13 04:11:09.329320995 +0000 UTC m=+0.021928725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:09 compute-0 podman[252429]: 2025-12-13 04:11:09.424717575 +0000 UTC m=+0.117325325 container init 7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_antonelli, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:11:09 compute-0 podman[252429]: 2025-12-13 04:11:09.4326666 +0000 UTC m=+0.125274330 container start 7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_antonelli, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:11:09 compute-0 podman[252429]: 2025-12-13 04:11:09.435424335 +0000 UTC m=+0.128032065 container attach 7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2598578898' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2598578898' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:09 compute-0 keen_antonelli[252446]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:11:09 compute-0 keen_antonelli[252446]: --> All data devices are unavailable
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.906 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.907 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.907 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.908 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:11:09 compute-0 nova_compute[243704]: 2025-12-13 04:11:09.908 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:09 compute-0 systemd[1]: libpod-7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b.scope: Deactivated successfully.
Dec 13 04:11:09 compute-0 conmon[252446]: conmon 7a20cd49372abf1fa86d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b.scope/container/memory.events
Dec 13 04:11:09 compute-0 podman[252429]: 2025-12-13 04:11:09.939399055 +0000 UTC m=+0.632006785 container died 7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:11:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e94c922f4371d1eb20aa05ca5e8c47b7b5a78eca6dcb4b3a6ae912006b38ad3-merged.mount: Deactivated successfully.
Dec 13 04:11:09 compute-0 podman[252429]: 2025-12-13 04:11:09.994549963 +0000 UTC m=+0.687157693 container remove 7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:11:10 compute-0 systemd[1]: libpod-conmon-7a20cd49372abf1fa86d0288d617a46f0821c05e08421eaed44a5cd6db70ab2b.scope: Deactivated successfully.
Dec 13 04:11:10 compute-0 sudo[252335]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/512717765' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/512717765' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:10 compute-0 sudo[252482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:11:10 compute-0 sudo[252482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:10 compute-0 sudo[252482]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:10 compute-0 sudo[252523]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:11:10 compute-0 sudo[252523]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.348 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:10 compute-0 podman[252561]: 2025-12-13 04:11:10.469871885 +0000 UTC m=+0.041435256 container create 7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:11:10 compute-0 systemd[1]: Started libpod-conmon-7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca.scope.
Dec 13 04:11:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:11:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1264494302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:10 compute-0 ceph-mon[75071]: pgmap v1003: 305 pgs: 305 active+clean; 115 MiB data, 269 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.3 MiB/s wr, 144 op/s
Dec 13 04:11:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/512717765' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/512717765' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1264494302' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.536 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.628s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:10 compute-0 podman[252561]: 2025-12-13 04:11:10.449567243 +0000 UTC m=+0.021130604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:10 compute-0 podman[252561]: 2025-12-13 04:11:10.551360247 +0000 UTC m=+0.122923618 container init 7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ellis, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Dec 13 04:11:10 compute-0 podman[252561]: 2025-12-13 04:11:10.559027934 +0000 UTC m=+0.130591285 container start 7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:11:10 compute-0 podman[252561]: 2025-12-13 04:11:10.56252291 +0000 UTC m=+0.134086281 container attach 7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:11:10 compute-0 epic_ellis[252578]: 167 167
Dec 13 04:11:10 compute-0 systemd[1]: libpod-7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca.scope: Deactivated successfully.
Dec 13 04:11:10 compute-0 podman[252561]: 2025-12-13 04:11:10.564899443 +0000 UTC m=+0.136462824 container died 7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ellis, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-da8aa574add509c207f798027827103b9d71abbede339d5a13a913adbedd8c6f-merged.mount: Deactivated successfully.
Dec 13 04:11:10 compute-0 podman[252561]: 2025-12-13 04:11:10.609751571 +0000 UTC m=+0.181314922 container remove 7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_ellis, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:11:10 compute-0 systemd[1]: libpod-conmon-7f39daf69e57e5adce7aaf1b31781a5ea0e075975826a2f630cfa94521299fca.scope: Deactivated successfully.
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.722 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.723 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4644MB free_disk=59.98825753014535GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.724 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.724 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.772 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.772 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:11:10 compute-0 nova_compute[243704]: 2025-12-13 04:11:10.791 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:10 compute-0 podman[252604]: 2025-12-13 04:11:10.793496689 +0000 UTC m=+0.049516226 container create c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:11:10 compute-0 systemd[1]: Started libpod-conmon-c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be.scope.
Dec 13 04:11:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace9fb827228bf8cb8460cce1e89948d454bc571613b899671a4962d25995a51/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace9fb827228bf8cb8460cce1e89948d454bc571613b899671a4962d25995a51/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace9fb827228bf8cb8460cce1e89948d454bc571613b899671a4962d25995a51/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ace9fb827228bf8cb8460cce1e89948d454bc571613b899671a4962d25995a51/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:10 compute-0 podman[252604]: 2025-12-13 04:11:10.773974179 +0000 UTC m=+0.029993746 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:10 compute-0 podman[252604]: 2025-12-13 04:11:10.886358539 +0000 UTC m=+0.142378106 container init c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:11:10 compute-0 podman[252604]: 2025-12-13 04:11:10.899171468 +0000 UTC m=+0.155191005 container start c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:10 compute-0 podman[252604]: 2025-12-13 04:11:10.904329398 +0000 UTC m=+0.160348935 container attach c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]: {
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:     "0": [
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:         {
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "devices": [
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "/dev/loop3"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             ],
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_name": "ceph_lv0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_size": "21470642176",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "name": "ceph_lv0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "tags": {
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cluster_name": "ceph",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.crush_device_class": "",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.encrypted": "0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.objectstore": "bluestore",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osd_id": "0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.type": "block",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.vdo": "0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.with_tpm": "0"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             },
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "type": "block",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "vg_name": "ceph_vg0"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:         }
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:     ],
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:     "1": [
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:         {
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "devices": [
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "/dev/loop4"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             ],
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_name": "ceph_lv1",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_size": "21470642176",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "name": "ceph_lv1",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "tags": {
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cluster_name": "ceph",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.crush_device_class": "",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.encrypted": "0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.objectstore": "bluestore",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osd_id": "1",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.type": "block",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.vdo": "0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.with_tpm": "0"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             },
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "type": "block",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "vg_name": "ceph_vg1"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:         }
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:     ],
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:     "2": [
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:         {
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "devices": [
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "/dev/loop5"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             ],
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_name": "ceph_lv2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_size": "21470642176",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "name": "ceph_lv2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "tags": {
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.cluster_name": "ceph",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.crush_device_class": "",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.encrypted": "0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.objectstore": "bluestore",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osd_id": "2",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.type": "block",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.vdo": "0",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:                 "ceph.with_tpm": "0"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             },
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "type": "block",
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:             "vg_name": "ceph_vg2"
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:         }
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]:     ]
Dec 13 04:11:11 compute-0 nostalgic_mendeleev[252621]: }
Dec 13 04:11:11 compute-0 systemd[1]: libpod-c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be.scope: Deactivated successfully.
Dec 13 04:11:11 compute-0 podman[252604]: 2025-12-13 04:11:11.210694843 +0000 UTC m=+0.466714380 container died c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ace9fb827228bf8cb8460cce1e89948d454bc571613b899671a4962d25995a51-merged.mount: Deactivated successfully.
Dec 13 04:11:11 compute-0 podman[252604]: 2025-12-13 04:11:11.257454523 +0000 UTC m=+0.513474060 container remove c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_mendeleev, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:11:11 compute-0 systemd[1]: libpod-conmon-c63755d1e424b82ff646195f59faa77d6b040596915ea0678c1d6946f03fb3be.scope: Deactivated successfully.
Dec 13 04:11:11 compute-0 sudo[252523]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:11:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3888490213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:11 compute-0 nova_compute[243704]: 2025-12-13 04:11:11.352 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:11 compute-0 nova_compute[243704]: 2025-12-13 04:11:11.358 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:11:11 compute-0 sudo[252662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:11:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 13 04:11:11 compute-0 sudo[252662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:11 compute-0 sudo[252662]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:11 compute-0 nova_compute[243704]: 2025-12-13 04:11:11.372 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:11:11 compute-0 nova_compute[243704]: 2025-12-13 04:11:11.391 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:11:11 compute-0 nova_compute[243704]: 2025-12-13 04:11:11.391 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:11 compute-0 sudo[252689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:11:11 compute-0 sudo[252689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3888490213' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:11 compute-0 podman[252725]: 2025-12-13 04:11:11.724477499 +0000 UTC m=+0.045185707 container create e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:11:11 compute-0 systemd[1]: Started libpod-conmon-e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523.scope.
Dec 13 04:11:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:11:11 compute-0 podman[252725]: 2025-12-13 04:11:11.700105858 +0000 UTC m=+0.020814096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:11 compute-0 podman[252725]: 2025-12-13 04:11:11.800915914 +0000 UTC m=+0.121624122 container init e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_torvalds, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:11:11 compute-0 podman[252725]: 2025-12-13 04:11:11.807488823 +0000 UTC m=+0.128197031 container start e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_torvalds, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:11:11 compute-0 podman[252725]: 2025-12-13 04:11:11.81071145 +0000 UTC m=+0.131419688 container attach e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:11:11 compute-0 epic_torvalds[252741]: 167 167
Dec 13 04:11:11 compute-0 systemd[1]: libpod-e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523.scope: Deactivated successfully.
Dec 13 04:11:11 compute-0 podman[252725]: 2025-12-13 04:11:11.812150629 +0000 UTC m=+0.132858857 container died e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_torvalds, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:11:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-5fcef8d55dcc3b9e8fa47bdf412a7f6866fe06dfff21f2006575ae0a07c339cb-merged.mount: Deactivated successfully.
Dec 13 04:11:11 compute-0 podman[252725]: 2025-12-13 04:11:11.84462348 +0000 UTC m=+0.165331678 container remove e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_torvalds, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:11:11 compute-0 systemd[1]: libpod-conmon-e942a362ff1f4adda3cc763e422f35aba798d24f37261bc3ef081293758e7523.scope: Deactivated successfully.
Dec 13 04:11:11 compute-0 nova_compute[243704]: 2025-12-13 04:11:11.949 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:12 compute-0 podman[252765]: 2025-12-13 04:11:12.002505406 +0000 UTC m=+0.041287652 container create bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:11:12 compute-0 systemd[1]: Started libpod-conmon-bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f.scope.
Dec 13 04:11:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d5ad6898a01482fb360384b72831a68e8d6f5f9a59c430a3002c0f662846d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d5ad6898a01482fb360384b72831a68e8d6f5f9a59c430a3002c0f662846d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d5ad6898a01482fb360384b72831a68e8d6f5f9a59c430a3002c0f662846d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:12 compute-0 podman[252765]: 2025-12-13 04:11:11.984860967 +0000 UTC m=+0.023643223 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:11:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20d5ad6898a01482fb360384b72831a68e8d6f5f9a59c430a3002c0f662846d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:12 compute-0 podman[252765]: 2025-12-13 04:11:12.090583967 +0000 UTC m=+0.129366233 container init bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:11:12 compute-0 podman[252765]: 2025-12-13 04:11:12.096194859 +0000 UTC m=+0.134977105 container start bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_lumiere, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:11:12 compute-0 podman[252765]: 2025-12-13 04:11:12.099735705 +0000 UTC m=+0.138517961 container attach bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_lumiere, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:11:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:12 compute-0 ceph-mon[75071]: pgmap v1004: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 13 04:11:12 compute-0 lvm[252860]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:11:12 compute-0 lvm[252861]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:11:12 compute-0 lvm[252860]: VG ceph_vg0 finished
Dec 13 04:11:12 compute-0 lvm[252861]: VG ceph_vg1 finished
Dec 13 04:11:12 compute-0 lvm[252863]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:11:12 compute-0 lvm[252863]: VG ceph_vg2 finished
Dec 13 04:11:12 compute-0 condescending_lumiere[252782]: {}
Dec 13 04:11:12 compute-0 systemd[1]: libpod-bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f.scope: Deactivated successfully.
Dec 13 04:11:12 compute-0 podman[252765]: 2025-12-13 04:11:12.895242288 +0000 UTC m=+0.934024524 container died bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:11:12 compute-0 systemd[1]: libpod-bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f.scope: Consumed 1.271s CPU time.
Dec 13 04:11:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-20d5ad6898a01482fb360384b72831a68e8d6f5f9a59c430a3002c0f662846d8-merged.mount: Deactivated successfully.
Dec 13 04:11:12 compute-0 podman[252765]: 2025-12-13 04:11:12.939789807 +0000 UTC m=+0.978572043 container remove bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_lumiere, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Dec 13 04:11:12 compute-0 systemd[1]: libpod-conmon-bb2f521f133a34ef5cdf8062e7bd7db00feaede7a5f3a887a4c28af54e70015f.scope: Deactivated successfully.
Dec 13 04:11:12 compute-0 sudo[252689]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:11:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:11:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:11:12 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:11:13 compute-0 sudo[252877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:11:13 compute-0 sudo[252877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:11:13 compute-0 sudo[252877]: pam_unix(sudo:session): session closed for user root
Dec 13 04:11:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 13 04:11:13 compute-0 nova_compute[243704]: 2025-12-13 04:11:13.392 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:13 compute-0 nova_compute[243704]: 2025-12-13 04:11:13.393 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:13 compute-0 nova_compute[243704]: 2025-12-13 04:11:13.394 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:13 compute-0 nova_compute[243704]: 2025-12-13 04:11:13.394 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:11:13 compute-0 nova_compute[243704]: 2025-12-13 04:11:13.395 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:11:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:11:13 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:11:13 compute-0 ceph-mon[75071]: pgmap v1005: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Dec 13 04:11:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770409717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2770409717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2770409717' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2770409717' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:15 compute-0 nova_compute[243704]: 2025-12-13 04:11:15.325 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599060.323291, 4deaddf6-080d-4685-aeeb-41d5dff923fd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:11:15 compute-0 nova_compute[243704]: 2025-12-13 04:11:15.325 243708 INFO nova.compute.manager [-] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] VM Stopped (Lifecycle Event)
Dec 13 04:11:15 compute-0 nova_compute[243704]: 2025-12-13 04:11:15.340 243708 DEBUG nova.compute.manager [None req-f09fb9a1-649e-4a4b-990a-585c3729df1f - - - - - -] [instance: 4deaddf6-080d-4685-aeeb-41d5dff923fd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:11:15 compute-0 nova_compute[243704]: 2025-12-13 04:11:15.350 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 13 04:11:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:16 compute-0 ceph-mon[75071]: pgmap v1006: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 114 op/s
Dec 13 04:11:16 compute-0 nova_compute[243704]: 2025-12-13 04:11:16.953 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Dec 13 04:11:18 compute-0 ceph-mon[75071]: pgmap v1007: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Dec 13 04:11:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567614312' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/567614312' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:18 compute-0 podman[252902]: 2025-12-13 04:11:18.955407775 +0000 UTC m=+0.096022448 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 13 04:11:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 13 04:11:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/567614312' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/567614312' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:20 compute-0 nova_compute[243704]: 2025-12-13 04:11:20.353 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:20 compute-0 ceph-mon[75071]: pgmap v1008: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 1.8 MiB/s wr, 107 op/s
Dec 13 04:11:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 510 KiB/s wr, 49 op/s
Dec 13 04:11:21 compute-0 nova_compute[243704]: 2025-12-13 04:11:21.955 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:22 compute-0 ceph-mon[75071]: pgmap v1009: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 510 KiB/s wr, 49 op/s
Dec 13 04:11:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 47 op/s
Dec 13 04:11:24 compute-0 ceph-mon[75071]: pgmap v1010: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.6 KiB/s wr, 47 op/s
Dec 13 04:11:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/94560960' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/94560960' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 KiB/s wr, 47 op/s
Dec 13 04:11:25 compute-0 nova_compute[243704]: 2025-12-13 04:11:25.375 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/94560960' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/94560960' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3366371104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3366371104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:26 compute-0 ceph-mon[75071]: pgmap v1011: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.7 KiB/s wr, 47 op/s
Dec 13 04:11:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3366371104' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3366371104' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:26 compute-0 nova_compute[243704]: 2025-12-13 04:11:26.956 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Dec 13 04:11:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1542399137' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1542399137' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:28 compute-0 ceph-mon[75071]: pgmap v1012: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 KiB/s wr, 30 op/s
Dec 13 04:11:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1542399137' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1542399137' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739142017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739142017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.1 KiB/s wr, 82 op/s
Dec 13 04:11:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1739142017' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1739142017' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:30 compute-0 nova_compute[243704]: 2025-12-13 04:11:30.377 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:30 compute-0 ceph-mon[75071]: pgmap v1013: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 3.1 KiB/s wr, 82 op/s
Dec 13 04:11:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:30 compute-0 podman[252929]: 2025-12-13 04:11:30.929358723 +0000 UTC m=+0.071156993 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:11:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 56 op/s
Dec 13 04:11:31 compute-0 nova_compute[243704]: 2025-12-13 04:11:31.958 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:32 compute-0 ceph-mon[75071]: pgmap v1014: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.8 KiB/s wr, 56 op/s
Dec 13 04:11:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.5 KiB/s wr, 55 op/s
Dec 13 04:11:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e189 do_prune osdmap full prune enabled
Dec 13 04:11:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e190 e190: 3 total, 3 up, 3 in
Dec 13 04:11:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e190: 3 total, 3 up, 3 in
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.126 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.127 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.141 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.222 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.223 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.231 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.231 243708 INFO nova.compute.claims [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.326 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:34 compute-0 ceph-mon[75071]: pgmap v1015: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 1.5 KiB/s wr, 55 op/s
Dec 13 04:11:34 compute-0 ceph-mon[75071]: osdmap e190: 3 total, 3 up, 3 in
Dec 13 04:11:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:11:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1322979488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.897 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.904 243708 DEBUG nova.compute.provider_tree [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.927 243708 DEBUG nova.scheduler.client.report [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.947 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.724s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.948 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.991 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:11:34 compute-0 nova_compute[243704]: 2025-12-13 04:11:34.992 243708 DEBUG nova.network.neutron [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.011 243708 INFO nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.027 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:11:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:35.086 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:35.087 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:35.087 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.105 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.106 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.106 243708 INFO nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Creating image(s)
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.126 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.148 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.170 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.173 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.228 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.229 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.230 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.230 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.249 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.252 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.8 KiB/s wr, 75 op/s
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.418 243708 DEBUG nova.policy [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a8b8802dc27428e82af3cfee6d31fa0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67177602579c40c98ca16df63bff5934', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.421 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e190 do_prune osdmap full prune enabled
Dec 13 04:11:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1322979488' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.513 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e191 e191: 3 total, 3 up, 3 in
Dec 13 04:11:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e191: 3 total, 3 up, 3 in
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.576 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] resizing rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.636 243708 DEBUG nova.objects.instance [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'migration_context' on Instance uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.649 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.650 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Ensure instance console log exists: /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.650 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.650 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:35 compute-0 nova_compute[243704]: 2025-12-13 04:11:35.651 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e191 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:36 compute-0 nova_compute[243704]: 2025-12-13 04:11:36.206 243708 DEBUG nova.network.neutron [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Successfully created port: 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:11:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e191 do_prune osdmap full prune enabled
Dec 13 04:11:36 compute-0 ceph-mon[75071]: pgmap v1017: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 55 KiB/s rd, 2.8 KiB/s wr, 75 op/s
Dec 13 04:11:36 compute-0 ceph-mon[75071]: osdmap e191: 3 total, 3 up, 3 in
Dec 13 04:11:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e192 e192: 3 total, 3 up, 3 in
Dec 13 04:11:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e192: 3 total, 3 up, 3 in
Dec 13 04:11:36 compute-0 nova_compute[243704]: 2025-12-13 04:11:36.960 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:37 compute-0 nova_compute[243704]: 2025-12-13 04:11:37.101 243708 DEBUG nova.network.neutron [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Successfully updated port: 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:11:37 compute-0 nova_compute[243704]: 2025-12-13 04:11:37.119 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:11:37 compute-0 nova_compute[243704]: 2025-12-13 04:11:37.120 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquired lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:11:37 compute-0 nova_compute[243704]: 2025-12-13 04:11:37.120 243708 DEBUG nova.network.neutron [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:11:37 compute-0 nova_compute[243704]: 2025-12-13 04:11:37.193 243708 DEBUG nova.compute.manager [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-changed-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:11:37 compute-0 nova_compute[243704]: 2025-12-13 04:11:37.194 243708 DEBUG nova.compute.manager [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Refreshing instance network info cache due to event network-changed-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:11:37 compute-0 nova_compute[243704]: 2025-12-13 04:11:37.194 243708 DEBUG oslo_concurrency.lockutils [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:11:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 KiB/s wr, 17 op/s
Dec 13 04:11:37 compute-0 ceph-mon[75071]: osdmap e192: 3 total, 3 up, 3 in
Dec 13 04:11:38 compute-0 nova_compute[243704]: 2025-12-13 04:11:38.044 243708 DEBUG nova.network.neutron [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:11:38 compute-0 ceph-mon[75071]: pgmap v1020: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 KiB/s wr, 17 op/s
Dec 13 04:11:38 compute-0 podman[253138]: 2025-12-13 04:11:38.983965566 +0000 UTC m=+0.115252939 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:11:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 118 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 13 04:11:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e192 do_prune osdmap full prune enabled
Dec 13 04:11:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e193 e193: 3 total, 3 up, 3 in
Dec 13 04:11:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e193: 3 total, 3 up, 3 in
Dec 13 04:11:39 compute-0 nova_compute[243704]: 2025-12-13 04:11:39.988 243708 DEBUG nova.network.neutron [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating instance_info_cache with network_info: [{"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.007 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Releasing lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.008 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Instance network_info: |[{"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.009 243708 DEBUG oslo_concurrency.lockutils [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.010 243708 DEBUG nova.network.neutron [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Refreshing network info cache for port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.016 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Start _get_guest_xml network_info=[{"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.025 243708 WARNING nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.038 243708 DEBUG nova.virt.libvirt.host [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.039 243708 DEBUG nova.virt.libvirt.host [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.043 243708 DEBUG nova.virt.libvirt.host [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.044 243708 DEBUG nova.virt.libvirt.host [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.044 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.045 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.046 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.046 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.047 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.047 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.048 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.048 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.049 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.049 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.050 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.050 243708 DEBUG nova.virt.hardware [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.056 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.424 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:40 compute-0 ceph-mon[75071]: pgmap v1021: 305 pgs: 305 active+clean; 118 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 2.4 MiB/s wr, 76 op/s
Dec 13 04:11:40 compute-0 ceph-mon[75071]: osdmap e193: 3 total, 3 up, 3 in
Dec 13 04:11:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:11:40
Dec 13 04:11:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:11:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:11:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data']
Dec 13 04:11:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:11:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:11:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331381324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.612 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.630 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:40 compute-0 nova_compute[243704]: 2025-12-13 04:11:40.633 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e193 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:11:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3144466050' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.164 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.166 243708 DEBUG nova.virt.libvirt.vif [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:11:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-260123093',display_name='tempest-TestStampPattern-server-260123093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-260123093',id=3,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM0MU4m92JGsf1a8yXapvFc8NsDZ1Q8MKW+4lJiaibX0u2gJl9+eGG5v/UGq6eQNTuIoD3j4ZepFXbz7/CNW041TuPFq0GKtdS7b3wHX/PQosItTXgdUwOaQctvP0U/Kg==',key_name='tempest-TestStampPattern-343017512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67177602579c40c98ca16df63bff5934',ramdisk_id='',reservation_id='r-0i8c4fqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-102097859',owner_user_name='tempest-TestStampPattern-102097859-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:11:35Z,user_data=None,user_id='3a8b8802dc27428e82af3cfee6d31fa0',uuid=b050eb13-af7e-4bd1-88e6-fcb2d100ffc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.167 243708 DEBUG nova.network.os_vif_util [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converting VIF {"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.168 243708 DEBUG nova.network.os_vif_util [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9f:6c,bridge_name='br-int',has_traffic_filtering=True,id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c5cfc53-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.169 243708 DEBUG nova.objects.instance [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'pci_devices' on Instance uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.188 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <uuid>b050eb13-af7e-4bd1-88e6-fcb2d100ffc8</uuid>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <name>instance-00000003</name>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <nova:name>tempest-TestStampPattern-server-260123093</nova:name>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:11:40</nova:creationTime>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:user uuid="3a8b8802dc27428e82af3cfee6d31fa0">tempest-TestStampPattern-102097859-project-member</nova:user>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:project uuid="67177602579c40c98ca16df63bff5934">tempest-TestStampPattern-102097859</nova:project>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <nova:port uuid="6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b">
Dec 13 04:11:41 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <system>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <entry name="serial">b050eb13-af7e-4bd1-88e6-fcb2d100ffc8</entry>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <entry name="uuid">b050eb13-af7e-4bd1-88e6-fcb2d100ffc8</entry>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </system>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <os>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   </os>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <features>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   </features>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk">
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       </source>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk.config">
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       </source>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:11:41 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:ab:9f:6c"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <target dev="tap6c5cfc53-a7"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/console.log" append="off"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <video>
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </video>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:11:41 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:11:41 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:11:41 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:11:41 compute-0 nova_compute[243704]: </domain>
Dec 13 04:11:41 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.190 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Preparing to wait for external event network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.191 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.191 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.192 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.192 243708 DEBUG nova.virt.libvirt.vif [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:11:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-260123093',display_name='tempest-TestStampPattern-server-260123093',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-260123093',id=3,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM0MU4m92JGsf1a8yXapvFc8NsDZ1Q8MKW+4lJiaibX0u2gJl9+eGG5v/UGq6eQNTuIoD3j4ZepFXbz7/CNW041TuPFq0GKtdS7b3wHX/PQosItTXgdUwOaQctvP0U/Kg==',key_name='tempest-TestStampPattern-343017512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67177602579c40c98ca16df63bff5934',ramdisk_id='',reservation_id='r-0i8c4fqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestStampPattern-102097859',owner_user_name='tempest-TestStampPattern-102097859-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:11:35Z,user_data=None,user_id='3a8b8802dc27428e82af3cfee6d31fa0',uuid=b050eb13-af7e-4bd1-88e6-fcb2d100ffc8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.193 243708 DEBUG nova.network.os_vif_util [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converting VIF {"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.193 243708 DEBUG nova.network.os_vif_util [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9f:6c,bridge_name='br-int',has_traffic_filtering=True,id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c5cfc53-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.194 243708 DEBUG os_vif [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9f:6c,bridge_name='br-int',has_traffic_filtering=True,id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c5cfc53-a7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.195 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.195 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.196 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.200 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.200 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6c5cfc53-a7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.201 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6c5cfc53-a7, col_values=(('external_ids', {'iface-id': '6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:9f:6c', 'vm-uuid': 'b050eb13-af7e-4bd1-88e6-fcb2d100ffc8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.203 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 NetworkManager[48899]: <info>  [1765599101.2047] manager: (tap6c5cfc53-a7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.206 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.209 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.210 243708 INFO os_vif [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:9f:6c,bridge_name='br-int',has_traffic_filtering=True,id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c5cfc53-a7')
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.255 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.256 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.256 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No VIF found with MAC fa:16:3e:ab:9f:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.257 243708 INFO nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Using config drive
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.285 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.451 243708 DEBUG nova.network.neutron [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updated VIF entry in instance network info cache for port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.451 243708 DEBUG nova.network.neutron [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating instance_info_cache with network_info: [{"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.481 243708 DEBUG oslo_concurrency.lockutils [req-1838459d-f776-4339-b4a6-50f2840db80c req-d5d93b66-9686-4c19-9911-96573a7ee07e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:11:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e193 do_prune osdmap full prune enabled
Dec 13 04:11:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e194 e194: 3 total, 3 up, 3 in
Dec 13 04:11:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e194: 3 total, 3 up, 3 in
Dec 13 04:11:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3331381324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:11:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3144466050' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.648 243708 INFO nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Creating config drive at /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/disk.config
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.653 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq7ax2jrx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.780 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq7ax2jrx" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.808 243708 DEBUG nova.storage.rbd_utils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.812 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/disk.config b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.919 243708 DEBUG oslo_concurrency.processutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/disk.config b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.921 243708 INFO nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Deleting local config drive /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8/disk.config because it was imported into RBD.
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.962 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 kernel: tap6c5cfc53-a7: entered promiscuous mode
Dec 13 04:11:41 compute-0 NetworkManager[48899]: <info>  [1765599101.9786] manager: (tap6c5cfc53-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/36)
Dec 13 04:11:41 compute-0 ovn_controller[145204]: 2025-12-13T04:11:41Z|00045|binding|INFO|Claiming lport 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b for this chassis.
Dec 13 04:11:41 compute-0 ovn_controller[145204]: 2025-12-13T04:11:41Z|00046|binding|INFO|6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b: Claiming fa:16:3e:ab:9f:6c 10.100.0.7
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.979 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.983 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 nova_compute[243704]: 2025-12-13 04:11:41.987 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:41.993 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:9f:6c 10.100.0.7'], port_security=['fa:16:3e:ab:9f:6c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b050eb13-af7e-4bd1-88e6-fcb2d100ffc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67177602579c40c98ca16df63bff5934', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf55ee30-2a30-425f-af3c-50a725a59497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15f16d90-5305-4b52-8186-db63310acee6, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:11:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:41.995 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b in datapath 6acff72d-3b46-4d95-b32d-8f79ce87caf9 bound to our chassis
Dec 13 04:11:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:41.996 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6acff72d-3b46-4d95-b32d-8f79ce87caf9
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.015 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8105a6c1-40e4-4b67-b5cf-87fed8f03c1d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.017 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap6acff72d-31 in ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:11:42 compute-0 systemd-udevd[253294]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.020 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap6acff72d-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.020 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a61bdb-af99-400e-b8b8-af2f7fdd043c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.021 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f2aa8553-0110-494c-bdff-a49b537d6d07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 systemd-machined[206767]: New machine qemu-3-instance-00000003.
Dec 13 04:11:42 compute-0 NetworkManager[48899]: <info>  [1765599102.0356] device (tap6c5cfc53-a7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.034 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[3092f586-d31c-4bae-9d9c-e532ec2e390f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 NetworkManager[48899]: <info>  [1765599102.0361] device (tap6c5cfc53-a7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:11:42 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.065 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[63d4d49d-3b61-44e5-872e-f035fb17fad9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.076 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 ovn_controller[145204]: 2025-12-13T04:11:42Z|00047|binding|INFO|Setting lport 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b ovn-installed in OVS
Dec 13 04:11:42 compute-0 ovn_controller[145204]: 2025-12-13T04:11:42Z|00048|binding|INFO|Setting lport 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b up in Southbound
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.082 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.104 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a82c46-2d08-4057-b1f4-ba35db7c44ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.109 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5697ff6c-c597-41b7-92e7-63e403e46527]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 NetworkManager[48899]: <info>  [1765599102.1112] manager: (tap6acff72d-30): new Veth device (/org/freedesktop/NetworkManager/Devices/37)
Dec 13 04:11:42 compute-0 systemd-udevd[253297]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.147 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[005b0704-aac6-4db3-a020-fa3ad7eb00d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.151 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[7acc646e-97fb-482d-8e0d-a8d2b8768924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 NetworkManager[48899]: <info>  [1765599102.1740] device (tap6acff72d-30): carrier: link connected
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.181 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[331b33ee-18e1-4a52-89e8-891238b96424]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.200 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe0be02-a059-4029-8cdc-70061f469976]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6acff72d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:c5:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384085, 'reachable_time': 26332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253326, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.220 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e9d114e4-af14-4352-966d-4f9c52a09a00]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9a:c587'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384085, 'tstamp': 384085}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253327, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.225 243708 DEBUG nova.compute.manager [req-b3347995-31e8-402a-b4bf-5834513c25b3 req-41fbc3d9-7726-4efa-a5fa-d7459d4d8d2c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.226 243708 DEBUG oslo_concurrency.lockutils [req-b3347995-31e8-402a-b4bf-5834513c25b3 req-41fbc3d9-7726-4efa-a5fa-d7459d4d8d2c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.226 243708 DEBUG oslo_concurrency.lockutils [req-b3347995-31e8-402a-b4bf-5834513c25b3 req-41fbc3d9-7726-4efa-a5fa-d7459d4d8d2c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.227 243708 DEBUG oslo_concurrency.lockutils [req-b3347995-31e8-402a-b4bf-5834513c25b3 req-41fbc3d9-7726-4efa-a5fa-d7459d4d8d2c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.227 243708 DEBUG nova.compute.manager [req-b3347995-31e8-402a-b4bf-5834513c25b3 req-41fbc3d9-7726-4efa-a5fa-d7459d4d8d2c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Processing event network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.246 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdbd8d9-7259-47ef-8679-04c3ff1cfe14]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6acff72d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:c5:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384085, 'reachable_time': 26332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253328, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.279 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8e8a1591-db4b-4c3f-8bee-462f3baebd62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.338 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[849214a0-50ff-450f-9d1f-fab5fc694d9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.340 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6acff72d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.340 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.340 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6acff72d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:42 compute-0 NetworkManager[48899]: <info>  [1765599102.3450] manager: (tap6acff72d-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Dec 13 04:11:42 compute-0 kernel: tap6acff72d-30: entered promiscuous mode
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.344 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.346 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.350 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6acff72d-30, col_values=(('external_ids', {'iface-id': '09de48ad-091f-4941-8093-f0d00d05e24a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:42 compute-0 ovn_controller[145204]: 2025-12-13T04:11:42Z|00049|binding|INFO|Releasing lport 09de48ad-091f-4941-8093-f0d00d05e24a from this chassis (sb_readonly=0)
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.351 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.352 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.353 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/6acff72d-3b46-4d95-b32d-8f79ce87caf9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/6acff72d-3b46-4d95-b32d-8f79ce87caf9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.356 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3bf7a598-ba3e-4ffb-b9f5-33c20baa8f17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.357 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-6acff72d-3b46-4d95-b32d-8f79ce87caf9
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/6acff72d-3b46-4d95-b32d-8f79ce87caf9.pid.haproxy
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 6acff72d-3b46-4d95-b32d-8f79ce87caf9
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.357 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'env', 'PROCESS_TAG=haproxy-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/6acff72d-3b46-4d95-b32d-8f79ce87caf9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.368 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.433 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.434 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599102.4332924, b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.434 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] VM Started (Lifecycle Event)
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.439 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.442 243708 INFO nova.virt.libvirt.driver [-] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Instance spawned successfully.
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.442 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.455 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.464 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.468 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.468 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.469 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.469 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.470 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.470 243708 DEBUG nova.virt.libvirt.driver [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.488 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.489 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599102.4342816, b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.489 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] VM Paused (Lifecycle Event)
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.523 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.527 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599102.4389625, b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.527 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] VM Resumed (Lifecycle Event)
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.535 243708 INFO nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Took 7.43 seconds to spawn the instance on the hypervisor.
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.536 243708 DEBUG nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.560 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.563 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:11:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e194 do_prune osdmap full prune enabled
Dec 13 04:11:42 compute-0 ceph-mon[75071]: pgmap v1023: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Dec 13 04:11:42 compute-0 ceph-mon[75071]: osdmap e194: 3 total, 3 up, 3 in
Dec 13 04:11:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e195 e195: 3 total, 3 up, 3 in
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.776 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:11:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:42.778 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.784 243708 INFO nova.compute.manager [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Took 8.59 seconds to build instance.
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.786 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e195: 3 total, 3 up, 3 in
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:11:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:11:42 compute-0 nova_compute[243704]: 2025-12-13 04:11:42.811 243708 DEBUG oslo_concurrency.lockutils [None req-b9fc9f06-e5a1-446d-bd80-8be1f4dfd00d 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:42 compute-0 podman[253399]: 2025-12-13 04:11:42.986401437 +0000 UTC m=+0.059857396 container create 13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:11:43 compute-0 systemd[1]: Started libpod-conmon-13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf.scope.
Dec 13 04:11:43 compute-0 podman[253399]: 2025-12-13 04:11:42.956789753 +0000 UTC m=+0.030245732 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:11:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:11:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe75d083890ef5444a8a94978e800418f4758366f843964c01f2bd08958cd50c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:11:43 compute-0 podman[253399]: 2025-12-13 04:11:43.08116994 +0000 UTC m=+0.154625899 container init 13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:11:43 compute-0 podman[253399]: 2025-12-13 04:11:43.088138448 +0000 UTC m=+0.161594417 container start 13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 04:11:43 compute-0 neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9[253415]: [NOTICE]   (253419) : New worker (253421) forked
Dec 13 04:11:43 compute-0 neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9[253415]: [NOTICE]   (253419) : Loading success.
Dec 13 04:11:43 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:43.155 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:11:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Dec 13 04:11:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085926922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1085926922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e195 do_prune osdmap full prune enabled
Dec 13 04:11:43 compute-0 ceph-mon[75071]: osdmap e195: 3 total, 3 up, 3 in
Dec 13 04:11:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1085926922' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1085926922' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e196 e196: 3 total, 3 up, 3 in
Dec 13 04:11:43 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e196: 3 total, 3 up, 3 in
Dec 13 04:11:44 compute-0 nova_compute[243704]: 2025-12-13 04:11:44.360 243708 DEBUG nova.compute.manager [req-9e21c061-f55b-4ed3-9cc8-fba4b943821c req-f56a5150-e6ff-4c54-ad03-ee8bd66019ee 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:11:44 compute-0 nova_compute[243704]: 2025-12-13 04:11:44.361 243708 DEBUG oslo_concurrency.lockutils [req-9e21c061-f55b-4ed3-9cc8-fba4b943821c req-f56a5150-e6ff-4c54-ad03-ee8bd66019ee 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:11:44 compute-0 nova_compute[243704]: 2025-12-13 04:11:44.361 243708 DEBUG oslo_concurrency.lockutils [req-9e21c061-f55b-4ed3-9cc8-fba4b943821c req-f56a5150-e6ff-4c54-ad03-ee8bd66019ee 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:11:44 compute-0 nova_compute[243704]: 2025-12-13 04:11:44.362 243708 DEBUG oslo_concurrency.lockutils [req-9e21c061-f55b-4ed3-9cc8-fba4b943821c req-f56a5150-e6ff-4c54-ad03-ee8bd66019ee 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:11:44 compute-0 nova_compute[243704]: 2025-12-13 04:11:44.362 243708 DEBUG nova.compute.manager [req-9e21c061-f55b-4ed3-9cc8-fba4b943821c req-f56a5150-e6ff-4c54-ad03-ee8bd66019ee 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] No waiting events found dispatching network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:11:44 compute-0 nova_compute[243704]: 2025-12-13 04:11:44.362 243708 WARNING nova.compute.manager [req-9e21c061-f55b-4ed3-9cc8-fba4b943821c req-f56a5150-e6ff-4c54-ad03-ee8bd66019ee 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received unexpected event network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b for instance with vm_state active and task_state None.
Dec 13 04:11:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e196 do_prune osdmap full prune enabled
Dec 13 04:11:44 compute-0 ceph-mon[75071]: pgmap v1026: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.5 MiB/s wr, 87 op/s
Dec 13 04:11:44 compute-0 ceph-mon[75071]: osdmap e196: 3 total, 3 up, 3 in
Dec 13 04:11:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e197 e197: 3 total, 3 up, 3 in
Dec 13 04:11:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e197: 3 total, 3 up, 3 in
Dec 13 04:11:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 45 KiB/s wr, 377 op/s
Dec 13 04:11:45 compute-0 ceph-mon[75071]: osdmap e197: 3 total, 3 up, 3 in
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.629418) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599105629486, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1212, "num_deletes": 253, "total_data_size": 1570572, "memory_usage": 1599088, "flush_reason": "Manual Compaction"}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599105643598, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1552033, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20275, "largest_seqno": 21486, "table_properties": {"data_size": 1546213, "index_size": 3087, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13290, "raw_average_key_size": 20, "raw_value_size": 1534180, "raw_average_value_size": 2356, "num_data_blocks": 138, "num_entries": 651, "num_filter_entries": 651, "num_deletions": 253, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599021, "oldest_key_time": 1765599021, "file_creation_time": 1765599105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14216 microseconds, and 8839 cpu microseconds.
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.643635) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1552033 bytes OK
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.643656) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.644847) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.644864) EVENT_LOG_v1 {"time_micros": 1765599105644860, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.644886) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1564907, prev total WAL file size 1564907, number of live WAL files 2.
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.645544) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1515KB)], [47(7880KB)]
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599105645634, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9622073, "oldest_snapshot_seqno": -1}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739767382' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1739767382' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4621 keys, 7879092 bytes, temperature: kUnknown
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599105697250, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7879092, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7846264, "index_size": 20174, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11589, "raw_key_size": 114657, "raw_average_key_size": 24, "raw_value_size": 7760864, "raw_average_value_size": 1679, "num_data_blocks": 835, "num_entries": 4621, "num_filter_entries": 4621, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599105, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.697576) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7879092 bytes
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.699593) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 186.0 rd, 152.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.7 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(11.3) write-amplify(5.1) OK, records in: 5142, records dropped: 521 output_compression: NoCompression
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.699621) EVENT_LOG_v1 {"time_micros": 1765599105699612, "job": 24, "event": "compaction_finished", "compaction_time_micros": 51735, "compaction_time_cpu_micros": 21759, "output_level": 6, "num_output_files": 1, "total_output_size": 7879092, "num_input_records": 5142, "num_output_records": 4621, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599105699957, "job": 24, "event": "table_file_deletion", "file_number": 49}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599105701246, "job": 24, "event": "table_file_deletion", "file_number": 47}
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.645395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.701415) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.701424) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.701426) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.701427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:45 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:11:45.701429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:11:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e197 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e197 do_prune osdmap full prune enabled
Dec 13 04:11:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e198 e198: 3 total, 3 up, 3 in
Dec 13 04:11:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e198: 3 total, 3 up, 3 in
Dec 13 04:11:46 compute-0 nova_compute[243704]: 2025-12-13 04:11:46.204 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909029281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2909029281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:46 compute-0 ceph-mon[75071]: pgmap v1029: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 45 KiB/s wr, 377 op/s
Dec 13 04:11:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1739767382' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1739767382' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:46 compute-0 ceph-mon[75071]: osdmap e198: 3 total, 3 up, 3 in
Dec 13 04:11:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2909029281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2909029281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:46 compute-0 nova_compute[243704]: 2025-12-13 04:11:46.938 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:46 compute-0 NetworkManager[48899]: <info>  [1765599106.9398] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Dec 13 04:11:46 compute-0 NetworkManager[48899]: <info>  [1765599106.9403] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.051 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.053 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:47 compute-0 ovn_controller[145204]: 2025-12-13T04:11:47Z|00050|binding|INFO|Releasing lport 09de48ad-091f-4941-8093-f0d00d05e24a from this chassis (sb_readonly=0)
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.064 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.328 243708 DEBUG nova.compute.manager [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-changed-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.329 243708 DEBUG nova.compute.manager [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Refreshing instance network info cache due to event network-changed-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.329 243708 DEBUG oslo_concurrency.lockutils [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.329 243708 DEBUG oslo_concurrency.lockutils [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:11:47 compute-0 nova_compute[243704]: 2025-12-13 04:11:47.330 243708 DEBUG nova.network.neutron [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Refreshing network info cache for port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:11:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 37 KiB/s wr, 314 op/s
Dec 13 04:11:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:11:48.158 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:11:48 compute-0 ceph-mon[75071]: pgmap v1031: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 37 KiB/s wr, 314 op/s
Dec 13 04:11:49 compute-0 nova_compute[243704]: 2025-12-13 04:11:49.113 243708 DEBUG nova.network.neutron [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updated VIF entry in instance network info cache for port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:11:49 compute-0 nova_compute[243704]: 2025-12-13 04:11:49.114 243708 DEBUG nova.network.neutron [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating instance_info_cache with network_info: [{"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:11:49 compute-0 nova_compute[243704]: 2025-12-13 04:11:49.130 243708 DEBUG oslo_concurrency.lockutils [req-c040fa0a-45b2-4e14-9131-7fa5e978e687 req-424e7f7c-0606-44bd-b3f6-ede215fd1d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:11:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 296 op/s
Dec 13 04:11:49 compute-0 podman[253432]: 2025-12-13 04:11:49.935950205 +0000 UTC m=+0.084973248 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec 13 04:11:50 compute-0 ceph-mon[75071]: pgmap v1032: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 31 KiB/s wr, 296 op/s
Dec 13 04:11:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e198 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:51 compute-0 nova_compute[243704]: 2025-12-13 04:11:51.207 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 24 KiB/s wr, 228 op/s
Dec 13 04:11:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e198 do_prune osdmap full prune enabled
Dec 13 04:11:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e199 e199: 3 total, 3 up, 3 in
Dec 13 04:11:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e199: 3 total, 3 up, 3 in
Dec 13 04:11:52 compute-0 nova_compute[243704]: 2025-12-13 04:11:52.055 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003487933556875338 of space, bias 1.0, pg target 0.10463800670626014 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035112557905201736 of space, bias 1.0, pg target 0.1053376737156052 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.514160579525649e-07 of space, bias 1.0, pg target 4.5424817385769466e-05 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006658722275812628 of space, bias 1.0, pg target 0.19976166827437886 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.198737627672654e-06 of space, bias 4.0, pg target 0.0014384851532071848 quantized to 16 (current 16)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:11:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:11:52 compute-0 ceph-mon[75071]: pgmap v1033: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 24 KiB/s wr, 228 op/s
Dec 13 04:11:52 compute-0 ceph-mon[75071]: osdmap e199: 3 total, 3 up, 3 in
Dec 13 04:11:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 33 op/s
Dec 13 04:11:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e199 do_prune osdmap full prune enabled
Dec 13 04:11:53 compute-0 ceph-mon[75071]: pgmap v1035: 305 pgs: 305 active+clean; 134 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 33 op/s
Dec 13 04:11:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e200 e200: 3 total, 3 up, 3 in
Dec 13 04:11:54 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e200: 3 total, 3 up, 3 in
Dec 13 04:11:54 compute-0 ceph-mon[75071]: osdmap e200: 3 total, 3 up, 3 in
Dec 13 04:11:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931677850' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3931677850' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 144 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.3 MiB/s wr, 87 op/s
Dec 13 04:11:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:11:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1508860433' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:11:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1508860433' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e200 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:11:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e200 do_prune osdmap full prune enabled
Dec 13 04:11:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e201 e201: 3 total, 3 up, 3 in
Dec 13 04:11:55 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e201: 3 total, 3 up, 3 in
Dec 13 04:11:55 compute-0 ovn_controller[145204]: 2025-12-13T04:11:55Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:9f:6c 10.100.0.7
Dec 13 04:11:55 compute-0 ovn_controller[145204]: 2025-12-13T04:11:55Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:9f:6c 10.100.0.7
Dec 13 04:11:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3931677850' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3931677850' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:56 compute-0 ceph-mon[75071]: pgmap v1037: 305 pgs: 305 active+clean; 144 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 127 KiB/s rd, 1.3 MiB/s wr, 87 op/s
Dec 13 04:11:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1508860433' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:11:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1508860433' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:11:56 compute-0 ceph-mon[75071]: osdmap e201: 3 total, 3 up, 3 in
Dec 13 04:11:56 compute-0 nova_compute[243704]: 2025-12-13 04:11:56.211 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:57 compute-0 nova_compute[243704]: 2025-12-13 04:11:57.058 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:11:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 144 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Dec 13 04:11:58 compute-0 ceph-mon[75071]: pgmap v1039: 305 pgs: 305 active+clean; 144 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Dec 13 04:11:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 162 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 483 KiB/s rd, 3.2 MiB/s wr, 148 op/s
Dec 13 04:12:00 compute-0 ceph-mon[75071]: pgmap v1040: 305 pgs: 305 active+clean; 162 MiB data, 295 MiB used, 60 GiB / 60 GiB avail; 483 KiB/s rd, 3.2 MiB/s wr, 148 op/s
Dec 13 04:12:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e201 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e201 do_prune osdmap full prune enabled
Dec 13 04:12:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e202 e202: 3 total, 3 up, 3 in
Dec 13 04:12:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e202: 3 total, 3 up, 3 in
Dec 13 04:12:01 compute-0 nova_compute[243704]: 2025-12-13 04:12:01.215 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 519 KiB/s rd, 2.7 MiB/s wr, 138 op/s
Dec 13 04:12:01 compute-0 ceph-mon[75071]: osdmap e202: 3 total, 3 up, 3 in
Dec 13 04:12:01 compute-0 ceph-mon[75071]: pgmap v1042: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 519 KiB/s rd, 2.7 MiB/s wr, 138 op/s
Dec 13 04:12:01 compute-0 podman[253456]: 2025-12-13 04:12:01.905214747 +0000 UTC m=+0.050470991 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 04:12:02 compute-0 nova_compute[243704]: 2025-12-13 04:12:02.059 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:02 compute-0 nova_compute[243704]: 2025-12-13 04:12:02.182 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 1.9 MiB/s wr, 107 op/s
Dec 13 04:12:03 compute-0 nova_compute[243704]: 2025-12-13 04:12:03.614 243708 DEBUG oslo_concurrency.lockutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:03 compute-0 nova_compute[243704]: 2025-12-13 04:12:03.615 243708 DEBUG oslo_concurrency.lockutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:03 compute-0 nova_compute[243704]: 2025-12-13 04:12:03.653 243708 DEBUG nova.objects.instance [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'flavor' on Instance uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.084 243708 DEBUG oslo_concurrency.lockutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:04 compute-0 ceph-mon[75071]: pgmap v1043: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 416 KiB/s rd, 1.9 MiB/s wr, 107 op/s
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.533 243708 DEBUG oslo_concurrency.lockutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.534 243708 DEBUG oslo_concurrency.lockutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.534 243708 INFO nova.compute.manager [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Attaching volume a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc to /dev/vdb
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.779 243708 DEBUG os_brick.utils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.783 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.795 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.795 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[b35de58a-f424-4cbf-a610-3074e0db3ffe]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.796 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.803 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.804 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5f75c6-e30b-4d53-8ad0-4275bbae15b1]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.805 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.811 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.812 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[b7bbe1b4-01c2-43a5-a2d5-d1790e19b967]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.813 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[01d2148c-a5e0-4772-a78a-b202fc6e79ba]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.813 243708 DEBUG oslo_concurrency.processutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.834 243708 DEBUG oslo_concurrency.processutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.836 243708 DEBUG os_brick.initiator.connectors.lightos [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.836 243708 DEBUG os_brick.initiator.connectors.lightos [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.836 243708 DEBUG os_brick.initiator.connectors.lightos [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.837 243708 DEBUG os_brick.utils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] <== get_connector_properties: return (56ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:12:04 compute-0 nova_compute[243704]: 2025-12-13 04:12:04.837 243708 DEBUG nova.virt.block_device [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating existing volume attachment record: 21e19189-b292-4822-802a-bdf22150925f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:12:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 1.6 MiB/s wr, 90 op/s
Dec 13 04:12:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/832464894' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:05 compute-0 nova_compute[243704]: 2025-12-13 04:12:05.734 243708 DEBUG nova.objects.instance [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'flavor' on Instance uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:05 compute-0 nova_compute[243704]: 2025-12-13 04:12:05.760 243708 DEBUG nova.virt.libvirt.driver [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Attempting to attach volume a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:12:05 compute-0 nova_compute[243704]: 2025-12-13 04:12:05.763 243708 DEBUG nova.virt.libvirt.guest [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:12:05 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:12:05 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc">
Dec 13 04:12:05 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:05 compute-0 nova_compute[243704]:   </source>
Dec 13 04:12:05 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:12:05 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:05 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:12:05 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:12:05 compute-0 nova_compute[243704]:   <serial>a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc</serial>
Dec 13 04:12:05 compute-0 nova_compute[243704]: </disk>
Dec 13 04:12:05 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:12:05 compute-0 nova_compute[243704]: 2025-12-13 04:12:05.880 243708 DEBUG nova.virt.libvirt.driver [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:05 compute-0 nova_compute[243704]: 2025-12-13 04:12:05.881 243708 DEBUG nova.virt.libvirt.driver [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:05 compute-0 nova_compute[243704]: 2025-12-13 04:12:05.881 243708 DEBUG nova.virt.libvirt.driver [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:05 compute-0 nova_compute[243704]: 2025-12-13 04:12:05.881 243708 DEBUG nova.virt.libvirt.driver [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No VIF found with MAC fa:16:3e:ab:9f:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:12:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:06 compute-0 nova_compute[243704]: 2025-12-13 04:12:06.197 243708 DEBUG oslo_concurrency.lockutils [None req-d419a66f-9638-45a1-980e-bb6c219fecb1 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:06 compute-0 nova_compute[243704]: 2025-12-13 04:12:06.218 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:06 compute-0 ceph-mon[75071]: pgmap v1044: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 350 KiB/s rd, 1.6 MiB/s wr, 90 op/s
Dec 13 04:12:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/832464894' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:07 compute-0 nova_compute[243704]: 2025-12-13 04:12:07.061 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:07 compute-0 nova_compute[243704]: 2025-12-13 04:12:07.089 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 1.5 MiB/s wr, 86 op/s
Dec 13 04:12:07 compute-0 nova_compute[243704]: 2025-12-13 04:12:07.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:07 compute-0 nova_compute[243704]: 2025-12-13 04:12:07.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 04:12:08 compute-0 ceph-mon[75071]: pgmap v1045: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 1.5 MiB/s wr, 86 op/s
Dec 13 04:12:08 compute-0 nova_compute[243704]: 2025-12-13 04:12:08.915 243708 DEBUG oslo_concurrency.lockutils [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:08 compute-0 nova_compute[243704]: 2025-12-13 04:12:08.915 243708 DEBUG oslo_concurrency.lockutils [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:08 compute-0 nova_compute[243704]: 2025-12-13 04:12:08.930 243708 INFO nova.compute.manager [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Detaching volume a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc
Dec 13 04:12:08 compute-0 nova_compute[243704]: 2025-12-13 04:12:08.989 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.093 243708 INFO nova.virt.block_device [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Attempting to driver detach volume a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc from mountpoint /dev/vdb
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.101 243708 DEBUG nova.virt.libvirt.driver [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Attempting to detach device vdb from instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.102 243708 DEBUG nova.virt.libvirt.guest [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc">
Dec 13 04:12:09 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   </source>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <serial>a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc</serial>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]: </disk>
Dec 13 04:12:09 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.109 243708 INFO nova.virt.libvirt.driver [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully detached device vdb from instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 from the persistent domain config.
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.109 243708 DEBUG nova.virt.libvirt.driver [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.109 243708 DEBUG nova.virt.libvirt.guest [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc">
Dec 13 04:12:09 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   </source>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <serial>a8a5e63c-66ad-4f30-b6d2-92ffd3121ccc</serial>
Dec 13 04:12:09 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:12:09 compute-0 nova_compute[243704]: </disk>
Dec 13 04:12:09 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.163 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599129.1632535, b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.165 243708 DEBUG nova.virt.libvirt.driver [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.167 243708 INFO nova.virt.libvirt.driver [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully detached device vdb from instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 from the live domain config.
Dec 13 04:12:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 101 KiB/s wr, 15 op/s
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.558 243708 DEBUG nova.objects.instance [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'flavor' on Instance uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.585 243708 DEBUG oslo_concurrency.lockutils [None req-7d205c8b-4e82-4e20-a904-a4c3ec4a35bd 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.900 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.901 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.903 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.903 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:12:09 compute-0 nova_compute[243704]: 2025-12-13 04:12:09.904 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:09 compute-0 podman[253504]: 2025-12-13 04:12:09.946765443 +0000 UTC m=+0.086589872 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:12:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1961005095' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:10 compute-0 ceph-mon[75071]: pgmap v1046: 305 pgs: 305 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 43 KiB/s rd, 101 KiB/s wr, 15 op/s
Dec 13 04:12:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1961005095' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.479 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.537 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.537 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000003 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.705 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.706 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4482MB free_disk=59.94272615201771GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.706 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.706 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e202 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.969 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.970 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:12:10 compute-0 nova_compute[243704]: 2025-12-13 04:12:10.970 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.142 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.220 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 169 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 236 KiB/s wr, 11 op/s
Dec 13 04:12:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e202 do_prune osdmap full prune enabled
Dec 13 04:12:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e203 e203: 3 total, 3 up, 3 in
Dec 13 04:12:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e203: 3 total, 3 up, 3 in
Dec 13 04:12:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/119533806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.658 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.665 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.679 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.712 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.712 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.714 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.714 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 04:12:11 compute-0 nova_compute[243704]: 2025-12-13 04:12:11.728 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.063 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.468 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.483 243708 DEBUG nova.compute.manager [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:12 compute-0 ceph-mon[75071]: pgmap v1047: 305 pgs: 305 active+clean; 169 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 7.2 KiB/s rd, 236 KiB/s wr, 11 op/s
Dec 13 04:12:12 compute-0 ceph-mon[75071]: osdmap e203: 3 total, 3 up, 3 in
Dec 13 04:12:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/119533806' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.539 243708 INFO nova.compute.manager [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] instance snapshotting
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.729 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.729 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.729 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.747 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.747 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.747 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.747 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.785 243708 INFO nova.virt.libvirt.driver [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Beginning live snapshot process
Dec 13 04:12:12 compute-0 nova_compute[243704]: 2025-12-13 04:12:12.938 243708 DEBUG nova.virt.libvirt.imagebackend [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No parent info for 36cf6469-9e96-4186-bf30-37c785f25db6; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163
Dec 13 04:12:13 compute-0 sudo[253603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:12:13 compute-0 sudo[253603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:13 compute-0 sudo[253603]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:13 compute-0 nova_compute[243704]: 2025-12-13 04:12:13.154 243708 DEBUG nova.storage.rbd_utils [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] creating snapshot(9f806c5ed28143f9a0ba74ef41ffb88d) on rbd image(b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 04:12:13 compute-0 sudo[253628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:12:13 compute-0 sudo[253628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 169 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 248 KiB/s wr, 11 op/s
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Dec 13 04:12:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Dec 13 04:12:13 compute-0 nova_compute[243704]: 2025-12-13 04:12:13.532 243708 DEBUG nova.storage.rbd_utils [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] cloning vms/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk@9f806c5ed28143f9a0ba74ef41ffb88d to images/89bd4ded-0d1d-43c0-8889-725d21f3df99 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 04:12:13 compute-0 nova_compute[243704]: 2025-12-13 04:12:13.633 243708 DEBUG nova.storage.rbd_utils [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] flattening images/89bd4ded-0d1d-43c0-8889-725d21f3df99 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Dec 13 04:12:13 compute-0 sudo[253628]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:12:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:12:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:12:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:12:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:12:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:12:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:12:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:12:13 compute-0 sudo[253756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:12:13 compute-0 sudo[253756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:13 compute-0 sudo[253756]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:14 compute-0 nova_compute[243704]: 2025-12-13 04:12:14.015 243708 DEBUG nova.storage.rbd_utils [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] removing snapshot(9f806c5ed28143f9a0ba74ef41ffb88d) on rbd image(b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Dec 13 04:12:14 compute-0 sudo[253781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:12:14 compute-0 sudo[253781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:14 compute-0 podman[253835]: 2025-12-13 04:12:14.310786919 +0000 UTC m=+0.049478814 container create dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:12:14 compute-0 systemd[1]: Started libpod-conmon-dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560.scope.
Dec 13 04:12:14 compute-0 podman[253835]: 2025-12-13 04:12:14.287440365 +0000 UTC m=+0.026132270 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:14 compute-0 podman[253835]: 2025-12-13 04:12:14.402702564 +0000 UTC m=+0.141394479 container init dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:12:14 compute-0 podman[253835]: 2025-12-13 04:12:14.413546158 +0000 UTC m=+0.152238063 container start dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:12:14 compute-0 podman[253835]: 2025-12-13 04:12:14.417223678 +0000 UTC m=+0.155915703 container attach dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackburn, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:12:14 compute-0 mystifying_blackburn[253851]: 167 167
Dec 13 04:12:14 compute-0 systemd[1]: libpod-dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560.scope: Deactivated successfully.
Dec 13 04:12:14 compute-0 podman[253835]: 2025-12-13 04:12:14.421391371 +0000 UTC m=+0.160083266 container died dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:12:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2388418ac74c34322bfea0a4530574c2eddb81ef19196502f74c48b76a0ea57d-merged.mount: Deactivated successfully.
Dec 13 04:12:14 compute-0 podman[253835]: 2025-12-13 04:12:14.464626524 +0000 UTC m=+0.203318409 container remove dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_blackburn, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:12:14 compute-0 systemd[1]: libpod-conmon-dd5a29eb63d4ddc92c428816b1a1534fdd992a2aa059c93b7f9053bc33e75560.scope: Deactivated successfully.
Dec 13 04:12:14 compute-0 ceph-mon[75071]: pgmap v1049: 305 pgs: 305 active+clean; 169 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 7.6 KiB/s rd, 248 KiB/s wr, 11 op/s
Dec 13 04:12:14 compute-0 ceph-mon[75071]: osdmap e204: 3 total, 3 up, 3 in
Dec 13 04:12:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:12:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:12:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:12:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:12:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:12:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:12:14 compute-0 podman[253874]: 2025-12-13 04:12:14.656802611 +0000 UTC m=+0.041994911 container create 722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:14 compute-0 systemd[1]: Started libpod-conmon-722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509.scope.
Dec 13 04:12:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:14 compute-0 podman[253874]: 2025-12-13 04:12:14.639687137 +0000 UTC m=+0.024879447 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7da85c288e3f89b2ccb52fb278e861319805a67076f2893f780e0336e01fbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7da85c288e3f89b2ccb52fb278e861319805a67076f2893f780e0336e01fbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7da85c288e3f89b2ccb52fb278e861319805a67076f2893f780e0336e01fbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7da85c288e3f89b2ccb52fb278e861319805a67076f2893f780e0336e01fbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd7da85c288e3f89b2ccb52fb278e861319805a67076f2893f780e0336e01fbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:14 compute-0 podman[253874]: 2025-12-13 04:12:14.769973142 +0000 UTC m=+0.155165452 container init 722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gates, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:12:14 compute-0 podman[253874]: 2025-12-13 04:12:14.780744385 +0000 UTC m=+0.165936685 container start 722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gates, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:12:14 compute-0 podman[253874]: 2025-12-13 04:12:14.784117407 +0000 UTC m=+0.169309727 container attach 722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gates, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Dec 13 04:12:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Dec 13 04:12:14 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Dec 13 04:12:14 compute-0 nova_compute[243704]: 2025-12-13 04:12:14.939 243708 DEBUG nova.storage.rbd_utils [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] creating snapshot(snap) on rbd image(89bd4ded-0d1d-43c0-8889-725d21f3df99) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Dec 13 04:12:15 compute-0 serene_gates[253890]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:12:15 compute-0 serene_gates[253890]: --> All data devices are unavailable
Dec 13 04:12:15 compute-0 systemd[1]: libpod-722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509.scope: Deactivated successfully.
Dec 13 04:12:15 compute-0 podman[253874]: 2025-12-13 04:12:15.36114124 +0000 UTC m=+0.746333580 container died 722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gates, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:12:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-dd7da85c288e3f89b2ccb52fb278e861319805a67076f2893f780e0336e01fbe-merged.mount: Deactivated successfully.
Dec 13 04:12:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1052: 305 pgs: 305 active+clean; 196 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 3.4 MiB/s wr, 99 op/s
Dec 13 04:12:15 compute-0 podman[253874]: 2025-12-13 04:12:15.402979785 +0000 UTC m=+0.788172075 container remove 722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_gates, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:12:15 compute-0 systemd[1]: libpod-conmon-722dd96dfd671e804f6358455720c8fcf11f3fd36695150b7b38124dafb93509.scope: Deactivated successfully.
Dec 13 04:12:15 compute-0 sudo[253781]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.482 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating instance_info_cache with network_info: [{"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.500 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.500 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.500 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.501 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.501 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.501 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.501 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.501 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:12:15 compute-0 nova_compute[243704]: 2025-12-13 04:12:15.502 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:15 compute-0 sudo[253940]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:12:15 compute-0 sudo[253940]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:15 compute-0 sudo[253940]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:15 compute-0 sudo[253965]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:12:15 compute-0 sudo[253965]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:15 compute-0 podman[254002]: 2025-12-13 04:12:15.867434162 +0000 UTC m=+0.034137947 container create 837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:12:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e205 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:15 compute-0 systemd[1]: Started libpod-conmon-837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6.scope.
Dec 13 04:12:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Dec 13 04:12:15 compute-0 ceph-mon[75071]: osdmap e205: 3 total, 3 up, 3 in
Dec 13 04:12:15 compute-0 ceph-mon[75071]: pgmap v1052: 305 pgs: 305 active+clean; 196 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 3.4 MiB/s wr, 99 op/s
Dec 13 04:12:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Dec 13 04:12:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Dec 13 04:12:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:15 compute-0 podman[254002]: 2025-12-13 04:12:15.85335385 +0000 UTC m=+0.020057655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:15 compute-0 podman[254002]: 2025-12-13 04:12:15.959651765 +0000 UTC m=+0.126355590 container init 837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:12:15 compute-0 podman[254002]: 2025-12-13 04:12:15.966071939 +0000 UTC m=+0.132775744 container start 837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:12:15 compute-0 hardcore_archimedes[254019]: 167 167
Dec 13 04:12:15 compute-0 systemd[1]: libpod-837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6.scope: Deactivated successfully.
Dec 13 04:12:15 compute-0 podman[254002]: 2025-12-13 04:12:15.971638541 +0000 UTC m=+0.138342396 container attach 837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:12:15 compute-0 podman[254002]: 2025-12-13 04:12:15.972817283 +0000 UTC m=+0.139521108 container died 837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:12:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-ea487d8c0d5a62157c663bb094c0e7ff357c898c68bb0e16a27ff8ffe0dc166a-merged.mount: Deactivated successfully.
Dec 13 04:12:16 compute-0 podman[254002]: 2025-12-13 04:12:16.01028616 +0000 UTC m=+0.176989955 container remove 837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_archimedes, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:12:16 compute-0 systemd[1]: libpod-conmon-837d0089087d3c0a02fa40f9a938a02b1d01da8ae805b6fbea28878ae98df2c6.scope: Deactivated successfully.
Dec 13 04:12:16 compute-0 podman[254042]: 2025-12-13 04:12:16.177832678 +0000 UTC m=+0.038372343 container create 2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:12:16 compute-0 systemd[1]: Started libpod-conmon-2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f.scope.
Dec 13 04:12:16 compute-0 podman[254042]: 2025-12-13 04:12:16.161443303 +0000 UTC m=+0.021982988 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:16 compute-0 nova_compute[243704]: 2025-12-13 04:12:16.274 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b762fd3272c3ea9070e6812185515944d873af6aa2faa1e9ac96fba8935d8ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b762fd3272c3ea9070e6812185515944d873af6aa2faa1e9ac96fba8935d8ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b762fd3272c3ea9070e6812185515944d873af6aa2faa1e9ac96fba8935d8ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b762fd3272c3ea9070e6812185515944d873af6aa2faa1e9ac96fba8935d8ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:16 compute-0 podman[254042]: 2025-12-13 04:12:16.29656462 +0000 UTC m=+0.157104305 container init 2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:12:16 compute-0 podman[254042]: 2025-12-13 04:12:16.304538277 +0000 UTC m=+0.165077942 container start 2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bassi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:12:16 compute-0 podman[254042]: 2025-12-13 04:12:16.307386784 +0000 UTC m=+0.167926459 container attach 2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bassi, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:12:16 compute-0 goofy_bassi[254058]: {
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:     "0": [
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:         {
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "devices": [
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "/dev/loop3"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             ],
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_name": "ceph_lv0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_size": "21470642176",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "name": "ceph_lv0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "tags": {
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cluster_name": "ceph",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.crush_device_class": "",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.encrypted": "0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.objectstore": "bluestore",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osd_id": "0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.type": "block",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.vdo": "0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.with_tpm": "0"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             },
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "type": "block",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "vg_name": "ceph_vg0"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:         }
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:     ],
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:     "1": [
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:         {
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "devices": [
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "/dev/loop4"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             ],
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_name": "ceph_lv1",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_size": "21470642176",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "name": "ceph_lv1",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "tags": {
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cluster_name": "ceph",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.crush_device_class": "",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.encrypted": "0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.objectstore": "bluestore",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osd_id": "1",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.type": "block",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.vdo": "0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.with_tpm": "0"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             },
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "type": "block",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "vg_name": "ceph_vg1"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:         }
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:     ],
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:     "2": [
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:         {
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "devices": [
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "/dev/loop5"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             ],
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_name": "ceph_lv2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_size": "21470642176",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "name": "ceph_lv2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "tags": {
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.cluster_name": "ceph",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.crush_device_class": "",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.encrypted": "0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.objectstore": "bluestore",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osd_id": "2",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.type": "block",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.vdo": "0",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:                 "ceph.with_tpm": "0"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             },
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "type": "block",
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:             "vg_name": "ceph_vg2"
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:         }
Dec 13 04:12:16 compute-0 goofy_bassi[254058]:     ]
Dec 13 04:12:16 compute-0 goofy_bassi[254058]: }
Dec 13 04:12:16 compute-0 systemd[1]: libpod-2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f.scope: Deactivated successfully.
Dec 13 04:12:16 compute-0 podman[254042]: 2025-12-13 04:12:16.616701571 +0000 UTC m=+0.477241236 container died 2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bassi, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 04:12:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-9b762fd3272c3ea9070e6812185515944d873af6aa2faa1e9ac96fba8935d8ae-merged.mount: Deactivated successfully.
Dec 13 04:12:16 compute-0 podman[254042]: 2025-12-13 04:12:16.665873925 +0000 UTC m=+0.526413590 container remove 2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_bassi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:12:16 compute-0 systemd[1]: libpod-conmon-2229b13ddf42b4dd19e9b3dd72a7131c116b52732c7122414867aa812ca23f5f.scope: Deactivated successfully.
Dec 13 04:12:16 compute-0 sudo[253965]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:16 compute-0 sudo[254078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:12:16 compute-0 sudo[254078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:16 compute-0 sudo[254078]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:16 compute-0 sudo[254103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:12:16 compute-0 sudo[254103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:16 compute-0 ceph-mon[75071]: osdmap e206: 3 total, 3 up, 3 in
Dec 13 04:12:17 compute-0 nova_compute[243704]: 2025-12-13 04:12:17.065 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:17 compute-0 podman[254141]: 2025-12-13 04:12:17.158397984 +0000 UTC m=+0.042716230 container create 4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:12:17 compute-0 systemd[1]: Started libpod-conmon-4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9.scope.
Dec 13 04:12:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:17 compute-0 podman[254141]: 2025-12-13 04:12:17.226130673 +0000 UTC m=+0.110448939 container init 4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_visvesvaraya, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:12:17 compute-0 podman[254141]: 2025-12-13 04:12:17.235015814 +0000 UTC m=+0.119334080 container start 4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_visvesvaraya, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:12:17 compute-0 podman[254141]: 2025-12-13 04:12:17.238520089 +0000 UTC m=+0.122838365 container attach 4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:12:17 compute-0 podman[254141]: 2025-12-13 04:12:17.14350702 +0000 UTC m=+0.027825286 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:17 compute-0 condescending_visvesvaraya[254158]: 167 167
Dec 13 04:12:17 compute-0 systemd[1]: libpod-4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9.scope: Deactivated successfully.
Dec 13 04:12:17 compute-0 podman[254141]: 2025-12-13 04:12:17.240494542 +0000 UTC m=+0.124812798 container died 4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:12:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-908198ac93e0e85cdc74ce3468f5004be800158eb3a4b2be98cf6f8173596619-merged.mount: Deactivated successfully.
Dec 13 04:12:17 compute-0 nova_compute[243704]: 2025-12-13 04:12:17.265 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:17 compute-0 podman[254141]: 2025-12-13 04:12:17.277924609 +0000 UTC m=+0.162242855 container remove 4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_visvesvaraya, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:12:17 compute-0 systemd[1]: libpod-conmon-4f4e9379f85e4feb2445c235ed8f97fa8807d69f5eff00280fb26fcdaa6360b9.scope: Deactivated successfully.
Dec 13 04:12:17 compute-0 nova_compute[243704]: 2025-12-13 04:12:17.292 243708 INFO nova.virt.libvirt.driver [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Snapshot image upload complete
Dec 13 04:12:17 compute-0 nova_compute[243704]: 2025-12-13 04:12:17.293 243708 INFO nova.compute.manager [None req-02de5fab-81bd-4baf-9b60-6a1145f18d0c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Took 4.75 seconds to snapshot the instance on the hypervisor.
Dec 13 04:12:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 196 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 84 op/s
Dec 13 04:12:17 compute-0 podman[254181]: 2025-12-13 04:12:17.439402981 +0000 UTC m=+0.039728029 container create 021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mestorf, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:12:17 compute-0 systemd[1]: Started libpod-conmon-021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721.scope.
Dec 13 04:12:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465249db9a854d917dfd168f5bacfe0e9ade20c56330e26754c3dfc5fe92c6bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465249db9a854d917dfd168f5bacfe0e9ade20c56330e26754c3dfc5fe92c6bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465249db9a854d917dfd168f5bacfe0e9ade20c56330e26754c3dfc5fe92c6bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/465249db9a854d917dfd168f5bacfe0e9ade20c56330e26754c3dfc5fe92c6bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:17 compute-0 podman[254181]: 2025-12-13 04:12:17.424119397 +0000 UTC m=+0.024444465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:12:17 compute-0 podman[254181]: 2025-12-13 04:12:17.531015238 +0000 UTC m=+0.131340306 container init 021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:12:17 compute-0 podman[254181]: 2025-12-13 04:12:17.537186826 +0000 UTC m=+0.137511884 container start 021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mestorf, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:12:17 compute-0 podman[254181]: 2025-12-13 04:12:17.541437392 +0000 UTC m=+0.141762560 container attach 021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:12:17 compute-0 ceph-mon[75071]: pgmap v1054: 305 pgs: 305 active+clean; 196 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.0 MiB/s wr, 84 op/s
Dec 13 04:12:18 compute-0 nova_compute[243704]: 2025-12-13 04:12:18.116 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:18 compute-0 lvm[254276]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:12:18 compute-0 lvm[254276]: VG ceph_vg0 finished
Dec 13 04:12:18 compute-0 lvm[254277]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:12:18 compute-0 lvm[254277]: VG ceph_vg1 finished
Dec 13 04:12:18 compute-0 lvm[254279]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:12:18 compute-0 lvm[254279]: VG ceph_vg2 finished
Dec 13 04:12:18 compute-0 lvm[254280]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:12:18 compute-0 lvm[254280]: VG ceph_vg0 finished
Dec 13 04:12:18 compute-0 distracted_mestorf[254198]: {}
Dec 13 04:12:18 compute-0 systemd[1]: libpod-021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721.scope: Deactivated successfully.
Dec 13 04:12:18 compute-0 podman[254181]: 2025-12-13 04:12:18.399711958 +0000 UTC m=+1.000037026 container died 021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mestorf, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Dec 13 04:12:18 compute-0 systemd[1]: libpod-021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721.scope: Consumed 1.331s CPU time.
Dec 13 04:12:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-465249db9a854d917dfd168f5bacfe0e9ade20c56330e26754c3dfc5fe92c6bf-merged.mount: Deactivated successfully.
Dec 13 04:12:18 compute-0 podman[254181]: 2025-12-13 04:12:18.570963386 +0000 UTC m=+1.171288444 container remove 021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:12:18 compute-0 systemd[1]: libpod-conmon-021ef58492ba0526f7db1629992efa0b3c6eb85ea1ce3a9ff934679ba9004721.scope: Deactivated successfully.
Dec 13 04:12:18 compute-0 sudo[254103]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:12:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:12:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:12:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:12:18 compute-0 sudo[254296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:12:18 compute-0 sudo[254296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:12:18 compute-0 sudo[254296]: pam_unix(sudo:session): session closed for user root
Dec 13 04:12:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 248 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 205 op/s
Dec 13 04:12:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:12:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:12:20 compute-0 ceph-mon[75071]: pgmap v1055: 305 pgs: 305 active+clean; 248 MiB data, 333 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 7.8 MiB/s wr, 205 op/s
Dec 13 04:12:20 compute-0 nova_compute[243704]: 2025-12-13 04:12:20.653 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:20 compute-0 podman[254321]: 2025-12-13 04:12:20.985272009 +0000 UTC m=+0.109645417 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 13 04:12:21 compute-0 nova_compute[243704]: 2025-12-13 04:12:21.275 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 165 op/s
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.067 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.556 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.557 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.577 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.652 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.653 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:22 compute-0 ceph-mon[75071]: pgmap v1056: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 165 op/s
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.662 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.662 243708 INFO nova.compute.claims [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:12:22 compute-0 nova_compute[243704]: 2025-12-13 04:12:22.807 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2090606762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.349 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.356 243708 DEBUG nova.compute.provider_tree [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.370 243708 DEBUG nova.scheduler.client.report [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.389 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.389 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:12:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 95 op/s
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.464 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.465 243708 DEBUG nova.network.neutron [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.486 243708 INFO nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.510 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.513 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.642 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.643 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.643 243708 INFO nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Creating image(s)
Dec 13 04:12:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2090606762' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.669 243708 DEBUG nova.storage.rbd_utils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b63480b3-4ed8-4311-8742-e954945bfa74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.697 243708 DEBUG nova.storage.rbd_utils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b63480b3-4ed8-4311-8742-e954945bfa74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.718 243708 DEBUG nova.storage.rbd_utils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b63480b3-4ed8-4311-8742-e954945bfa74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.722 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "7d93550044def083fc4d135b7d2958d1562992d1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.722 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "7d93550044def083fc4d135b7d2958d1562992d1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:23 compute-0 nova_compute[243704]: 2025-12-13 04:12:23.728 243708 DEBUG nova.policy [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3a8b8802dc27428e82af3cfee6d31fa0', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '67177602579c40c98ca16df63bff5934', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.041 243708 DEBUG nova.virt.libvirt.imagebackend [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Image locations are: [{'url': 'rbd://437a9f04-06b7-56e3-8a4b-f52a1199dd32/images/89bd4ded-0d1d-43c0-8889-725d21f3df99/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://437a9f04-06b7-56e3-8a4b-f52a1199dd32/images/89bd4ded-0d1d-43c0-8889-725d21f3df99/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.108 243708 DEBUG nova.virt.libvirt.imagebackend [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Selected location: {'url': 'rbd://437a9f04-06b7-56e3-8a4b-f52a1199dd32/images/89bd4ded-0d1d-43c0-8889-725d21f3df99/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.109 243708 DEBUG nova.storage.rbd_utils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] cloning images/89bd4ded-0d1d-43c0-8889-725d21f3df99@snap to None/b63480b3-4ed8-4311-8742-e954945bfa74_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.430 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "7d93550044def083fc4d135b7d2958d1562992d1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.563 243708 DEBUG nova.objects.instance [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'migration_context' on Instance uuid b63480b3-4ed8-4311-8742-e954945bfa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.574 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.575 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Ensure instance console log exists: /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.575 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.576 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:24 compute-0 nova_compute[243704]: 2025-12-13 04:12:24.576 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:24 compute-0 ceph-mon[75071]: pgmap v1057: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 95 op/s
Dec 13 04:12:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 85 op/s
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.756 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.756 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.769 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.799 243708 DEBUG nova.network.neutron [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Successfully created port: eb8d6387-c838-4ece-a475-627751effd8e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.832 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.832 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.844 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.845 243708 INFO nova.compute.claims [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:12:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Dec 13 04:12:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Dec 13 04:12:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Dec 13 04:12:25 compute-0 nova_compute[243704]: 2025-12-13 04:12:25.992 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.278 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/528397625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.609 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.617s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.616 243708 DEBUG nova.compute.provider_tree [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.632 243708 DEBUG nova.scheduler.client.report [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.671 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.839s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.672 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.702 243708 DEBUG nova.network.neutron [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Successfully updated port: eb8d6387-c838-4ece-a475-627751effd8e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:12:26 compute-0 ceph-mon[75071]: pgmap v1058: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 85 op/s
Dec 13 04:12:26 compute-0 ceph-mon[75071]: osdmap e207: 3 total, 3 up, 3 in
Dec 13 04:12:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/528397625' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.724 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.725 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquired lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.725 243708 DEBUG nova.network.neutron [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.732 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.732 243708 DEBUG nova.network.neutron [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.748 243708 INFO nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.770 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.847 243708 DEBUG nova.compute.manager [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-changed-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.847 243708 DEBUG nova.compute.manager [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Refreshing instance network info cache due to event network-changed-eb8d6387-c838-4ece-a475-627751effd8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.849 243708 DEBUG oslo_concurrency.lockutils [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.881 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.883 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.883 243708 INFO nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Creating image(s)
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.909 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.936 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.962 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:26 compute-0 nova_compute[243704]: 2025-12-13 04:12:26.968 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.002 243708 DEBUG nova.network.neutron [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.014 243708 DEBUG nova.policy [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a4e44b54d008406396250df8425c1b48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3d5c68f771584a2e96300880848d9aac', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.068 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.071 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.071 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.072 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.072 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.096 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.100 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.355 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.254s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 85 op/s
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.421 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] resizing rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.503 243708 DEBUG nova.objects.instance [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lazy-loading 'migration_context' on Instance uuid 229ab4a4-03ac-4686-bd94-9b1def9ec619 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.515 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.516 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Ensure instance console log exists: /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.517 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.517 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:27 compute-0 nova_compute[243704]: 2025-12-13 04:12:27.517 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.206 243708 DEBUG nova.network.neutron [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Successfully created port: aa0f542c-094e-48e7-9320-5384b5d4939f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.248 243708 DEBUG nova.network.neutron [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updating instance_info_cache with network_info: [{"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.265 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Releasing lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.266 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Instance network_info: |[{"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.266 243708 DEBUG oslo_concurrency.lockutils [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.267 243708 DEBUG nova.network.neutron [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Refreshing network info cache for port eb8d6387-c838-4ece-a475-627751effd8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.271 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Start _get_guest_xml network_info=[{"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T04:12:12Z,direct_url=<?>,disk_format='raw',id=89bd4ded-0d1d-43c0-8889-725d21f3df99,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1301546276',owner='67177602579c40c98ca16df63bff5934',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T04:12:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '89bd4ded-0d1d-43c0-8889-725d21f3df99'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.279 243708 WARNING nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.286 243708 DEBUG nova.virt.libvirt.host [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.287 243708 DEBUG nova.virt.libvirt.host [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.290 243708 DEBUG nova.virt.libvirt.host [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.291 243708 DEBUG nova.virt.libvirt.host [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.291 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.291 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-12-13T04:12:12Z,direct_url=<?>,disk_format='raw',id=89bd4ded-0d1d-43c0-8889-725d21f3df99,min_disk=1,min_ram=0,name='tempest-TestStampPatternsnapshot-1301546276',owner='67177602579c40c98ca16df63bff5934',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-12-13T04:12:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.291 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.292 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.292 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.292 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.292 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.292 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.292 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.293 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.293 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.293 243708 DEBUG nova.virt.hardware [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.296 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:28 compute-0 ceph-mon[75071]: pgmap v1060: 305 pgs: 305 active+clean; 248 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.9 MiB/s wr, 85 op/s
Dec 13 04:12:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/528496401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.861 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.899 243708 DEBUG nova.storage.rbd_utils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b63480b3-4ed8-4311-8742-e954945bfa74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:28 compute-0 nova_compute[243704]: 2025-12-13 04:12:28.905 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.266 243708 DEBUG nova.network.neutron [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Successfully updated port: aa0f542c-094e-48e7-9320-5384b5d4939f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.281 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.282 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquired lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.282 243708 DEBUG nova.network.neutron [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.350 243708 DEBUG nova.compute.manager [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received event network-changed-aa0f542c-094e-48e7-9320-5384b5d4939f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.350 243708 DEBUG nova.compute.manager [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Refreshing instance network info cache due to event network-changed-aa0f542c-094e-48e7-9320-5384b5d4939f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.351 243708 DEBUG oslo_concurrency.lockutils [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 288 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Dec 13 04:12:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3866052976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.469 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.471 243708 DEBUG nova.virt.libvirt.vif [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-499670233',display_name='tempest-TestStampPattern-server-499670233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-499670233',id=4,image_ref='89bd4ded-0d1d-43c0-8889-725d21f3df99',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM0MU4m92JGsf1a8yXapvFc8NsDZ1Q8MKW+4lJiaibX0u2gJl9+eGG5v/UGq6eQNTuIoD3j4ZepFXbz7/CNW041TuPFq0GKtdS7b3wHX/PQosItTXgdUwOaQctvP0U/Kg==',key_name='tempest-TestStampPattern-343017512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67177602579c40c98ca16df63bff5934',ramdisk_id='',reservation_id='r-jsx7cl2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='b050eb13-af7e-4bd1-88e6-fcb2d100ffc8',image_min_disk='1',image_min_ram='0',image_owner_id='67177602579c40c98ca16df63bff5934',image_owner_project_name='tempest-TestStampPattern-102097859',image_owner_user_name='tempest-TestStampPattern-102097859-project-member',image_user_id='3a8b8802dc27428e82af3cfee6d31fa0',network_allocated='True',owner_project_name='tempest-TestStampPattern-102097859',owner_user_name='tempest-TestStampPattern-102097859-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:12:23Z,user_data=None,user_id='3a8b8802dc27428e82af3cfee6d31fa0',uuid=b63480b3-4ed8-4311-8742-e954945bfa74,vcpu_model
=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.471 243708 DEBUG nova.network.os_vif_util [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converting VIF {"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.472 243708 DEBUG nova.network.os_vif_util [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:18,bridge_name='br-int',has_traffic_filtering=True,id=eb8d6387-c838-4ece-a475-627751effd8e,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8d6387-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.474 243708 DEBUG nova.objects.instance [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'pci_devices' on Instance uuid b63480b3-4ed8-4311-8742-e954945bfa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.496 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <uuid>b63480b3-4ed8-4311-8742-e954945bfa74</uuid>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <name>instance-00000004</name>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <nova:name>tempest-TestStampPattern-server-499670233</nova:name>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:12:28</nova:creationTime>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:user uuid="3a8b8802dc27428e82af3cfee6d31fa0">tempest-TestStampPattern-102097859-project-member</nova:user>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:project uuid="67177602579c40c98ca16df63bff5934">tempest-TestStampPattern-102097859</nova:project>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="89bd4ded-0d1d-43c0-8889-725d21f3df99"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <nova:port uuid="eb8d6387-c838-4ece-a475-627751effd8e">
Dec 13 04:12:29 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <system>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <entry name="serial">b63480b3-4ed8-4311-8742-e954945bfa74</entry>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <entry name="uuid">b63480b3-4ed8-4311-8742-e954945bfa74</entry>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </system>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <os>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   </os>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <features>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   </features>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/b63480b3-4ed8-4311-8742-e954945bfa74_disk">
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       </source>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/b63480b3-4ed8-4311-8742-e954945bfa74_disk.config">
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       </source>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:12:29 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:ee:62:18"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <target dev="tapeb8d6387-c8"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/console.log" append="off"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <video>
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </video>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <input type="keyboard" bus="usb"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:12:29 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:12:29 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:12:29 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:12:29 compute-0 nova_compute[243704]: </domain>
Dec 13 04:12:29 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.497 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Preparing to wait for external event network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.497 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.498 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.498 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.499 243708 DEBUG nova.virt.libvirt.vif [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestStampPattern-server-499670233',display_name='tempest-TestStampPattern-server-499670233',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-499670233',id=4,image_ref='89bd4ded-0d1d-43c0-8889-725d21f3df99',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM0MU4m92JGsf1a8yXapvFc8NsDZ1Q8MKW+4lJiaibX0u2gJl9+eGG5v/UGq6eQNTuIoD3j4ZepFXbz7/CNW041TuPFq0GKtdS7b3wHX/PQosItTXgdUwOaQctvP0U/Kg==',key_name='tempest-TestStampPattern-343017512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='67177602579c40c98ca16df63bff5934',ramdisk_id='',reservation_id='r-jsx7cl2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='b050eb13-af7e-4bd1-88e6-fcb2d100ffc8',image_min_disk='1',image_min_ram='0',image_owner_id='67177602579c40c98ca16df63bff5934',image_owner_project_name='tempest-TestStampPattern-102097859',image_owner_user_name='tempest-TestStampPattern-102097859-project-member',image_user_id='3a8b8802dc27428e82af3cfee6d31fa0',network_allocated='True',owner_project_name='tempest-TestStampPattern-102097859',owner_user_name='tempest-TestStampPattern-102097859-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:12:23Z,user_data=None,user_id='3a8b8802dc27428e82af3cfee6d31fa0',uuid=b63480b3-4ed8-4311-8742-e954945bfa74,
vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.499 243708 DEBUG nova.network.os_vif_util [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converting VIF {"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.500 243708 DEBUG nova.network.os_vif_util [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:18,bridge_name='br-int',has_traffic_filtering=True,id=eb8d6387-c838-4ece-a475-627751effd8e,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8d6387-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.500 243708 DEBUG os_vif [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:18,bridge_name='br-int',has_traffic_filtering=True,id=eb8d6387-c838-4ece-a475-627751effd8e,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8d6387-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.501 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.501 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.502 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.505 243708 DEBUG nova.network.neutron [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.509 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.509 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeb8d6387-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.509 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapeb8d6387-c8, col_values=(('external_ids', {'iface-id': 'eb8d6387-c838-4ece-a475-627751effd8e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:62:18', 'vm-uuid': 'b63480b3-4ed8-4311-8742-e954945bfa74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.511 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:29 compute-0 NetworkManager[48899]: <info>  [1765599149.5119] manager: (tapeb8d6387-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.512 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.521 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.522 243708 INFO os_vif [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:62:18,bridge_name='br-int',has_traffic_filtering=True,id=eb8d6387-c838-4ece-a475-627751effd8e,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8d6387-c8')
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.561 243708 DEBUG nova.network.neutron [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updated VIF entry in instance network info cache for port eb8d6387-c838-4ece-a475-627751effd8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.561 243708 DEBUG nova.network.neutron [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updating instance_info_cache with network_info: [{"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.573 243708 DEBUG oslo_concurrency.lockutils [req-450504b1-f7f2-4f84-8a90-12523c8174f8 req-664f245a-ab52-422f-84a1-32782944bbc6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.577 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.577 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.577 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No VIF found with MAC fa:16:3e:ee:62:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.578 243708 INFO nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Using config drive
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.600 243708 DEBUG nova.storage.rbd_utils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b63480b3-4ed8-4311-8742-e954945bfa74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:29 compute-0 nova_compute[243704]: 2025-12-13 04:12:29.646 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/528496401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3866052976' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.359 243708 INFO nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Creating config drive at /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/disk.config
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.365 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwcyth5yg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.492 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwcyth5yg" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.517 243708 DEBUG nova.storage.rbd_utils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] rbd image b63480b3-4ed8-4311-8742-e954945bfa74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.520 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/disk.config b63480b3-4ed8-4311-8742-e954945bfa74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.642 243708 DEBUG oslo_concurrency.processutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/disk.config b63480b3-4ed8-4311-8742-e954945bfa74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.643 243708 INFO nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Deleting local config drive /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74/disk.config because it was imported into RBD.
Dec 13 04:12:30 compute-0 kernel: tapeb8d6387-c8: entered promiscuous mode
Dec 13 04:12:30 compute-0 NetworkManager[48899]: <info>  [1765599150.7037] manager: (tapeb8d6387-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.704 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:30 compute-0 ovn_controller[145204]: 2025-12-13T04:12:30Z|00051|binding|INFO|Claiming lport eb8d6387-c838-4ece-a475-627751effd8e for this chassis.
Dec 13 04:12:30 compute-0 ovn_controller[145204]: 2025-12-13T04:12:30Z|00052|binding|INFO|eb8d6387-c838-4ece-a475-627751effd8e: Claiming fa:16:3e:ee:62:18 10.100.0.4
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.716 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:18 10.100.0.4'], port_security=['fa:16:3e:ee:62:18 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b63480b3-4ed8-4311-8742-e954945bfa74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67177602579c40c98ca16df63bff5934', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cf55ee30-2a30-425f-af3c-50a725a59497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15f16d90-5305-4b52-8186-db63310acee6, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=eb8d6387-c838-4ece-a475-627751effd8e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.717 154842 INFO neutron.agent.ovn.metadata.agent [-] Port eb8d6387-c838-4ece-a475-627751effd8e in datapath 6acff72d-3b46-4d95-b32d-8f79ce87caf9 bound to our chassis
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.720 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6acff72d-3b46-4d95-b32d-8f79ce87caf9
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.722 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:30 compute-0 ovn_controller[145204]: 2025-12-13T04:12:30Z|00053|binding|INFO|Setting lport eb8d6387-c838-4ece-a475-627751effd8e ovn-installed in OVS
Dec 13 04:12:30 compute-0 ovn_controller[145204]: 2025-12-13T04:12:30Z|00054|binding|INFO|Setting lport eb8d6387-c838-4ece-a475-627751effd8e up in Southbound
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.725 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.727 243708 DEBUG nova.network.neutron [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Updating instance_info_cache with network_info: [{"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.741 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[53a616af-7a7e-45c7-8a76-0fc01116f230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:30 compute-0 ceph-mon[75071]: pgmap v1061: 305 pgs: 305 active+clean; 288 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.5 MiB/s wr, 55 op/s
Dec 13 04:12:30 compute-0 systemd-machined[206767]: New machine qemu-4-instance-00000004.
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.754 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Releasing lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.754 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Instance network_info: |[{"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.755 243708 DEBUG oslo_concurrency.lockutils [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.755 243708 DEBUG nova.network.neutron [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Refreshing network info cache for port aa0f542c-094e-48e7-9320-5384b5d4939f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.758 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Start _get_guest_xml network_info=[{"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:12:30 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.763 243708 WARNING nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:12:30 compute-0 systemd-udevd[254871]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.775 243708 DEBUG nova.virt.libvirt.host [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.776 243708 DEBUG nova.virt.libvirt.host [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.779 243708 DEBUG nova.virt.libvirt.host [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.778 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[2320cb00-df7a-457b-9900-c7bf4f9263da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.780 243708 DEBUG nova.virt.libvirt.host [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.780 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.780 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.781 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.781 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.781 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.782 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.782 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.782 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.782 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.782 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.783 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.783 243708 DEBUG nova.virt.hardware [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.786 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[0dac37cd-3969-4bc1-a4c6-2f2e28b018dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.786 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:30 compute-0 NetworkManager[48899]: <info>  [1765599150.7965] device (tapeb8d6387-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:12:30 compute-0 NetworkManager[48899]: <info>  [1765599150.7975] device (tapeb8d6387-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.815 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[8fabd549-dbbb-4d30-b184-2f514b18e85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.835 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1ba2ce11-084a-4be6-85b1-03c1b56ab31b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6acff72d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:c5:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384085, 'reachable_time': 26332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254882, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.850 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f2cd84bb-591f-4ec4-b3c6-e003fd7fe3b0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6acff72d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384098, 'tstamp': 384098}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254884, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6acff72d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384101, 'tstamp': 384101}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254884, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.853 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6acff72d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:30 compute-0 nova_compute[243704]: 2025-12-13 04:12:30.854 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.855 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6acff72d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.856 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.856 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6acff72d-30, col_values=(('external_ids', {'iface-id': '09de48ad-091f-4941-8093-f0d00d05e24a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:30.857 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:12:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.177 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599151.1771443, b63480b3-4ed8-4311-8742-e954945bfa74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.178 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] VM Started (Lifecycle Event)
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.202 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.207 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599151.1772664, b63480b3-4ed8-4311-8742-e954945bfa74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.208 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] VM Paused (Lifecycle Event)
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.233 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.236 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.263 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:12:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2542657456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.322 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.341 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.345 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 71 op/s
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.574 243708 DEBUG nova.compute.manager [req-90ef4a53-e03e-4634-80f7-528729e26ebc req-f6c41b47-4301-4f61-839a-8fc1f356e449 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.575 243708 DEBUG oslo_concurrency.lockutils [req-90ef4a53-e03e-4634-80f7-528729e26ebc req-f6c41b47-4301-4f61-839a-8fc1f356e449 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.575 243708 DEBUG oslo_concurrency.lockutils [req-90ef4a53-e03e-4634-80f7-528729e26ebc req-f6c41b47-4301-4f61-839a-8fc1f356e449 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.575 243708 DEBUG oslo_concurrency.lockutils [req-90ef4a53-e03e-4634-80f7-528729e26ebc req-f6c41b47-4301-4f61-839a-8fc1f356e449 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.575 243708 DEBUG nova.compute.manager [req-90ef4a53-e03e-4634-80f7-528729e26ebc req-f6c41b47-4301-4f61-839a-8fc1f356e449 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Processing event network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.576 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.588 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599151.5869348, b63480b3-4ed8-4311-8742-e954945bfa74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.589 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] VM Resumed (Lifecycle Event)
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.605 243708 DEBUG nova.virt.libvirt.driver [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.619 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.628 243708 INFO nova.virt.libvirt.driver [-] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Instance spawned successfully.
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.628 243708 INFO nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Took 7.99 seconds to spawn the instance on the hypervisor.
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.629 243708 DEBUG nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.631 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.665 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.698 243708 INFO nova.compute.manager [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Took 9.08 seconds to build instance.
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.718 243708 DEBUG oslo_concurrency.lockutils [None req-b3e0abb5-d8b5-44a3-9eb5-c4a8fb713382 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2542657456' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4173638968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.878 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.880 243708 DEBUG nova.virt.libvirt.vif [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-213628157',display_name='tempest-TestEncryptedCinderVolumes-server-213628157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-213628157',id=5,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFe1neUOmXvlN7mo/X/VaeV26IwttYxfT8v2SqaPs42uBESvxuJP4y7d51l+slJFM6+MMjuxdFlG0Cx1rHp3JP6TcqS5LxR7Tv6ybWdAEHIhn9jig3p1gj4C5ttTqa1FZA==',key_name='tempest-keypair-1182106795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d5c68f771584a2e96300880848d9aac',ramdisk_id='',reservation_id='r-w2n6xgdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1229723829',owner_user_name='tempest-TestEncryptedCinderVolumes-1229723829-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:12:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a4e44b54d008406396250df8425c1b48',uuid=229ab4a4-03ac-4686-bd94-9b1def9ec619,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.880 243708 DEBUG nova.network.os_vif_util [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Converting VIF {"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.881 243708 DEBUG nova.network.os_vif_util [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:66,bridge_name='br-int',has_traffic_filtering=True,id=aa0f542c-094e-48e7-9320-5384b5d4939f,network=Network(c6243b53-29fa-418e-8550-3cbf311cc62c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0f542c-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.882 243708 DEBUG nova.objects.instance [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lazy-loading 'pci_devices' on Instance uuid 229ab4a4-03ac-4686-bd94-9b1def9ec619 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.899 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <uuid>229ab4a4-03ac-4686-bd94-9b1def9ec619</uuid>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <name>instance-00000005</name>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-213628157</nova:name>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:12:30</nova:creationTime>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:user uuid="a4e44b54d008406396250df8425c1b48">tempest-TestEncryptedCinderVolumes-1229723829-project-member</nova:user>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:project uuid="3d5c68f771584a2e96300880848d9aac">tempest-TestEncryptedCinderVolumes-1229723829</nova:project>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <nova:port uuid="aa0f542c-094e-48e7-9320-5384b5d4939f">
Dec 13 04:12:31 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <system>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <entry name="serial">229ab4a4-03ac-4686-bd94-9b1def9ec619</entry>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <entry name="uuid">229ab4a4-03ac-4686-bd94-9b1def9ec619</entry>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </system>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <os>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   </os>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <features>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   </features>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/229ab4a4-03ac-4686-bd94-9b1def9ec619_disk">
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       </source>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/229ab4a4-03ac-4686-bd94-9b1def9ec619_disk.config">
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       </source>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:12:31 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:e3:9a:66"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <target dev="tapaa0f542c-09"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/console.log" append="off"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <video>
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </video>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:12:31 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:12:31 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:12:31 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:12:31 compute-0 nova_compute[243704]: </domain>
Dec 13 04:12:31 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.904 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Preparing to wait for external event network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.904 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.904 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.905 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.905 243708 DEBUG nova.virt.libvirt.vif [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-213628157',display_name='tempest-TestEncryptedCinderVolumes-server-213628157',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-213628157',id=5,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFe1neUOmXvlN7mo/X/VaeV26IwttYxfT8v2SqaPs42uBESvxuJP4y7d51l+slJFM6+MMjuxdFlG0Cx1rHp3JP6TcqS5LxR7Tv6ybWdAEHIhn9jig3p1gj4C5ttTqa1FZA==',key_name='tempest-keypair-1182106795',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3d5c68f771584a2e96300880848d9aac',ramdisk_id='',reservation_id='r-w2n6xgdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1229723829',owner_user_name='tempest-TestEncryptedCinderVolumes-1229723829-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:12:26Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a4e44b54d008406396250df8425c1b48',uuid=229ab4a4-03ac-4686-bd94-9b1def9ec619,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.906 243708 DEBUG nova.network.os_vif_util [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Converting VIF {"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.906 243708 DEBUG nova.network.os_vif_util [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:66,bridge_name='br-int',has_traffic_filtering=True,id=aa0f542c-094e-48e7-9320-5384b5d4939f,network=Network(c6243b53-29fa-418e-8550-3cbf311cc62c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0f542c-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.907 243708 DEBUG os_vif [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:66,bridge_name='br-int',has_traffic_filtering=True,id=aa0f542c-094e-48e7-9320-5384b5d4939f,network=Network(c6243b53-29fa-418e-8550-3cbf311cc62c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0f542c-09') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.907 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.908 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.908 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.911 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.911 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaa0f542c-09, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.912 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapaa0f542c-09, col_values=(('external_ids', {'iface-id': 'aa0f542c-094e-48e7-9320-5384b5d4939f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e3:9a:66', 'vm-uuid': '229ab4a4-03ac-4686-bd94-9b1def9ec619'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:31 compute-0 NetworkManager[48899]: <info>  [1765599151.9141] manager: (tapaa0f542c-09): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/43)
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.913 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.917 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.918 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.919 243708 INFO os_vif [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e3:9a:66,bridge_name='br-int',has_traffic_filtering=True,id=aa0f542c-094e-48e7-9320-5384b5d4939f,network=Network(c6243b53-29fa-418e-8550-3cbf311cc62c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0f542c-09')
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.964 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.965 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.965 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] No VIF found with MAC fa:16:3e:e3:9a:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.965 243708 INFO nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Using config drive
Dec 13 04:12:31 compute-0 nova_compute[243704]: 2025-12-13 04:12:31.986 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:32 compute-0 podman[254991]: 2025-12-13 04:12:32.052118056 +0000 UTC m=+0.092775909 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 04:12:32 compute-0 nova_compute[243704]: 2025-12-13 04:12:32.070 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:32 compute-0 nova_compute[243704]: 2025-12-13 04:12:32.500 243708 INFO nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Creating config drive at /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/disk.config
Dec 13 04:12:32 compute-0 nova_compute[243704]: 2025-12-13 04:12:32.512 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_bfpfspu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:32 compute-0 ceph-mon[75071]: pgmap v1062: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 71 op/s
Dec 13 04:12:32 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4173638968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.024 243708 DEBUG nova.network.neutron [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Updated VIF entry in instance network info cache for port aa0f542c-094e-48e7-9320-5384b5d4939f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.024 243708 DEBUG nova.network.neutron [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Updating instance_info_cache with network_info: [{"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.039 243708 DEBUG oslo_concurrency.lockutils [req-a4d8c60c-acdf-4c7d-a5b7-c23962d48bb8 req-6033fb8c-0da6-47df-8839-60d396f63b05 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.365 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_bfpfspu" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.390 243708 DEBUG nova.storage.rbd_utils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] rbd image 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.393 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/disk.config 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 71 op/s
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.515 243708 DEBUG oslo_concurrency.processutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/disk.config 229ab4a4-03ac-4686-bd94-9b1def9ec619_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.516 243708 INFO nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Deleting local config drive /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619/disk.config because it was imported into RBD.
Dec 13 04:12:33 compute-0 kernel: tapaa0f542c-09: entered promiscuous mode
Dec 13 04:12:33 compute-0 NetworkManager[48899]: <info>  [1765599153.5580] manager: (tapaa0f542c-09): new Tun device (/org/freedesktop/NetworkManager/Devices/44)
Dec 13 04:12:33 compute-0 ovn_controller[145204]: 2025-12-13T04:12:33Z|00055|binding|INFO|Claiming lport aa0f542c-094e-48e7-9320-5384b5d4939f for this chassis.
Dec 13 04:12:33 compute-0 ovn_controller[145204]: 2025-12-13T04:12:33Z|00056|binding|INFO|aa0f542c-094e-48e7-9320-5384b5d4939f: Claiming fa:16:3e:e3:9a:66 10.100.0.10
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.559 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:33 compute-0 NetworkManager[48899]: <info>  [1765599153.5683] device (tapaa0f542c-09): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:12:33 compute-0 NetworkManager[48899]: <info>  [1765599153.5706] device (tapaa0f542c-09): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.571 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:9a:66 10.100.0.10'], port_security=['fa:16:3e:e3:9a:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '229ab4a4-03ac-4686-bd94-9b1def9ec619', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6243b53-29fa-418e-8550-3cbf311cc62c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d5c68f771584a2e96300880848d9aac', 'neutron:revision_number': '2', 'neutron:security_group_ids': '351f9ce1-80fd-4c08-ab4a-72ebe95ba7f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bdb2c1ea-01c6-49bd-824b-5d10b545a135, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=aa0f542c-094e-48e7-9320-5384b5d4939f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.572 154842 INFO neutron.agent.ovn.metadata.agent [-] Port aa0f542c-094e-48e7-9320-5384b5d4939f in datapath c6243b53-29fa-418e-8550-3cbf311cc62c bound to our chassis
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.573 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c6243b53-29fa-418e-8550-3cbf311cc62c
Dec 13 04:12:33 compute-0 ovn_controller[145204]: 2025-12-13T04:12:33Z|00057|binding|INFO|Setting lport aa0f542c-094e-48e7-9320-5384b5d4939f ovn-installed in OVS
Dec 13 04:12:33 compute-0 ovn_controller[145204]: 2025-12-13T04:12:33Z|00058|binding|INFO|Setting lport aa0f542c-094e-48e7-9320-5384b5d4939f up in Southbound
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.577 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.584 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c706d436-2326-4abf-8cb0-96b376569818]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.585 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc6243b53-21 in ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.587 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc6243b53-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.587 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[aceb0187-8294-4b71-997a-4a4ade0a5601]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.587 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[883c07db-cee7-4d31-83bc-dff6e9ac2140]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 systemd-machined[206767]: New machine qemu-5-instance-00000005.
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.602 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[ed3bee52-9fed-4a4a-b374-e4f2ac17b9ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.626 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4b46f8-da97-479f-acb0-5ea769f7bad1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.652 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[e93343cf-7849-41d6-b1e4-10d0207ad306]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.657 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[25371d8e-be04-4609-961b-d987fa418689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 NetworkManager[48899]: <info>  [1765599153.6599] manager: (tapc6243b53-20): new Veth device (/org/freedesktop/NetworkManager/Devices/45)
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.690 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[8d21d503-3da8-4b95-aa71-b372383faea1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.693 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[765b4a43-3e4e-4f48-9573-9645c052c3eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 NetworkManager[48899]: <info>  [1765599153.7131] device (tapc6243b53-20): carrier: link connected
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.719 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b6ae8cc5-9dfc-4b87-8d82-0157384ab299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.737 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6bbaa9a3-a9aa-4e57-a563-90b76978b793]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6243b53-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:3e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389239, 'reachable_time': 38975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255111, 'error': None, 'target': 'ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.754 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3e1fb43a-8c9f-4d54-91f2-bbdde3f1a0ef]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef0:3e4a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 389239, 'tstamp': 389239}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255112, 'error': None, 'target': 'ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.774 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4c602b51-1f39-4fda-abb4-859614d8383c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc6243b53-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f0:3e:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389239, 'reachable_time': 38975, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255113, 'error': None, 'target': 'ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.803 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[519db08e-c8bd-4994-8ae7-fe41db47b5aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.857 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a6aa3a96-87ab-4495-95bc-c8a9c102d6df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.859 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6243b53-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.859 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.860 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6243b53-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:33 compute-0 NetworkManager[48899]: <info>  [1765599153.8624] manager: (tapc6243b53-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.861 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:33 compute-0 kernel: tapc6243b53-20: entered promiscuous mode
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.864 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.865 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc6243b53-20, col_values=(('external_ids', {'iface-id': 'ac0cc680-76fb-471b-ad7d-5ee0e48ab157'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.866 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:33 compute-0 ovn_controller[145204]: 2025-12-13T04:12:33Z|00059|binding|INFO|Releasing lport ac0cc680-76fb-471b-ad7d-5ee0e48ab157 from this chassis (sb_readonly=0)
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.867 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.868 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c6243b53-29fa-418e-8550-3cbf311cc62c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c6243b53-29fa-418e-8550-3cbf311cc62c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.869 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[02cb329d-bd05-49dd-944b-825e10b3ee5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.870 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-c6243b53-29fa-418e-8550-3cbf311cc62c
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/c6243b53-29fa-418e-8550-3cbf311cc62c.pid.haproxy
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID c6243b53-29fa-418e-8550-3cbf311cc62c
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:12:33 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:33.872 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c', 'env', 'PROCESS_TAG=haproxy-c6243b53-29fa-418e-8550-3cbf311cc62c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c6243b53-29fa-418e-8550-3cbf311cc62c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:12:33 compute-0 nova_compute[243704]: 2025-12-13 04:12:33.881 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.219 243708 DEBUG nova.compute.manager [req-c44aecbf-5da3-4201-b108-193b627f2b76 req-587b18ae-7da2-4f82-b3e4-6ba4312d48ae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.219 243708 DEBUG oslo_concurrency.lockutils [req-c44aecbf-5da3-4201-b108-193b627f2b76 req-587b18ae-7da2-4f82-b3e4-6ba4312d48ae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.220 243708 DEBUG oslo_concurrency.lockutils [req-c44aecbf-5da3-4201-b108-193b627f2b76 req-587b18ae-7da2-4f82-b3e4-6ba4312d48ae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.220 243708 DEBUG oslo_concurrency.lockutils [req-c44aecbf-5da3-4201-b108-193b627f2b76 req-587b18ae-7da2-4f82-b3e4-6ba4312d48ae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.220 243708 DEBUG nova.compute.manager [req-c44aecbf-5da3-4201-b108-193b627f2b76 req-587b18ae-7da2-4f82-b3e4-6ba4312d48ae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] No waiting events found dispatching network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.220 243708 WARNING nova.compute.manager [req-c44aecbf-5da3-4201-b108-193b627f2b76 req-587b18ae-7da2-4f82-b3e4-6ba4312d48ae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received unexpected event network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e for instance with vm_state active and task_state None.
Dec 13 04:12:34 compute-0 podman[255163]: 2025-12-13 04:12:34.303466556 +0000 UTC m=+0.053939335 container create 0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:12:34 compute-0 systemd[1]: Started libpod-conmon-0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a.scope.
Dec 13 04:12:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:34 compute-0 podman[255163]: 2025-12-13 04:12:34.27562103 +0000 UTC m=+0.026093799 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:12:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ec5f0f0001d7c4282644cd04b9c476ae210667f51a4cc71f27fea2f42f06f16e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.383 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599154.3823543, 229ab4a4-03ac-4686-bd94-9b1def9ec619 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.383 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] VM Started (Lifecycle Event)
Dec 13 04:12:34 compute-0 podman[255163]: 2025-12-13 04:12:34.391847775 +0000 UTC m=+0.142320584 container init 0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 04:12:34 compute-0 podman[255163]: 2025-12-13 04:12:34.398820544 +0000 UTC m=+0.149293323 container start 0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.403 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.412 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599154.3835716, 229ab4a4-03ac-4686-bd94-9b1def9ec619 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.413 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] VM Paused (Lifecycle Event)
Dec 13 04:12:34 compute-0 neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c[255201]: [NOTICE]   (255206) : New worker (255208) forked
Dec 13 04:12:34 compute-0 neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c[255201]: [NOTICE]   (255206) : Loading success.
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.426 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.429 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:12:34 compute-0 nova_compute[243704]: 2025-12-13 04:12:34.467 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:12:34 compute-0 ceph-mon[75071]: pgmap v1063: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 2.2 MiB/s wr, 71 op/s
Dec 13 04:12:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:35.087 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:35.088 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:35.088 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 170 op/s
Dec 13 04:12:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:36 compute-0 ceph-mon[75071]: pgmap v1064: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 170 op/s
Dec 13 04:12:36 compute-0 nova_compute[243704]: 2025-12-13 04:12:36.916 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:37 compute-0 nova_compute[243704]: 2025-12-13 04:12:37.072 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:37 compute-0 nova_compute[243704]: 2025-12-13 04:12:37.340 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 147 op/s
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.243 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.263 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Triggering sync for uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.264 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Triggering sync for uuid b63480b3-4ed8-4311-8742-e954945bfa74 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.264 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Triggering sync for uuid 229ab4a4-03ac-4686-bd94-9b1def9ec619 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.264 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.265 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.265 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.265 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.265 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.305 243708 DEBUG nova.compute.manager [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-changed-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.305 243708 DEBUG nova.compute.manager [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Refreshing instance network info cache due to event network-changed-eb8d6387-c838-4ece-a475-627751effd8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.305 243708 DEBUG oslo_concurrency.lockutils [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.306 243708 DEBUG oslo_concurrency.lockutils [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.306 243708 DEBUG nova.network.neutron [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Refreshing network info cache for port eb8d6387-c838-4ece-a475-627751effd8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.308 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:38 compute-0 nova_compute[243704]: 2025-12-13 04:12:38.309 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:38 compute-0 ceph-mon[75071]: pgmap v1065: 305 pgs: 305 active+clean; 295 MiB data, 371 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.9 MiB/s wr, 147 op/s
Dec 13 04:12:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 295 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.417 243708 DEBUG nova.compute.manager [req-49b0a6fd-76aa-44ed-b104-79cc5ef976d0 req-0703d540-c9bc-451f-8790-69e88c2e883a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received event network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.418 243708 DEBUG oslo_concurrency.lockutils [req-49b0a6fd-76aa-44ed-b104-79cc5ef976d0 req-0703d540-c9bc-451f-8790-69e88c2e883a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.418 243708 DEBUG oslo_concurrency.lockutils [req-49b0a6fd-76aa-44ed-b104-79cc5ef976d0 req-0703d540-c9bc-451f-8790-69e88c2e883a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.419 243708 DEBUG oslo_concurrency.lockutils [req-49b0a6fd-76aa-44ed-b104-79cc5ef976d0 req-0703d540-c9bc-451f-8790-69e88c2e883a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.419 243708 DEBUG nova.compute.manager [req-49b0a6fd-76aa-44ed-b104-79cc5ef976d0 req-0703d540-c9bc-451f-8790-69e88c2e883a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Processing event network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.420 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.423 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599160.4229624, 229ab4a4-03ac-4686-bd94-9b1def9ec619 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.423 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] VM Resumed (Lifecycle Event)
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.429 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.432 243708 INFO nova.virt.libvirt.driver [-] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Instance spawned successfully.
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.432 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.458 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.463 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.468 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.468 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.469 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.469 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.470 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.470 243708 DEBUG nova.virt.libvirt.driver [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.498 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.525 243708 INFO nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Took 13.64 seconds to spawn the instance on the hypervisor.
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.526 243708 DEBUG nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:12:40
Dec 13 04:12:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:12:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:12:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'vms', '.rgw.root']
Dec 13 04:12:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.584 243708 INFO nova.compute.manager [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Took 14.77 seconds to build instance.
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.596 243708 DEBUG oslo_concurrency.lockutils [None req-cbbd4836-4959-4421-a5ac-eb40345c32b2 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.596 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 2.331s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.596 243708 INFO nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.596 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:40 compute-0 ceph-mon[75071]: pgmap v1066: 305 pgs: 305 active+clean; 295 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 142 op/s
Dec 13 04:12:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:40 compute-0 podman[255217]: 2025-12-13 04:12:40.926555772 +0000 UTC m=+0.069336153 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.964 243708 DEBUG nova.network.neutron [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updated VIF entry in instance network info cache for port eb8d6387-c838-4ece-a475-627751effd8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.964 243708 DEBUG nova.network.neutron [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updating instance_info_cache with network_info: [{"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.984 243708 DEBUG oslo_concurrency.lockutils [req-e7da6f37-8e54-416a-9dbf-d173b44f9841 req-74b9c354-b740-449f-93f5-c60f4b0d1a66 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.999 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:40 compute-0 nova_compute[243704]: 2025-12-13 04:12:40.999 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.018 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.080 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.081 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.088 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.089 243708 INFO nova.compute.claims [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.230 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1067: 305 pgs: 305 active+clean; 295 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 545 KiB/s wr, 106 op/s
Dec 13 04:12:41 compute-0 ceph-mon[75071]: pgmap v1067: 305 pgs: 305 active+clean; 295 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 545 KiB/s wr, 106 op/s
Dec 13 04:12:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/52640602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.916 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.920 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.924 243708 DEBUG nova.compute.provider_tree [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.937 243708 DEBUG nova.scheduler.client.report [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.955 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.956 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.991 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:12:41 compute-0 nova_compute[243704]: 2025-12-13 04:12:41.992 243708 DEBUG nova.network.neutron [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.006 243708 INFO nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.021 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.073 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.098 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.099 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.100 243708 INFO nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Creating image(s)
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.121 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.144 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.166 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.170 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.193 243708 DEBUG nova.policy [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f960834d4908443d9efd683028b08468', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c28beacf423b4e2392d93f6083d70ed7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.231 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.232 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.232 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.233 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.255 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.258 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 ce7de347-5ab9-49da-8b43-01bcb404b401_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.485 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 ce7de347-5ab9-49da-8b43-01bcb404b401_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.227s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.540 243708 DEBUG nova.compute.manager [req-ee85f51c-24d6-4ce7-88fc-5121fc397610 req-c2f80919-ae3f-4076-b0b7-403f03a5384c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received event network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.540 243708 DEBUG oslo_concurrency.lockutils [req-ee85f51c-24d6-4ce7-88fc-5121fc397610 req-c2f80919-ae3f-4076-b0b7-403f03a5384c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.541 243708 DEBUG oslo_concurrency.lockutils [req-ee85f51c-24d6-4ce7-88fc-5121fc397610 req-c2f80919-ae3f-4076-b0b7-403f03a5384c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.541 243708 DEBUG oslo_concurrency.lockutils [req-ee85f51c-24d6-4ce7-88fc-5121fc397610 req-c2f80919-ae3f-4076-b0b7-403f03a5384c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.541 243708 DEBUG nova.compute.manager [req-ee85f51c-24d6-4ce7-88fc-5121fc397610 req-c2f80919-ae3f-4076-b0b7-403f03a5384c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] No waiting events found dispatching network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.541 243708 WARNING nova.compute.manager [req-ee85f51c-24d6-4ce7-88fc-5121fc397610 req-c2f80919-ae3f-4076-b0b7-403f03a5384c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received unexpected event network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f for instance with vm_state active and task_state None.
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.574 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] resizing rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.654 243708 DEBUG nova.objects.instance [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lazy-loading 'migration_context' on Instance uuid ce7de347-5ab9-49da-8b43-01bcb404b401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.668 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.669 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Ensure instance console log exists: /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.670 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.670 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.670 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:12:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:12:42 compute-0 nova_compute[243704]: 2025-12-13 04:12:42.803 243708 DEBUG nova.network.neutron [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Successfully created port: 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:12:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/52640602' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1068: 305 pgs: 305 active+clean; 295 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 86 op/s
Dec 13 04:12:43 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:43.844 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:12:43 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:43.845 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:12:43 compute-0 ceph-mon[75071]: pgmap v1068: 305 pgs: 305 active+clean; 295 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 27 KiB/s wr, 86 op/s
Dec 13 04:12:43 compute-0 nova_compute[243704]: 2025-12-13 04:12:43.856 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:44 compute-0 ovn_controller[145204]: 2025-12-13T04:12:44Z|00010|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.4
Dec 13 04:12:44 compute-0 ovn_controller[145204]: 2025-12-13T04:12:44Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ee:62:18 10.100.0.4
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.406 243708 DEBUG nova.network.neutron [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Successfully updated port: 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.420 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "refresh_cache-ce7de347-5ab9-49da-8b43-01bcb404b401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.420 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquired lock "refresh_cache-ce7de347-5ab9-49da-8b43-01bcb404b401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.420 243708 DEBUG nova.network.neutron [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.582 243708 DEBUG nova.network.neutron [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.632 243708 DEBUG nova.compute.manager [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received event network-changed-aa0f542c-094e-48e7-9320-5384b5d4939f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.632 243708 DEBUG nova.compute.manager [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Refreshing instance network info cache due to event network-changed-aa0f542c-094e-48e7-9320-5384b5d4939f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.632 243708 DEBUG oslo_concurrency.lockutils [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.633 243708 DEBUG oslo_concurrency.lockutils [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:44 compute-0 nova_compute[243704]: 2025-12-13 04:12:44.633 243708 DEBUG nova.network.neutron [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Refreshing network info cache for port aa0f542c-094e-48e7-9320-5384b5d4939f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:12:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 356 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 224 op/s
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.447 243708 DEBUG nova.network.neutron [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Updating instance_info_cache with network_info: [{"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.468 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Releasing lock "refresh_cache-ce7de347-5ab9-49da-8b43-01bcb404b401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.468 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Instance network_info: |[{"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.470 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Start _get_guest_xml network_info=[{"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.473 243708 WARNING nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.478 243708 DEBUG nova.virt.libvirt.host [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.479 243708 DEBUG nova.virt.libvirt.host [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.483 243708 DEBUG nova.virt.libvirt.host [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.483 243708 DEBUG nova.virt.libvirt.host [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.484 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.484 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.485 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.485 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.485 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.485 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.485 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.485 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.486 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.486 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.486 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.486 243708 DEBUG nova.virt.hardware [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.489 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:12:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1223177368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:12:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:12:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1223177368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:12:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.988 243708 DEBUG nova.network.neutron [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Updated VIF entry in instance network info cache for port aa0f542c-094e-48e7-9320-5384b5d4939f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:12:45 compute-0 nova_compute[243704]: 2025-12-13 04:12:45.989 243708 DEBUG nova.network.neutron [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Updating instance_info_cache with network_info: [{"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.004 243708 DEBUG oslo_concurrency.lockutils [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-229ab4a4-03ac-4686-bd94-9b1def9ec619" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.004 243708 DEBUG nova.compute.manager [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received event network-changed-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.004 243708 DEBUG nova.compute.manager [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Refreshing instance network info cache due to event network-changed-98c694f1-e02c-4f04-8cf3-5d4c0673ec79. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.004 243708 DEBUG oslo_concurrency.lockutils [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-ce7de347-5ab9-49da-8b43-01bcb404b401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.005 243708 DEBUG oslo_concurrency.lockutils [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-ce7de347-5ab9-49da-8b43-01bcb404b401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.005 243708 DEBUG nova.network.neutron [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Refreshing network info cache for port 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:12:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838511064' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.042 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.060 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.063 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:46 compute-0 ceph-mon[75071]: pgmap v1069: 305 pgs: 305 active+clean; 356 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 4.7 MiB/s rd, 2.3 MiB/s wr, 224 op/s
Dec 13 04:12:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1223177368' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:12:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1223177368' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:12:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1838511064' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3516646608' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.619 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.621 243708 DEBUG nova.virt.libvirt.vif [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:12:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-242264721',display_name='tempest-VolumesActionsTest-instance-242264721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-242264721',id=6,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c28beacf423b4e2392d93f6083d70ed7',ramdisk_id='',reservation_id='r-a90gn1od',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1499519509',owner_user_name='tempest-VolumesActionsTest-1499519509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:12:42Z,user_data=None,user_id='f960834d4908443d9efd683028b08468',uuid=ce7de347-5ab9-49da-8b43-01bcb404b401,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.621 243708 DEBUG nova.network.os_vif_util [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Converting VIF {"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.622 243708 DEBUG nova.network.os_vif_util [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:bf:7e,bridge_name='br-int',has_traffic_filtering=True,id=98c694f1-e02c-4f04-8cf3-5d4c0673ec79,network=Network(ca3fa59c-a888-49ea-a64c-561713a1429e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c694f1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.623 243708 DEBUG nova.objects.instance [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lazy-loading 'pci_devices' on Instance uuid ce7de347-5ab9-49da-8b43-01bcb404b401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.640 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <uuid>ce7de347-5ab9-49da-8b43-01bcb404b401</uuid>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <name>instance-00000006</name>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesActionsTest-instance-242264721</nova:name>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:12:45</nova:creationTime>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:user uuid="f960834d4908443d9efd683028b08468">tempest-VolumesActionsTest-1499519509-project-member</nova:user>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:project uuid="c28beacf423b4e2392d93f6083d70ed7">tempest-VolumesActionsTest-1499519509</nova:project>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <nova:port uuid="98c694f1-e02c-4f04-8cf3-5d4c0673ec79">
Dec 13 04:12:46 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <system>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <entry name="serial">ce7de347-5ab9-49da-8b43-01bcb404b401</entry>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <entry name="uuid">ce7de347-5ab9-49da-8b43-01bcb404b401</entry>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </system>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <os>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   </os>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <features>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   </features>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/ce7de347-5ab9-49da-8b43-01bcb404b401_disk">
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       </source>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/ce7de347-5ab9-49da-8b43-01bcb404b401_disk.config">
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       </source>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:12:46 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:56:bf:7e"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <target dev="tap98c694f1-e0"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/console.log" append="off"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <video>
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </video>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:12:46 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:12:46 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:12:46 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:12:46 compute-0 nova_compute[243704]: </domain>
Dec 13 04:12:46 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.641 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Preparing to wait for external event network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.641 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.641 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.642 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.642 243708 DEBUG nova.virt.libvirt.vif [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:12:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-242264721',display_name='tempest-VolumesActionsTest-instance-242264721',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-242264721',id=6,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c28beacf423b4e2392d93f6083d70ed7',ramdisk_id='',reservation_id='r-a90gn1od',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1499519509',owner_user_name='tempest-VolumesActionsTest-1499519509-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:12:42Z,user_data=None,user_id='f960834d4908443d9efd683028b08468',uuid=ce7de347-5ab9-49da-8b43-01bcb404b401,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.643 243708 DEBUG nova.network.os_vif_util [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Converting VIF {"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.644 243708 DEBUG nova.network.os_vif_util [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:bf:7e,bridge_name='br-int',has_traffic_filtering=True,id=98c694f1-e02c-4f04-8cf3-5d4c0673ec79,network=Network(ca3fa59c-a888-49ea-a64c-561713a1429e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c694f1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.644 243708 DEBUG os_vif [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:bf:7e,bridge_name='br-int',has_traffic_filtering=True,id=98c694f1-e02c-4f04-8cf3-5d4c0673ec79,network=Network(ca3fa59c-a888-49ea-a64c-561713a1429e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c694f1-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.648 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.648 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.649 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.652 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.652 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98c694f1-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.653 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98c694f1-e0, col_values=(('external_ids', {'iface-id': '98c694f1-e02c-4f04-8cf3-5d4c0673ec79', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:bf:7e', 'vm-uuid': 'ce7de347-5ab9-49da-8b43-01bcb404b401'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.655 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:46 compute-0 NetworkManager[48899]: <info>  [1765599166.6564] manager: (tap98c694f1-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.658 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.663 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.664 243708 INFO os_vif [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:bf:7e,bridge_name='br-int',has_traffic_filtering=True,id=98c694f1-e02c-4f04-8cf3-5d4c0673ec79,network=Network(ca3fa59c-a888-49ea-a64c-561713a1429e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c694f1-e0')
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.718 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.718 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.719 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] No VIF found with MAC fa:16:3e:56:bf:7e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.719 243708 INFO nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Using config drive
Dec 13 04:12:46 compute-0 nova_compute[243704]: 2025-12-13 04:12:46.746 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.053 243708 INFO nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Creating config drive at /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/disk.config
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.058 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzdow9yfk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.079 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.104 243708 DEBUG nova.network.neutron [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Updated VIF entry in instance network info cache for port 98c694f1-e02c-4f04-8cf3-5d4c0673ec79. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.105 243708 DEBUG nova.network.neutron [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Updating instance_info_cache with network_info: [{"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.116 243708 DEBUG oslo_concurrency.lockutils [req-a0f60cbc-3fe6-4c06-b208-518a2e493075 req-f01d447d-6459-4aff-be0f-4701ebc9bdda 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-ce7de347-5ab9-49da-8b43-01bcb404b401" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.187 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzdow9yfk" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.213 243708 DEBUG nova.storage.rbd_utils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] rbd image ce7de347-5ab9-49da-8b43-01bcb404b401_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.218 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/disk.config ce7de347-5ab9-49da-8b43-01bcb404b401_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 356 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.3 MiB/s wr, 138 op/s
Dec 13 04:12:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3516646608' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.535 243708 DEBUG oslo_concurrency.processutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/disk.config ce7de347-5ab9-49da-8b43-01bcb404b401_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.317s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.536 243708 INFO nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Deleting local config drive /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401/disk.config because it was imported into RBD.
Dec 13 04:12:47 compute-0 kernel: tap98c694f1-e0: entered promiscuous mode
Dec 13 04:12:47 compute-0 NetworkManager[48899]: <info>  [1765599167.5848] manager: (tap98c694f1-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/48)
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.584 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:47 compute-0 ovn_controller[145204]: 2025-12-13T04:12:47Z|00060|binding|INFO|Claiming lport 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 for this chassis.
Dec 13 04:12:47 compute-0 ovn_controller[145204]: 2025-12-13T04:12:47Z|00061|binding|INFO|98c694f1-e02c-4f04-8cf3-5d4c0673ec79: Claiming fa:16:3e:56:bf:7e 10.100.0.12
Dec 13 04:12:47 compute-0 ovn_controller[145204]: 2025-12-13T04:12:47Z|00062|binding|INFO|Setting lport 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 ovn-installed in OVS
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.606 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:47 compute-0 nova_compute[243704]: 2025-12-13 04:12:47.610 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:47 compute-0 systemd-udevd[255563]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:12:47 compute-0 systemd-machined[206767]: New machine qemu-6-instance-00000006.
Dec 13 04:12:47 compute-0 NetworkManager[48899]: <info>  [1765599167.6301] device (tap98c694f1-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:12:47 compute-0 NetworkManager[48899]: <info>  [1765599167.6311] device (tap98c694f1-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:12:47 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Dec 13 04:12:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:47.848 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:47.910 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:bf:7e 10.100.0.12'], port_security=['fa:16:3e:56:bf:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ce7de347-5ab9-49da-8b43-01bcb404b401', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca3fa59c-a888-49ea-a64c-561713a1429e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c28beacf423b4e2392d93f6083d70ed7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ef548606-30b3-4751-a266-c1ea3f491683', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c8df8a-b5c2-45f0-9e7e-631232ad5125, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=98c694f1-e02c-4f04-8cf3-5d4c0673ec79) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:12:47 compute-0 ovn_controller[145204]: 2025-12-13T04:12:47Z|00063|binding|INFO|Setting lport 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 up in Southbound
Dec 13 04:12:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:47.911 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 in datapath ca3fa59c-a888-49ea-a64c-561713a1429e bound to our chassis
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.064 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ca3fa59c-a888-49ea-a64c-561713a1429e
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.077 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d252eeb7-e680-4a75-925b-92529e1f5784]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.079 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapca3fa59c-a1 in ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.083 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapca3fa59c-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.083 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[eca10b6a-b77a-4433-9c82-7f759024315a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.084 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2aadce5c-9c50-4381-96fd-ef02bbe2a300]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.102 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[9d70e04f-7f94-4b6c-9ace-bf7af14652f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.134 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c78de0fa-8c13-43dc-a02b-9cb6ddd6f290]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.164 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[3956c7d4-0723-446a-963d-bca33fd264a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 systemd-udevd[255565]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:12:48 compute-0 NetworkManager[48899]: <info>  [1765599168.1732] manager: (tapca3fa59c-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/49)
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.175 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[44cec9f0-f6cb-42fe-8fb3-a48927b6173b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.219 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[8a486957-bf8c-4bce-ae32-4860f951edaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.222 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[acaeae78-4d04-41a8-aa20-f130cb7729fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 NetworkManager[48899]: <info>  [1765599168.2491] device (tapca3fa59c-a0): carrier: link connected
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.258 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[e690fe07-6c54-428a-86d1-5b14e3afa024]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.265 243708 DEBUG nova.compute.manager [req-bc810aa2-6fcc-48b2-9375-ec68937ec6a3 req-65a341d0-2954-4feb-8559-82dc6ff1f7bb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received event network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.265 243708 DEBUG oslo_concurrency.lockutils [req-bc810aa2-6fcc-48b2-9375-ec68937ec6a3 req-65a341d0-2954-4feb-8559-82dc6ff1f7bb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.265 243708 DEBUG oslo_concurrency.lockutils [req-bc810aa2-6fcc-48b2-9375-ec68937ec6a3 req-65a341d0-2954-4feb-8559-82dc6ff1f7bb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.266 243708 DEBUG oslo_concurrency.lockutils [req-bc810aa2-6fcc-48b2-9375-ec68937ec6a3 req-65a341d0-2954-4feb-8559-82dc6ff1f7bb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.266 243708 DEBUG nova.compute.manager [req-bc810aa2-6fcc-48b2-9375-ec68937ec6a3 req-65a341d0-2954-4feb-8559-82dc6ff1f7bb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Processing event network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.276 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[67788318-936b-4448-9ed2-0b099d86411e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca3fa59c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:03:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390692, 'reachable_time': 42157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255636, 'error': None, 'target': 'ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.295 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c0d4dfad-1732-4988-9003-3df1055549c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:301'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 390692, 'tstamp': 390692}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255638, 'error': None, 'target': 'ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.315 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7e731730-8d35-43ec-84cb-a78674d457e3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapca3fa59c-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:03:01'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390692, 'reachable_time': 42157, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 255639, 'error': None, 'target': 'ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.345 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2f4b50-7d60-4a8b-9c09-0160f4fc76c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.372 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599168.3719127, ce7de347-5ab9-49da-8b43-01bcb404b401 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.373 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] VM Started (Lifecycle Event)
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.375 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.385 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.388 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.389 243708 INFO nova.virt.libvirt.driver [-] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Instance spawned successfully.
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.390 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.393 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.407 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.408 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599168.3727083, ce7de347-5ab9-49da-8b43-01bcb404b401 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.408 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] VM Paused (Lifecycle Event)
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.413 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.414 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.414 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.415 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.415 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.416 243708 DEBUG nova.virt.libvirt.driver [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.433 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.434 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[dfc6673f-9a2a-4a6a-9b08-95290773cf50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.435 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca3fa59c-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.436 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.436 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599168.377634, ce7de347-5ab9-49da-8b43-01bcb404b401 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.436 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapca3fa59c-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.436 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] VM Resumed (Lifecycle Event)
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.459 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.462 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.471 243708 INFO nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Took 6.37 seconds to spawn the instance on the hypervisor.
Dec 13 04:12:48 compute-0 NetworkManager[48899]: <info>  [1765599168.4719] manager: (tapca3fa59c-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Dec 13 04:12:48 compute-0 kernel: tapca3fa59c-a0: entered promiscuous mode
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.471 243708 DEBUG nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.472 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.474 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.475 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapca3fa59c-a0, col_values=(('external_ids', {'iface-id': '30d1f87b-b0d2-43ec-9fd9-015be8b4f69c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.476 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.479 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ca3fa59c-a888-49ea-a64c-561713a1429e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ca3fa59c-a888-49ea-a64c-561713a1429e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.479 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.479 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f5aaf570-4a4e-4706-9f99-de3244f1dde4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.480 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-ca3fa59c-a888-49ea-a64c-561713a1429e
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/ca3fa59c-a888-49ea-a64c-561713a1429e.pid.haproxy
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID ca3fa59c-a888-49ea-a64c-561713a1429e
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:12:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:48.481 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e', 'env', 'PROCESS_TAG=haproxy-ca3fa59c-a888-49ea-a64c-561713a1429e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ca3fa59c-a888-49ea-a64c-561713a1429e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:12:48 compute-0 ovn_controller[145204]: 2025-12-13T04:12:48Z|00064|binding|INFO|Releasing lport 30d1f87b-b0d2-43ec-9fd9-015be8b4f69c from this chassis (sb_readonly=0)
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.482 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.496 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.525 243708 INFO nova.compute.manager [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Took 7.47 seconds to build instance.
Dec 13 04:12:48 compute-0 ceph-mon[75071]: pgmap v1070: 305 pgs: 305 active+clean; 356 MiB data, 395 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.3 MiB/s wr, 138 op/s
Dec 13 04:12:48 compute-0 nova_compute[243704]: 2025-12-13 04:12:48.539 243708 DEBUG oslo_concurrency.lockutils [None req-f915e8de-6426-4429-a936-c5a1a516fb43 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.540s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:48 compute-0 ovn_controller[145204]: 2025-12-13T04:12:48Z|00012|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.4
Dec 13 04:12:48 compute-0 ovn_controller[145204]: 2025-12-13T04:12:48Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:ee:62:18 10.100.0.4
Dec 13 04:12:48 compute-0 podman[255672]: 2025-12-13 04:12:48.841072179 +0000 UTC m=+0.050274645 container create 9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:12:48 compute-0 systemd[1]: Started libpod-conmon-9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a.scope.
Dec 13 04:12:48 compute-0 podman[255672]: 2025-12-13 04:12:48.814864747 +0000 UTC m=+0.024067263 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:12:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:12:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ebfd3a479109f7000a07a460f60bed1b772add079cfc8f1b4ece4fb8498c4f6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:12:48 compute-0 podman[255672]: 2025-12-13 04:12:48.943480529 +0000 UTC m=+0.152683015 container init 9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:12:48 compute-0 podman[255672]: 2025-12-13 04:12:48.952762491 +0000 UTC m=+0.161964957 container start 9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 04:12:48 compute-0 neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e[255687]: [NOTICE]   (255691) : New worker (255693) forked
Dec 13 04:12:48 compute-0 neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e[255687]: [NOTICE]   (255691) : Loading success.
Dec 13 04:12:49 compute-0 ovn_controller[145204]: 2025-12-13T04:12:49Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ee:62:18 10.100.0.4
Dec 13 04:12:49 compute-0 ovn_controller[145204]: 2025-12-13T04:12:49Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ee:62:18 10.100.0.4
Dec 13 04:12:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 359 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Dec 13 04:12:50 compute-0 nova_compute[243704]: 2025-12-13 04:12:50.338 243708 DEBUG nova.compute.manager [req-321a5505-7c1e-4cd2-b8de-34c9c1ce9694 req-f0a13262-6578-4ad0-8a3d-35d38b7fdd1b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received event network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:50 compute-0 nova_compute[243704]: 2025-12-13 04:12:50.338 243708 DEBUG oslo_concurrency.lockutils [req-321a5505-7c1e-4cd2-b8de-34c9c1ce9694 req-f0a13262-6578-4ad0-8a3d-35d38b7fdd1b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:50 compute-0 nova_compute[243704]: 2025-12-13 04:12:50.339 243708 DEBUG oslo_concurrency.lockutils [req-321a5505-7c1e-4cd2-b8de-34c9c1ce9694 req-f0a13262-6578-4ad0-8a3d-35d38b7fdd1b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:50 compute-0 nova_compute[243704]: 2025-12-13 04:12:50.339 243708 DEBUG oslo_concurrency.lockutils [req-321a5505-7c1e-4cd2-b8de-34c9c1ce9694 req-f0a13262-6578-4ad0-8a3d-35d38b7fdd1b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:50 compute-0 nova_compute[243704]: 2025-12-13 04:12:50.339 243708 DEBUG nova.compute.manager [req-321a5505-7c1e-4cd2-b8de-34c9c1ce9694 req-f0a13262-6578-4ad0-8a3d-35d38b7fdd1b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] No waiting events found dispatching network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:12:50 compute-0 nova_compute[243704]: 2025-12-13 04:12:50.340 243708 WARNING nova.compute.manager [req-321a5505-7c1e-4cd2-b8de-34c9c1ce9694 req-f0a13262-6578-4ad0-8a3d-35d38b7fdd1b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received unexpected event network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 for instance with vm_state active and task_state None.
Dec 13 04:12:50 compute-0 ceph-mon[75071]: pgmap v1071: 305 pgs: 305 active+clean; 359 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.3 MiB/s wr, 155 op/s
Dec 13 04:12:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 359 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 180 op/s
Dec 13 04:12:51 compute-0 nova_compute[243704]: 2025-12-13 04:12:51.657 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:51 compute-0 podman[255702]: 2025-12-13 04:12:51.944077276 +0000 UTC m=+0.093218111 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:12:52 compute-0 nova_compute[243704]: 2025-12-13 04:12:52.077 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015640941581867662 of space, bias 1.0, pg target 0.46922824745602987 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00038730600596390225 of space, bias 1.0, pg target 0.11619180178917067 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.5157130870407987e-07 of space, bias 1.0, pg target 4.547139261122396e-05 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001424478880632682 of space, bias 1.0, pg target 0.42734366418980463 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1997157074071982e-06 of space, bias 4.0, pg target 0.001439658848888638 quantized to 16 (current 16)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:12:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:12:52 compute-0 ceph-mon[75071]: pgmap v1072: 305 pgs: 305 active+clean; 359 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 180 op/s
Dec 13 04:12:52 compute-0 ovn_controller[145204]: 2025-12-13T04:12:52Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e3:9a:66 10.100.0.10
Dec 13 04:12:52 compute-0 ovn_controller[145204]: 2025-12-13T04:12:52Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e3:9a:66 10.100.0.10
Dec 13 04:12:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:12:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836962586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:12:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:12:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836962586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:12:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 359 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 180 op/s
Dec 13 04:12:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1836962586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:12:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1836962586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.184 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.186 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.186 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.187 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.187 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.188 243708 INFO nova.compute.manager [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Terminating instance
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.190 243708 DEBUG nova.compute.manager [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:12:54 compute-0 kernel: tap98c694f1-e0 (unregistering): left promiscuous mode
Dec 13 04:12:54 compute-0 NetworkManager[48899]: <info>  [1765599174.2184] device (tap98c694f1-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.231 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 ovn_controller[145204]: 2025-12-13T04:12:54Z|00065|binding|INFO|Releasing lport 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 from this chassis (sb_readonly=0)
Dec 13 04:12:54 compute-0 ovn_controller[145204]: 2025-12-13T04:12:54Z|00066|binding|INFO|Setting lport 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 down in Southbound
Dec 13 04:12:54 compute-0 ovn_controller[145204]: 2025-12-13T04:12:54Z|00067|binding|INFO|Removing iface tap98c694f1-e0 ovn-installed in OVS
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.246 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:bf:7e 10.100.0.12'], port_security=['fa:16:3e:56:bf:7e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'ce7de347-5ab9-49da-8b43-01bcb404b401', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ca3fa59c-a888-49ea-a64c-561713a1429e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c28beacf423b4e2392d93f6083d70ed7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ef548606-30b3-4751-a266-c1ea3f491683', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=88c8df8a-b5c2-45f0-9e7e-631232ad5125, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=98c694f1-e02c-4f04-8cf3-5d4c0673ec79) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.250 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 98c694f1-e02c-4f04-8cf3-5d4c0673ec79 in datapath ca3fa59c-a888-49ea-a64c-561713a1429e unbound from our chassis
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.252 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ca3fa59c-a888-49ea-a64c-561713a1429e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.255 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5839f9e2-a48d-4270-8194-fc7f9e1d8e28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.255 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e namespace which is not needed anymore
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.262 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Dec 13 04:12:54 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 6.571s CPU time.
Dec 13 04:12:54 compute-0 systemd-machined[206767]: Machine qemu-6-instance-00000006 terminated.
Dec 13 04:12:54 compute-0 neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e[255687]: [NOTICE]   (255691) : haproxy version is 2.8.14-c23fe91
Dec 13 04:12:54 compute-0 neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e[255687]: [NOTICE]   (255691) : path to executable is /usr/sbin/haproxy
Dec 13 04:12:54 compute-0 neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e[255687]: [WARNING]  (255691) : Exiting Master process...
Dec 13 04:12:54 compute-0 neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e[255687]: [ALERT]    (255691) : Current worker (255693) exited with code 143 (Terminated)
Dec 13 04:12:54 compute-0 neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e[255687]: [WARNING]  (255691) : All workers exited. Exiting... (0)
Dec 13 04:12:54 compute-0 systemd[1]: libpod-9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a.scope: Deactivated successfully.
Dec 13 04:12:54 compute-0 podman[255751]: 2025-12-13 04:12:54.388518208 +0000 UTC m=+0.051904850 container died 9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.408 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.413 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a-userdata-shm.mount: Deactivated successfully.
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.425 243708 DEBUG oslo_concurrency.lockutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.425 243708 DEBUG oslo_concurrency.lockutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ebfd3a479109f7000a07a460f60bed1b772add079cfc8f1b4ece4fb8498c4f6-merged.mount: Deactivated successfully.
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.433 243708 INFO nova.virt.libvirt.driver [-] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Instance destroyed successfully.
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.435 243708 DEBUG nova.objects.instance [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lazy-loading 'resources' on Instance uuid ce7de347-5ab9-49da-8b43-01bcb404b401 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:54 compute-0 podman[255751]: 2025-12-13 04:12:54.440746916 +0000 UTC m=+0.104133548 container cleanup 9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.441 243708 DEBUG nova.objects.instance [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'flavor' on Instance uuid b63480b3-4ed8-4311-8742-e954945bfa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:54 compute-0 systemd[1]: libpod-conmon-9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a.scope: Deactivated successfully.
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.452 243708 DEBUG nova.virt.libvirt.vif [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:12:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-242264721',display_name='tempest-VolumesActionsTest-instance-242264721',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-242264721',id=6,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:12:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c28beacf423b4e2392d93f6083d70ed7',ramdisk_id='',reservation_id='r-a90gn1od',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1499519509',owner_user_name='tempest-VolumesActionsTest-1499519509-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:12:48Z,user_data=None,user_id='f960834d4908443d9efd683028b08468',uuid=ce7de347-5ab9-49da-8b43-01bcb404b401,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.453 243708 DEBUG nova.network.os_vif_util [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Converting VIF {"id": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "address": "fa:16:3e:56:bf:7e", "network": {"id": "ca3fa59c-a888-49ea-a64c-561713a1429e", "bridge": "br-int", "label": "tempest-VolumesActionsTest-658268260-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c28beacf423b4e2392d93f6083d70ed7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98c694f1-e0", "ovs_interfaceid": "98c694f1-e02c-4f04-8cf3-5d4c0673ec79", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.454 243708 DEBUG nova.network.os_vif_util [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:bf:7e,bridge_name='br-int',has_traffic_filtering=True,id=98c694f1-e02c-4f04-8cf3-5d4c0673ec79,network=Network(ca3fa59c-a888-49ea-a64c-561713a1429e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c694f1-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.455 243708 DEBUG os_vif [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:bf:7e,bridge_name='br-int',has_traffic_filtering=True,id=98c694f1-e02c-4f04-8cf3-5d4c0673ec79,network=Network(ca3fa59c-a888-49ea-a64c-561713a1429e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c694f1-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.458 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.458 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98c694f1-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.462 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.466 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.469 243708 INFO os_vif [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:bf:7e,bridge_name='br-int',has_traffic_filtering=True,id=98c694f1-e02c-4f04-8cf3-5d4c0673ec79,network=Network(ca3fa59c-a888-49ea-a64c-561713a1429e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98c694f1-e0')
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.482 243708 DEBUG oslo_concurrency.lockutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:54 compute-0 podman[255788]: 2025-12-13 04:12:54.506166731 +0000 UTC m=+0.038900227 container remove 9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.512 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a229910b-9016-4e4e-aeb6-f187f791c4a5]: (4, ('Sat Dec 13 04:12:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e (9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a)\n9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a\nSat Dec 13 04:12:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e (9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a)\n9f27d3eee25beca352a8f31ebd1dd49587a923a3b5ce963d13972388b384895a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.513 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef7faa9-c115-4ff1-9a6d-c09bd964e0cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.514 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapca3fa59c-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.517 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 kernel: tapca3fa59c-a0: left promiscuous mode
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.538 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.542 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3feb04e4-d1bf-4e4c-8ec2-a8957c3af647]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.555 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bd0190d8-07fd-4f51-8eab-636cd539d0c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.556 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b5fe18-bc90-45b3-818c-2bb304b48005]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.579 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[27b22867-4e0f-4a74-a981-2ee92c1943d1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 390683, 'reachable_time': 23090, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255818, 'error': None, 'target': 'ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.581 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ca3fa59c-a888-49ea-a64c-561713a1429e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:12:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:12:54.582 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[e26bb8ec-707d-4511-8a57-ebe5b57a6c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 systemd[1]: run-netns-ovnmeta\x2dca3fa59c\x2da888\x2d49ea\x2da64c\x2d561713a1429e.mount: Deactivated successfully.
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.668 243708 DEBUG oslo_concurrency.lockutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.669 243708 DEBUG oslo_concurrency.lockutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.669 243708 INFO nova.compute.manager [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Attaching volume 04eb82d2-8a67-4e86-8447-d25f6d5d624f to /dev/vdb
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.700 243708 INFO nova.virt.libvirt.driver [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Deleting instance files /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401_del
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.701 243708 INFO nova.virt.libvirt.driver [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Deletion of /var/lib/nova/instances/ce7de347-5ab9-49da-8b43-01bcb404b401_del complete
Dec 13 04:12:54 compute-0 ceph-mon[75071]: pgmap v1073: 305 pgs: 305 active+clean; 359 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.3 MiB/s wr, 180 op/s
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.749 243708 INFO nova.compute.manager [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Took 0.56 seconds to destroy the instance on the hypervisor.
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.750 243708 DEBUG oslo.service.loopingcall [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.750 243708 DEBUG nova.compute.manager [-] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.751 243708 DEBUG nova.network.neutron [-] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.781 243708 DEBUG nova.compute.manager [req-b1391ca9-1f25-4e0a-8d2e-06b6e5916eeb req-cc78846c-80e4-488e-bdb8-44252301466f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received event network-vif-unplugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.781 243708 DEBUG oslo_concurrency.lockutils [req-b1391ca9-1f25-4e0a-8d2e-06b6e5916eeb req-cc78846c-80e4-488e-bdb8-44252301466f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.782 243708 DEBUG oslo_concurrency.lockutils [req-b1391ca9-1f25-4e0a-8d2e-06b6e5916eeb req-cc78846c-80e4-488e-bdb8-44252301466f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.782 243708 DEBUG oslo_concurrency.lockutils [req-b1391ca9-1f25-4e0a-8d2e-06b6e5916eeb req-cc78846c-80e4-488e-bdb8-44252301466f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.782 243708 DEBUG nova.compute.manager [req-b1391ca9-1f25-4e0a-8d2e-06b6e5916eeb req-cc78846c-80e4-488e-bdb8-44252301466f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] No waiting events found dispatching network-vif-unplugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.782 243708 DEBUG nova.compute.manager [req-b1391ca9-1f25-4e0a-8d2e-06b6e5916eeb req-cc78846c-80e4-488e-bdb8-44252301466f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received event network-vif-unplugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.803 243708 DEBUG os_brick.utils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.805 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.827 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.827 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[1ee8a0e9-3169-46a8-a688-eacd5df763ad]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.830 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.841 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.842 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[10af1885-2e8a-4df2-9507-733f490cf2df]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.843 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.854 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.854 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc74f1a-a3e6-45cb-97b5-ee063b4f8ee2]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.855 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[03e446de-c2c2-4a7d-885a-732cfc27bea1]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.856 243708 DEBUG oslo_concurrency.processutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.876 243708 DEBUG oslo_concurrency.processutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "nvme version" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.881 243708 DEBUG os_brick.initiator.connectors.lightos [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.881 243708 DEBUG os_brick.initiator.connectors.lightos [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.881 243708 DEBUG os_brick.initiator.connectors.lightos [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.882 243708 DEBUG os_brick.utils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] <== get_connector_properties: return (77ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:12:54 compute-0 nova_compute[243704]: 2025-12-13 04:12:54.883 243708 DEBUG nova.virt.block_device [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updating existing volume attachment record: be257f69-f6b8-4feb-89da-3f071ba3db55 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:12:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 386 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.4 MiB/s wr, 287 op/s
Dec 13 04:12:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:12:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/470491787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.637 243708 DEBUG nova.network.neutron [-] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.644 243708 DEBUG nova.objects.instance [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'flavor' on Instance uuid b63480b3-4ed8-4311-8742-e954945bfa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.659 243708 INFO nova.compute.manager [-] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Took 0.91 seconds to deallocate network for instance.
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.660 243708 DEBUG nova.virt.libvirt.driver [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Attempting to attach volume 04eb82d2-8a67-4e86-8447-d25f6d5d624f with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.667 243708 DEBUG nova.virt.libvirt.guest [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:12:55 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:12:55 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-04eb82d2-8a67-4e86-8447-d25f6d5d624f">
Dec 13 04:12:55 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:55 compute-0 nova_compute[243704]:   </source>
Dec 13 04:12:55 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:12:55 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:12:55 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:12:55 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:12:55 compute-0 nova_compute[243704]:   <serial>04eb82d2-8a67-4e86-8447-d25f6d5d624f</serial>
Dec 13 04:12:55 compute-0 nova_compute[243704]: </disk>
Dec 13 04:12:55 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.695 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.695 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/470491787' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.799 243708 DEBUG oslo_concurrency.processutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.825 243708 DEBUG nova.virt.libvirt.driver [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.826 243708 DEBUG nova.virt.libvirt.driver [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.826 243708 DEBUG nova.virt.libvirt.driver [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:12:55 compute-0 nova_compute[243704]: 2025-12-13 04:12:55.826 243708 DEBUG nova.virt.libvirt.driver [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] No VIF found with MAC fa:16:3e:ee:62:18, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:12:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.038 243708 DEBUG oslo_concurrency.lockutils [None req-6ea9ec87-978b-4930-9995-cb3760ed7dce 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:12:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4167262245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.375 243708 DEBUG oslo_concurrency.processutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.576s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.381 243708 DEBUG nova.compute.provider_tree [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.392 243708 DEBUG nova.scheduler.client.report [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.412 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.438 243708 INFO nova.scheduler.client.report [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Deleted allocations for instance ce7de347-5ab9-49da-8b43-01bcb404b401
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.490 243708 DEBUG oslo_concurrency.lockutils [None req-92e6ceac-cfee-4a7f-818a-ea9618048b20 f960834d4908443d9efd683028b08468 c28beacf423b4e2392d93f6083d70ed7 - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:56 compute-0 ceph-mon[75071]: pgmap v1074: 305 pgs: 305 active+clean; 386 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 4.4 MiB/s wr, 287 op/s
Dec 13 04:12:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4167262245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.879 243708 DEBUG nova.compute.manager [req-86ecd659-a522-459b-b231-a4eb7fae4e8a req-3203279a-746f-4412-a370-57226fb45556 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received event network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.879 243708 DEBUG oslo_concurrency.lockutils [req-86ecd659-a522-459b-b231-a4eb7fae4e8a req-3203279a-746f-4412-a370-57226fb45556 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.880 243708 DEBUG oslo_concurrency.lockutils [req-86ecd659-a522-459b-b231-a4eb7fae4e8a req-3203279a-746f-4412-a370-57226fb45556 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.880 243708 DEBUG oslo_concurrency.lockutils [req-86ecd659-a522-459b-b231-a4eb7fae4e8a req-3203279a-746f-4412-a370-57226fb45556 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "ce7de347-5ab9-49da-8b43-01bcb404b401-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.880 243708 DEBUG nova.compute.manager [req-86ecd659-a522-459b-b231-a4eb7fae4e8a req-3203279a-746f-4412-a370-57226fb45556 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] No waiting events found dispatching network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.880 243708 WARNING nova.compute.manager [req-86ecd659-a522-459b-b231-a4eb7fae4e8a req-3203279a-746f-4412-a370-57226fb45556 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received unexpected event network-vif-plugged-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 for instance with vm_state deleted and task_state None.
Dec 13 04:12:56 compute-0 nova_compute[243704]: 2025-12-13 04:12:56.881 243708 DEBUG nova.compute.manager [req-86ecd659-a522-459b-b231-a4eb7fae4e8a req-3203279a-746f-4412-a370-57226fb45556 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Received event network-vif-deleted-98c694f1-e02c-4f04-8cf3-5d4c0673ec79 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:12:57 compute-0 nova_compute[243704]: 2025-12-13 04:12:57.079 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1075: 305 pgs: 305 active+clean; 386 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.198 243708 DEBUG oslo_concurrency.lockutils [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.198 243708 DEBUG oslo_concurrency.lockutils [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.211 243708 INFO nova.compute.manager [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Detaching volume 04eb82d2-8a67-4e86-8447-d25f6d5d624f
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.306 243708 INFO nova.virt.block_device [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Attempting to driver detach volume 04eb82d2-8a67-4e86-8447-d25f6d5d624f from mountpoint /dev/vdb
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.320 243708 DEBUG nova.virt.libvirt.driver [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Attempting to detach device vdb from instance b63480b3-4ed8-4311-8742-e954945bfa74 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.321 243708 DEBUG nova.virt.libvirt.guest [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-04eb82d2-8a67-4e86-8447-d25f6d5d624f">
Dec 13 04:12:58 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   </source>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <serial>04eb82d2-8a67-4e86-8447-d25f6d5d624f</serial>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]: </disk>
Dec 13 04:12:58 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.329 243708 INFO nova.virt.libvirt.driver [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully detached device vdb from instance b63480b3-4ed8-4311-8742-e954945bfa74 from the persistent domain config.
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.330 243708 DEBUG nova.virt.libvirt.driver [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance b63480b3-4ed8-4311-8742-e954945bfa74 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.330 243708 DEBUG nova.virt.libvirt.guest [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-04eb82d2-8a67-4e86-8447-d25f6d5d624f">
Dec 13 04:12:58 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   </source>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <serial>04eb82d2-8a67-4e86-8447-d25f6d5d624f</serial>
Dec 13 04:12:58 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:12:58 compute-0 nova_compute[243704]: </disk>
Dec 13 04:12:58 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.445 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599178.444781, b63480b3-4ed8-4311-8742-e954945bfa74 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.446 243708 DEBUG nova.virt.libvirt.driver [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance b63480b3-4ed8-4311-8742-e954945bfa74 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.449 243708 INFO nova.virt.libvirt.driver [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully detached device vdb from instance b63480b3-4ed8-4311-8742-e954945bfa74 from the live domain config.
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.616 243708 DEBUG nova.objects.instance [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'flavor' on Instance uuid b63480b3-4ed8-4311-8742-e954945bfa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:12:58 compute-0 nova_compute[243704]: 2025-12-13 04:12:58.643 243708 DEBUG oslo_concurrency.lockutils [None req-19a2acf4-c6c7-4360-9593-07468a4ffc89 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:12:58 compute-0 ceph-mon[75071]: pgmap v1075: 305 pgs: 305 active+clean; 386 MiB data, 424 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.1 MiB/s wr, 150 op/s
Dec 13 04:12:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:12:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792430661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:12:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:12:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/792430661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:12:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 375 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 200 op/s
Dec 13 04:12:59 compute-0 nova_compute[243704]: 2025-12-13 04:12:59.461 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:12:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/792430661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:12:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/792430661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.677 243708 DEBUG nova.compute.manager [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-changed-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.677 243708 DEBUG nova.compute.manager [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Refreshing instance network info cache due to event network-changed-eb8d6387-c838-4ece-a475-627751effd8e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.678 243708 DEBUG oslo_concurrency.lockutils [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.678 243708 DEBUG oslo_concurrency.lockutils [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.678 243708 DEBUG nova.network.neutron [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Refreshing network info cache for port eb8d6387-c838-4ece-a475-627751effd8e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.747 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.747 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.747 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.748 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.748 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.749 243708 INFO nova.compute.manager [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Terminating instance
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.749 243708 DEBUG nova.compute.manager [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:13:00 compute-0 ceph-mon[75071]: pgmap v1076: 305 pgs: 305 active+clean; 375 MiB data, 418 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 200 op/s
Dec 13 04:13:00 compute-0 kernel: tapeb8d6387-c8 (unregistering): left promiscuous mode
Dec 13 04:13:00 compute-0 NetworkManager[48899]: <info>  [1765599180.8482] device (tapeb8d6387-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:13:00 compute-0 ovn_controller[145204]: 2025-12-13T04:13:00Z|00068|binding|INFO|Releasing lport eb8d6387-c838-4ece-a475-627751effd8e from this chassis (sb_readonly=0)
Dec 13 04:13:00 compute-0 ovn_controller[145204]: 2025-12-13T04:13:00Z|00069|binding|INFO|Setting lport eb8d6387-c838-4ece-a475-627751effd8e down in Southbound
Dec 13 04:13:00 compute-0 ovn_controller[145204]: 2025-12-13T04:13:00Z|00070|binding|INFO|Removing iface tapeb8d6387-c8 ovn-installed in OVS
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.855 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.860 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:62:18 10.100.0.4'], port_security=['fa:16:3e:ee:62:18 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'b63480b3-4ed8-4311-8742-e954945bfa74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67177602579c40c98ca16df63bff5934', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf55ee30-2a30-425f-af3c-50a725a59497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15f16d90-5305-4b52-8186-db63310acee6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=eb8d6387-c838-4ece-a475-627751effd8e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.861 154842 INFO neutron.agent.ovn.metadata.agent [-] Port eb8d6387-c838-4ece-a475-627751effd8e in datapath 6acff72d-3b46-4d95-b32d-8f79ce87caf9 unbound from our chassis
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.862 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 6acff72d-3b46-4d95-b32d-8f79ce87caf9
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.872 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.879 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6079f704-75fa-44da-b3c3-f02b117fc471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec 13 04:13:00 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 14.545s CPU time.
Dec 13 04:13:00 compute-0 systemd-machined[206767]: Machine qemu-4-instance-00000004 terminated.
Dec 13 04:13:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.903 243708 DEBUG oslo_concurrency.lockutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.904 243708 DEBUG oslo_concurrency.lockutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.907 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[560cf91b-e287-4f67-bef3-c984e3b383e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.910 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b8a6f53e-acc4-4b47-b7db-79cd36b7096d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.922 243708 DEBUG nova.objects.instance [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lazy-loading 'flavor' on Instance uuid 229ab4a4-03ac-4686-bd94-9b1def9ec619 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.936 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[3ef83024-2761-4cdd-80f6-516af48fa498]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.951 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[09d0509d-5bc1-4d1b-bdcc-98f2028360e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap6acff72d-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9a:c5:87'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384085, 'reachable_time': 26332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255883, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.954 243708 DEBUG oslo_concurrency.lockutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.966 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[20d5a671-7a63-4829-bb40-c893f61adddf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap6acff72d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384098, 'tstamp': 384098}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255884, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap6acff72d-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 384101, 'tstamp': 384101}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255884, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.968 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6acff72d-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.969 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.976 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.977 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6acff72d-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.977 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.978 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap6acff72d-30, col_values=(('external_ids', {'iface-id': '09de48ad-091f-4941-8093-f0d00d05e24a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:00 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:00.978 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.984 243708 INFO nova.virt.libvirt.driver [-] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Instance destroyed successfully.
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.985 243708 DEBUG nova.objects.instance [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'resources' on Instance uuid b63480b3-4ed8-4311-8742-e954945bfa74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:13:00 compute-0 nova_compute[243704]: 2025-12-13 04:13:00.999 243708 DEBUG nova.virt.libvirt.vif [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:12:21Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-499670233',display_name='tempest-TestStampPattern-server-499670233',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-499670233',id=4,image_ref='89bd4ded-0d1d-43c0-8889-725d21f3df99',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM0MU4m92JGsf1a8yXapvFc8NsDZ1Q8MKW+4lJiaibX0u2gJl9+eGG5v/UGq6eQNTuIoD3j4ZepFXbz7/CNW041TuPFq0GKtdS7b3wHX/PQosItTXgdUwOaQctvP0U/Kg==',key_name='tempest-TestStampPattern-343017512',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:12:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67177602579c40c98ca16df63bff5934',ramdisk_id='',reservation_id='r-jsx7cl2k',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_boot_roles='reader,member',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='b050eb13-af7e-4bd1-88e6-fcb2d100ffc8',image_min_disk='1',image_min_ram='0',image_owner_id='67177602579c40c98ca16df63bff5934',image_owner_project_name='tempest-TestStampPattern-102097859',image_owner_user_name='tempest-TestStampPattern-102097859-project-member',image_user_id='3a8b8802dc27428e82af3cfee6d31fa0',owner_project_name='tempest-TestStampPattern-102097859',owner_user_name='tempest-TestStampPattern-102097859-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:12:31Z,user_data=None,user_id='3a8b8802dc27428e82af3cfee6d31fa0',uuid=b63480b3-4ed8-4311-8742-e954945bfa74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.000 243708 DEBUG nova.network.os_vif_util [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converting VIF {"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.209", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.001 243708 DEBUG nova.network.os_vif_util [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:18,bridge_name='br-int',has_traffic_filtering=True,id=eb8d6387-c838-4ece-a475-627751effd8e,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8d6387-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.001 243708 DEBUG os_vif [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:18,bridge_name='br-int',has_traffic_filtering=True,id=eb8d6387-c838-4ece-a475-627751effd8e,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8d6387-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.003 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.003 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeb8d6387-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.006 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.009 243708 INFO os_vif [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ee:62:18,bridge_name='br-int',has_traffic_filtering=True,id=eb8d6387-c838-4ece-a475-627751effd8e,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapeb8d6387-c8')
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.207 243708 DEBUG oslo_concurrency.lockutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.208 243708 DEBUG oslo_concurrency.lockutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.209 243708 INFO nova.compute.manager [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Attaching volume 9c4b11b7-a884-40a0-8483-6150e13b121b to /dev/vdb
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.233 243708 INFO nova.virt.libvirt.driver [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Deleting instance files /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74_del
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.234 243708 INFO nova.virt.libvirt.driver [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Deletion of /var/lib/nova/instances/b63480b3-4ed8-4311-8742-e954945bfa74_del complete
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.298 243708 INFO nova.compute.manager [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Took 0.55 seconds to destroy the instance on the hypervisor.
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.299 243708 DEBUG oslo.service.loopingcall [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.304 243708 DEBUG nova.compute.manager [-] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.304 243708 DEBUG nova.network.neutron [-] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.369 243708 DEBUG os_brick.utils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.370 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.381 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.381 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[f82028ca-f2b7-47e5-9102-4191eed6e6a7]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.383 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.392 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.392 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[8052ee6c-71b7-4a02-8bf8-030cd3d64788]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.393 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.406 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.406 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[e197ec55-262b-4a09-9b4a-6a34e4f67c97]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.408 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[4227edd2-24a3-4edb-a0e5-ca9f336cbfe3]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.408 243708 DEBUG oslo_concurrency.processutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 347 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.3 MiB/s wr, 195 op/s
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.432 243708 DEBUG oslo_concurrency.processutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.434 243708 DEBUG os_brick.initiator.connectors.lightos [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.435 243708 DEBUG os_brick.initiator.connectors.lightos [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.435 243708 DEBUG os_brick.initiator.connectors.lightos [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.435 243708 DEBUG os_brick.utils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:13:01 compute-0 nova_compute[243704]: 2025-12-13 04:13:01.436 243708 DEBUG nova.virt.block_device [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Updating existing volume attachment record: 3aed898c-a2eb-4d04-8af3-6a8966c573c6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:13:01 compute-0 ceph-mon[75071]: pgmap v1077: 305 pgs: 305 active+clean; 347 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.3 MiB/s wr, 195 op/s
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.081 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.101 243708 DEBUG nova.network.neutron [-] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.122 243708 INFO nova.compute.manager [-] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Took 0.82 seconds to deallocate network for instance.
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.179 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.180 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.217 243708 DEBUG nova.compute.manager [req-ce9b7ba7-7427-46bd-80b3-b6ba9a06b9e1 req-ce45abaf-b786-4206-abe3-a1187d785189 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-vif-deleted-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.226 243708 DEBUG nova.network.neutron [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updated VIF entry in instance network info cache for port eb8d6387-c838-4ece-a475-627751effd8e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.227 243708 DEBUG nova.network.neutron [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Updating instance_info_cache with network_info: [{"id": "eb8d6387-c838-4ece-a475-627751effd8e", "address": "fa:16:3e:ee:62:18", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapeb8d6387-c8", "ovs_interfaceid": "eb8d6387-c838-4ece-a475-627751effd8e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:13:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:13:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/618426527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.246 243708 DEBUG oslo_concurrency.lockutils [req-adfba340-40ac-4d3c-8fbc-a24c1baff04b req-fe8e021a-3da9-4d2c-a3bf-6e040d86169f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-b63480b3-4ed8-4311-8742-e954945bfa74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.301 243708 DEBUG oslo_concurrency.processutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567850440' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1567850440' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941966300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/618426527' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1567850440' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1567850440' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.866 243708 DEBUG oslo_concurrency.processutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.872 243708 DEBUG nova.compute.provider_tree [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:13:02 compute-0 nova_compute[243704]: 2025-12-13 04:13:02.891 243708 DEBUG nova.scheduler.client.report [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:13:02 compute-0 podman[255943]: 2025-12-13 04:13:02.911499633 +0000 UTC m=+0.055214279 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.128 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.949s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.178 243708 INFO nova.scheduler.client.report [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Deleted allocations for instance b63480b3-4ed8-4311-8742-e954945bfa74
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.216 243708 DEBUG nova.compute.manager [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-vif-unplugged-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.217 243708 DEBUG oslo_concurrency.lockutils [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.217 243708 DEBUG oslo_concurrency.lockutils [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.217 243708 DEBUG oslo_concurrency.lockutils [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.217 243708 DEBUG nova.compute.manager [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] No waiting events found dispatching network-vif-unplugged-eb8d6387-c838-4ece-a475-627751effd8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.218 243708 WARNING nova.compute.manager [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received unexpected event network-vif-unplugged-eb8d6387-c838-4ece-a475-627751effd8e for instance with vm_state deleted and task_state None.
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.218 243708 DEBUG nova.compute.manager [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received event network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.218 243708 DEBUG oslo_concurrency.lockutils [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.218 243708 DEBUG oslo_concurrency.lockutils [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.218 243708 DEBUG oslo_concurrency.lockutils [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.218 243708 DEBUG nova.compute.manager [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] No waiting events found dispatching network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.218 243708 WARNING nova.compute.manager [req-fe1e320e-6d11-483d-b32d-d045cacf17c7 req-6e9fe39d-4b2b-4c2b-b2aa-a4ea91c0f327 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Received unexpected event network-vif-plugged-eb8d6387-c838-4ece-a475-627751effd8e for instance with vm_state deleted and task_state None.
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.276 243708 DEBUG oslo_concurrency.lockutils [None req-e75f4317-a501-4fa6-98de-25e664cd623c 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b63480b3-4ed8-4311-8742-e954945bfa74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.301 243708 DEBUG os_brick.encryptors [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Using volume encryption metadata '{'encryption_key_id': '66375dfc-09a4-4290-827b-0caaf415b0b5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9c4b11b7-a884-40a0-8483-6150e13b121b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9c4b11b7-a884-40a0-8483-6150e13b121b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '229ab4a4-03ac-4686-bd94-9b1def9ec619', 'attached_at': '', 'detached_at': '', 'volume_id': '9c4b11b7-a884-40a0-8483-6150e13b121b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.307 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.321 243708 DEBUG barbicanclient.v1.secrets [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/66375dfc-09a4-4290-827b-0caaf415b0b5 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.322 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.354 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.355 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.376 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.377 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.402 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.402 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 347 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 170 op/s
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.421 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.422 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.449 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.449 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.474 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.475 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.499 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.500 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.524 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.524 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.548 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.548 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.574 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.574 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.610 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.611 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.630 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.631 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.658 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.659 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.689 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.690 243708 INFO barbicanclient.base [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Calculated Secrets uuid ref: secrets/66375dfc-09a4-4290-827b-0caaf415b0b5
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.712 243708 DEBUG barbicanclient.client [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.713 243708 DEBUG nova.virt.libvirt.host [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:13:03 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:13:03 compute-0 nova_compute[243704]:     <volume>9c4b11b7-a884-40a0-8483-6150e13b121b</volume>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:13:03 compute-0 nova_compute[243704]: </secret>
Dec 13 04:13:03 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.725 243708 DEBUG nova.objects.instance [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lazy-loading 'flavor' on Instance uuid 229ab4a4-03ac-4686-bd94-9b1def9ec619 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.744 243708 DEBUG nova.virt.libvirt.driver [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Attempting to attach volume 9c4b11b7-a884-40a0-8483-6150e13b121b with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:13:03 compute-0 nova_compute[243704]: 2025-12-13 04:13:03.748 243708 DEBUG nova.virt.libvirt.guest [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:13:03 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-9c4b11b7-a884-40a0-8483-6150e13b121b">
Dec 13 04:13:03 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   </source>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:13:03 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   <serial>9c4b11b7-a884-40a0-8483-6150e13b121b</serial>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:13:03 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="a0412370-f960-4331-824a-348b7ef9d85c"/>
Dec 13 04:13:03 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:13:03 compute-0 nova_compute[243704]: </disk>
Dec 13 04:13:03 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:13:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/941966300' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:03 compute-0 ceph-mon[75071]: pgmap v1078: 305 pgs: 305 active+clean; 347 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 170 op/s
Dec 13 04:13:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/579796145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/579796145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/579796145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/579796145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/134464604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/134464604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 330 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 218 op/s
Dec 13 04:13:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:06 compute-0 nova_compute[243704]: 2025-12-13 04:13:06.005 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Dec 13 04:13:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Dec 13 04:13:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Dec 13 04:13:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/134464604' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/134464604' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:06 compute-0 ceph-mon[75071]: pgmap v1079: 305 pgs: 305 active+clean; 330 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 218 op/s
Dec 13 04:13:06 compute-0 nova_compute[243704]: 2025-12-13 04:13:06.297 243708 DEBUG nova.virt.libvirt.driver [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:13:06 compute-0 nova_compute[243704]: 2025-12-13 04:13:06.298 243708 DEBUG nova.virt.libvirt.driver [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:13:06 compute-0 nova_compute[243704]: 2025-12-13 04:13:06.298 243708 DEBUG nova.virt.libvirt.driver [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:13:06 compute-0 nova_compute[243704]: 2025-12-13 04:13:06.298 243708 DEBUG nova.virt.libvirt.driver [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] No VIF found with MAC fa:16:3e:e3:9a:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:13:06 compute-0 nova_compute[243704]: 2025-12-13 04:13:06.491 243708 DEBUG oslo_concurrency.lockutils [None req-52e4030d-fdb6-4632-b003-46fcab226ec3 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.283s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.084 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Dec 13 04:13:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Dec 13 04:13:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Dec 13 04:13:07 compute-0 ceph-mon[75071]: osdmap e208: 3 total, 3 up, 3 in
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.294 243708 DEBUG oslo_concurrency.lockutils [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.294 243708 DEBUG oslo_concurrency.lockutils [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.316 243708 INFO nova.compute.manager [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Detaching volume 9c4b11b7-a884-40a0-8483-6150e13b121b
Dec 13 04:13:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 330 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 24 KiB/s wr, 91 op/s
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.467 243708 INFO nova.virt.block_device [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Attempting to driver detach volume 9c4b11b7-a884-40a0-8483-6150e13b121b from mountpoint /dev/vdb
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.622 243708 DEBUG os_brick.encryptors [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Using volume encryption metadata '{'encryption_key_id': '66375dfc-09a4-4290-827b-0caaf415b0b5', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-9c4b11b7-a884-40a0-8483-6150e13b121b', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '9c4b11b7-a884-40a0-8483-6150e13b121b', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '229ab4a4-03ac-4686-bd94-9b1def9ec619', 'attached_at': '', 'detached_at': '', 'volume_id': '9c4b11b7-a884-40a0-8483-6150e13b121b', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.630 243708 DEBUG nova.virt.libvirt.driver [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Attempting to detach device vdb from instance 229ab4a4-03ac-4686-bd94-9b1def9ec619 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.631 243708 DEBUG nova.virt.libvirt.guest [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-9c4b11b7-a884-40a0-8483-6150e13b121b">
Dec 13 04:13:07 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   </source>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <serial>9c4b11b7-a884-40a0-8483-6150e13b121b</serial>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:13:07 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="a0412370-f960-4331-824a-348b7ef9d85c"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:13:07 compute-0 nova_compute[243704]: </disk>
Dec 13 04:13:07 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.638 243708 INFO nova.virt.libvirt.driver [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Successfully detached device vdb from instance 229ab4a4-03ac-4686-bd94-9b1def9ec619 from the persistent domain config.
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.639 243708 DEBUG nova.virt.libvirt.driver [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 229ab4a4-03ac-4686-bd94-9b1def9ec619 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.639 243708 DEBUG nova.virt.libvirt.guest [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-9c4b11b7-a884-40a0-8483-6150e13b121b">
Dec 13 04:13:07 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   </source>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <serial>9c4b11b7-a884-40a0-8483-6150e13b121b</serial>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:13:07 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="a0412370-f960-4331-824a-348b7ef9d85c"/>
Dec 13 04:13:07 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:13:07 compute-0 nova_compute[243704]: </disk>
Dec 13 04:13:07 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.688 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599187.6879482, 229ab4a4-03ac-4686-bd94-9b1def9ec619 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.690 243708 DEBUG nova.virt.libvirt.driver [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 229ab4a4-03ac-4686-bd94-9b1def9ec619 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.693 243708 INFO nova.virt.libvirt.driver [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Successfully detached device vdb from instance 229ab4a4-03ac-4686-bd94-9b1def9ec619 from the live domain config.
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.882 243708 DEBUG nova.objects.instance [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lazy-loading 'flavor' on Instance uuid 229ab4a4-03ac-4686-bd94-9b1def9ec619 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:13:07 compute-0 nova_compute[243704]: 2025-12-13 04:13:07.918 243708 DEBUG oslo_concurrency.lockutils [None req-5dab5a73-9052-4120-9886-a3205d651855 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:08 compute-0 ceph-mon[75071]: osdmap e209: 3 total, 3 up, 3 in
Dec 13 04:13:08 compute-0 ceph-mon[75071]: pgmap v1082: 305 pgs: 305 active+clean; 330 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 24 KiB/s wr, 91 op/s
Dec 13 04:13:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627422200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2627422200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.896 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.896 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.897 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.897 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.898 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.900 243708 INFO nova.compute.manager [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Terminating instance
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.902 243708 DEBUG nova.compute.manager [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.904 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.904 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.905 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.905 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.906 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.907 243708 INFO nova.compute.manager [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Terminating instance
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.909 243708 DEBUG nova.compute.manager [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:13:08 compute-0 kernel: tapaa0f542c-09 (unregistering): left promiscuous mode
Dec 13 04:13:08 compute-0 kernel: tap6c5cfc53-a7 (unregistering): left promiscuous mode
Dec 13 04:13:08 compute-0 NetworkManager[48899]: <info>  [1765599188.9637] device (tapaa0f542c-09): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:13:08 compute-0 NetworkManager[48899]: <info>  [1765599188.9667] device (tap6c5cfc53-a7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:13:08 compute-0 ovn_controller[145204]: 2025-12-13T04:13:08Z|00071|binding|INFO|Releasing lport aa0f542c-094e-48e7-9320-5384b5d4939f from this chassis (sb_readonly=0)
Dec 13 04:13:08 compute-0 ovn_controller[145204]: 2025-12-13T04:13:08Z|00072|binding|INFO|Setting lport aa0f542c-094e-48e7-9320-5384b5d4939f down in Southbound
Dec 13 04:13:08 compute-0 ovn_controller[145204]: 2025-12-13T04:13:08Z|00073|binding|INFO|Releasing lport 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b from this chassis (sb_readonly=0)
Dec 13 04:13:08 compute-0 ovn_controller[145204]: 2025-12-13T04:13:08Z|00074|binding|INFO|Setting lport 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b down in Southbound
Dec 13 04:13:08 compute-0 nova_compute[243704]: 2025-12-13 04:13:08.977 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:08 compute-0 ovn_controller[145204]: 2025-12-13T04:13:08Z|00075|binding|INFO|Removing iface tapaa0f542c-09 ovn-installed in OVS
Dec 13 04:13:08 compute-0 ovn_controller[145204]: 2025-12-13T04:13:08Z|00076|binding|INFO|Removing iface tap6c5cfc53-a7 ovn-installed in OVS
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.017 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec 13 04:13:09 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 17.099s CPU time.
Dec 13 04:13:09 compute-0 systemd-machined[206767]: Machine qemu-5-instance-00000005 terminated.
Dec 13 04:13:09 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec 13 04:13:09 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 16.606s CPU time.
Dec 13 04:13:09 compute-0 systemd-machined[206767]: Machine qemu-3-instance-00000003 terminated.
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.113 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e3:9a:66 10.100.0.10'], port_security=['fa:16:3e:e3:9a:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '229ab4a4-03ac-4686-bd94-9b1def9ec619', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c6243b53-29fa-418e-8550-3cbf311cc62c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3d5c68f771584a2e96300880848d9aac', 'neutron:revision_number': '4', 'neutron:security_group_ids': '351f9ce1-80fd-4c08-ab4a-72ebe95ba7f8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bdb2c1ea-01c6-49bd-824b-5d10b545a135, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=aa0f542c-094e-48e7-9320-5384b5d4939f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.115 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:9f:6c 10.100.0.7'], port_security=['fa:16:3e:ab:9f:6c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'b050eb13-af7e-4bd1-88e6-fcb2d100ffc8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '67177602579c40c98ca16df63bff5934', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cf55ee30-2a30-425f-af3c-50a725a59497', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=15f16d90-5305-4b52-8186-db63310acee6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.116 154842 INFO neutron.agent.ovn.metadata.agent [-] Port aa0f542c-094e-48e7-9320-5384b5d4939f in datapath c6243b53-29fa-418e-8550-3cbf311cc62c unbound from our chassis
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.119 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c6243b53-29fa-418e-8550-3cbf311cc62c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.120 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[44408016-5800-4396-9091-579cceddc93b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.120 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c namespace which is not needed anymore
Dec 13 04:13:09 compute-0 systemd-udevd[255992]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:13:09 compute-0 NetworkManager[48899]: <info>  [1765599189.1307] manager: (tap6c5cfc53-a7): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.135 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.151 243708 DEBUG nova.compute.manager [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-changed-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.151 243708 DEBUG nova.compute.manager [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Refreshing instance network info cache due to event network-changed-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.152 243708 DEBUG oslo_concurrency.lockutils [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.153 243708 DEBUG oslo_concurrency.lockutils [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.153 243708 DEBUG nova.network.neutron [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Refreshing network info cache for port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.155 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.158 243708 INFO nova.virt.libvirt.driver [-] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Instance destroyed successfully.
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.159 243708 DEBUG nova.objects.instance [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lazy-loading 'resources' on Instance uuid b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.164 243708 INFO nova.virt.libvirt.driver [-] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Instance destroyed successfully.
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.164 243708 DEBUG nova.objects.instance [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lazy-loading 'resources' on Instance uuid 229ab4a4-03ac-4686-bd94-9b1def9ec619 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.205 243708 DEBUG nova.virt.libvirt.vif [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:11:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestStampPattern-server-260123093',display_name='tempest-TestStampPattern-server-260123093',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-teststamppattern-server-260123093',id=3,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFM0MU4m92JGsf1a8yXapvFc8NsDZ1Q8MKW+4lJiaibX0u2gJl9+eGG5v/UGq6eQNTuIoD3j4ZepFXbz7/CNW041TuPFq0GKtdS7b3wHX/PQosItTXgdUwOaQctvP0U/Kg==',key_name='tempest-TestStampPattern-343017512',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:11:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='67177602579c40c98ca16df63bff5934',ramdisk_id='',reservation_id='r-0i8c4fqk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestStampPattern-102097859',owner_user_name='tempest-TestStampPattern-102097859-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:12:17Z,user_data=None,user_id='3a8b8802dc27428e82af3cfee6d31fa0',uuid=b050eb13-af7e-4bd1-88e6-fcb2d100ffc8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.205 243708 DEBUG nova.network.os_vif_util [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converting VIF {"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.206 243708 DEBUG nova.network.os_vif_util [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9f:6c,bridge_name='br-int',has_traffic_filtering=True,id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c5cfc53-a7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.206 243708 DEBUG os_vif [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9f:6c,bridge_name='br-int',has_traffic_filtering=True,id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c5cfc53-a7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.210 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.210 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6c5cfc53-a7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.211 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.213 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.216 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.219 243708 INFO os_vif [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:9f:6c,bridge_name='br-int',has_traffic_filtering=True,id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b,network=Network(6acff72d-3b46-4d95-b32d-8f79ce87caf9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6c5cfc53-a7')
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.240 243708 DEBUG nova.virt.libvirt.vif [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:12:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-213628157',display_name='tempest-TestEncryptedCinderVolumes-server-213628157',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-213628157',id=5,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFe1neUOmXvlN7mo/X/VaeV26IwttYxfT8v2SqaPs42uBESvxuJP4y7d51l+slJFM6+MMjuxdFlG0Cx1rHp3JP6TcqS5LxR7Tv6ybWdAEHIhn9jig3p1gj4C5ttTqa1FZA==',key_name='tempest-keypair-1182106795',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:12:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3d5c68f771584a2e96300880848d9aac',ramdisk_id='',reservation_id='r-w2n6xgdf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1229723829',owner_user_name='tempest-TestEncryptedCinderVolumes-1229723829-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:12:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='a4e44b54d008406396250df8425c1b48',uuid=229ab4a4-03ac-4686-bd94-9b1def9ec619,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.241 243708 DEBUG nova.network.os_vif_util [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Converting VIF {"id": "aa0f542c-094e-48e7-9320-5384b5d4939f", "address": "fa:16:3e:e3:9a:66", "network": {"id": "c6243b53-29fa-418e-8550-3cbf311cc62c", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1551791105-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3d5c68f771584a2e96300880848d9aac", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapaa0f542c-09", "ovs_interfaceid": "aa0f542c-094e-48e7-9320-5384b5d4939f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.241 243708 DEBUG nova.network.os_vif_util [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e3:9a:66,bridge_name='br-int',has_traffic_filtering=True,id=aa0f542c-094e-48e7-9320-5384b5d4939f,network=Network(c6243b53-29fa-418e-8550-3cbf311cc62c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0f542c-09') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.242 243708 DEBUG os_vif [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:9a:66,bridge_name='br-int',has_traffic_filtering=True,id=aa0f542c-094e-48e7-9320-5384b5d4939f,network=Network(c6243b53-29fa-418e-8550-3cbf311cc62c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0f542c-09') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.243 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.243 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaa0f542c-09, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.244 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.245 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.248 243708 INFO os_vif [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e3:9a:66,bridge_name='br-int',has_traffic_filtering=True,id=aa0f542c-094e-48e7-9320-5384b5d4939f,network=Network(c6243b53-29fa-418e-8550-3cbf311cc62c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapaa0f542c-09')
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c[255201]: [NOTICE]   (255206) : haproxy version is 2.8.14-c23fe91
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c[255201]: [NOTICE]   (255206) : path to executable is /usr/sbin/haproxy
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c[255201]: [WARNING]  (255206) : Exiting Master process...
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c[255201]: [ALERT]    (255206) : Current worker (255208) exited with code 143 (Terminated)
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c[255201]: [WARNING]  (255206) : All workers exited. Exiting... (0)
Dec 13 04:13:09 compute-0 systemd[1]: libpod-0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a.scope: Deactivated successfully.
Dec 13 04:13:09 compute-0 podman[256034]: 2025-12-13 04:13:09.26269812 +0000 UTC m=+0.049447583 container died 0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a-userdata-shm.mount: Deactivated successfully.
Dec 13 04:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-ec5f0f0001d7c4282644cd04b9c476ae210667f51a4cc71f27fea2f42f06f16e-merged.mount: Deactivated successfully.
Dec 13 04:13:09 compute-0 podman[256034]: 2025-12-13 04:13:09.300863376 +0000 UTC m=+0.087612839 container cleanup 0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:13:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2627422200' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2627422200' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:09 compute-0 systemd[1]: libpod-conmon-0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a.scope: Deactivated successfully.
Dec 13 04:13:09 compute-0 podman[256099]: 2025-12-13 04:13:09.367210187 +0000 UTC m=+0.045758383 container remove 0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.374 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[47c41036-5828-4c8d-b8e9-a3f1a4073339]: (4, ('Sat Dec 13 04:13:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c (0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a)\n0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a\nSat Dec 13 04:13:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c (0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a)\n0b3b3da1bdd58c69852a334a1ed520b5a33b8cabc70cb9223331a4b8825b906a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.376 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e85210-dc1e-457d-9913-b6c0cfa65050]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.377 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6243b53-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.379 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 kernel: tapc6243b53-20: left promiscuous mode
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.394 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.400 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9b5ad1-2442-445c-866c-8d27892cbec6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.413 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f34ebb6d-2294-4f7b-b6d9-579c84ddb3bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.415 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f34f10-c3bd-438b-a9fc-21b80f1bc7a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 213 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 31 KiB/s wr, 218 op/s
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.427 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599174.4257011, ce7de347-5ab9-49da-8b43-01bcb404b401 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.428 243708 INFO nova.compute.manager [-] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] VM Stopped (Lifecycle Event)
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.437 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8c747b60-b7a7-485f-955a-82cdb2ec41d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 389232, 'reachable_time': 17099, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256116, 'error': None, 'target': 'ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 systemd[1]: run-netns-ovnmeta\x2dc6243b53\x2d29fa\x2d418e\x2d8550\x2d3cbf311cc62c.mount: Deactivated successfully.
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.440 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c6243b53-29fa-418e-8550-3cbf311cc62c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.440 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[fba7f27f-62c7-4f16-86dc-7a221c0aebc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.442 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b in datapath 6acff72d-3b46-4d95-b32d-8f79ce87caf9 unbound from our chassis
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.443 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 6acff72d-3b46-4d95-b32d-8f79ce87caf9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.444 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f28e4a0f-0710-42e8-8d4a-56e7bd5c88ad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.444 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9 namespace which is not needed anymore
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.452 243708 DEBUG nova.compute.manager [None req-d440a049-7118-4188-bf6a-5485a20b8b4d - - - - - -] [instance: ce7de347-5ab9-49da-8b43-01bcb404b401] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.479 243708 INFO nova.virt.libvirt.driver [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Deleting instance files /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_del
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.480 243708 INFO nova.virt.libvirt.driver [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Deletion of /var/lib/nova/instances/b050eb13-af7e-4bd1-88e6-fcb2d100ffc8_del complete
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.504 243708 INFO nova.virt.libvirt.driver [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Deleting instance files /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619_del
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.505 243708 INFO nova.virt.libvirt.driver [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Deletion of /var/lib/nova/instances/229ab4a4-03ac-4686-bd94-9b1def9ec619_del complete
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.555 243708 INFO nova.compute.manager [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Took 0.65 seconds to destroy the instance on the hypervisor.
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.556 243708 DEBUG oslo.service.loopingcall [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.556 243708 DEBUG nova.compute.manager [-] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.556 243708 DEBUG nova.network.neutron [-] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9[253415]: [NOTICE]   (253419) : haproxy version is 2.8.14-c23fe91
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9[253415]: [NOTICE]   (253419) : path to executable is /usr/sbin/haproxy
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9[253415]: [WARNING]  (253419) : Exiting Master process...
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9[253415]: [ALERT]    (253419) : Current worker (253421) exited with code 143 (Terminated)
Dec 13 04:13:09 compute-0 neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9[253415]: [WARNING]  (253419) : All workers exited. Exiting... (0)
Dec 13 04:13:09 compute-0 systemd[1]: libpod-13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf.scope: Deactivated successfully.
Dec 13 04:13:09 compute-0 conmon[253415]: conmon 13c947708454ce9b656e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf.scope/container/memory.events
Dec 13 04:13:09 compute-0 podman[256134]: 2025-12-13 04:13:09.572799227 +0000 UTC m=+0.045889746 container died 13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.572 243708 INFO nova.compute.manager [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Took 0.67 seconds to destroy the instance on the hypervisor.
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.573 243708 DEBUG oslo.service.loopingcall [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.574 243708 DEBUG nova.compute.manager [-] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.574 243708 DEBUG nova.network.neutron [-] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf-userdata-shm.mount: Deactivated successfully.
Dec 13 04:13:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe75d083890ef5444a8a94978e800418f4758366f843964c01f2bd08958cd50c-merged.mount: Deactivated successfully.
Dec 13 04:13:09 compute-0 podman[256134]: 2025-12-13 04:13:09.604193309 +0000 UTC m=+0.077283828 container cleanup 13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:13:09 compute-0 systemd[1]: libpod-conmon-13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf.scope: Deactivated successfully.
Dec 13 04:13:09 compute-0 podman[256163]: 2025-12-13 04:13:09.666625094 +0000 UTC m=+0.041024905 container remove 13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.672 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d0a7523f-c7ca-4864-a1c6-67dca9c798c8]: (4, ('Sat Dec 13 04:13:09 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9 (13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf)\n13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf\nSat Dec 13 04:13:09 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9 (13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf)\n13c947708454ce9b656e1c172ac9b485b474cc8d0597236ff771d51a002d50bf\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.674 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d072292b-19b9-4cac-96f5-defbcb9ce212]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.675 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6acff72d-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.677 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 kernel: tap6acff72d-30: left promiscuous mode
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.693 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.696 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[25fe9460-8e6b-4c90-8294-40be99f77650]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.711 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3ccc0d44-983d-4937-bc53-5a8a4b87ee99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.713 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec161c2-2e52-4c52-b6f6-a80147bf48c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.730 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[03145bca-442e-45d2-873b-95a72cfacd1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 384077, 'reachable_time': 42481, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 256178, 'error': None, 'target': 'ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.732 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-6acff72d-3b46-4d95-b32d-8f79ce87caf9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:13:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:09.732 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[b51ec5ef-6d3d-4ec5-ac45-310eb2693340]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:13:09 compute-0 nova_compute[243704]: 2025-12-13 04:13:09.899 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.064 243708 DEBUG nova.compute.manager [req-21e54ed7-e647-440c-9be1-31fe1fc400b8 req-93727d62-8751-45ed-b92d-0614c7b90c19 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-vif-unplugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.064 243708 DEBUG oslo_concurrency.lockutils [req-21e54ed7-e647-440c-9be1-31fe1fc400b8 req-93727d62-8751-45ed-b92d-0614c7b90c19 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.065 243708 DEBUG oslo_concurrency.lockutils [req-21e54ed7-e647-440c-9be1-31fe1fc400b8 req-93727d62-8751-45ed-b92d-0614c7b90c19 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.065 243708 DEBUG oslo_concurrency.lockutils [req-21e54ed7-e647-440c-9be1-31fe1fc400b8 req-93727d62-8751-45ed-b92d-0614c7b90c19 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.065 243708 DEBUG nova.compute.manager [req-21e54ed7-e647-440c-9be1-31fe1fc400b8 req-93727d62-8751-45ed-b92d-0614c7b90c19 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] No waiting events found dispatching network-vif-unplugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.065 243708 DEBUG nova.compute.manager [req-21e54ed7-e647-440c-9be1-31fe1fc400b8 req-93727d62-8751-45ed-b92d-0614c7b90c19 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-vif-unplugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:13:10 compute-0 systemd[1]: run-netns-ovnmeta\x2d6acff72d\x2d3b46\x2d4d95\x2db32d\x2d8f79ce87caf9.mount: Deactivated successfully.
Dec 13 04:13:10 compute-0 ceph-mon[75071]: pgmap v1083: 305 pgs: 305 active+clean; 213 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 188 KiB/s rd, 31 KiB/s wr, 218 op/s
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.524 243708 DEBUG nova.network.neutron [-] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.540 243708 INFO nova.compute.manager [-] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Took 0.98 seconds to deallocate network for instance.
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.591 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.591 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.604 243708 DEBUG nova.network.neutron [-] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.625 243708 INFO nova.compute.manager [-] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Took 1.05 seconds to deallocate network for instance.
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.670 243708 DEBUG oslo_concurrency.processutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.695 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.901 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.914 243708 DEBUG nova.network.neutron [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updated VIF entry in instance network info cache for port 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:13:10 compute-0 nova_compute[243704]: 2025-12-13 04:13:10.915 243708 DEBUG nova.network.neutron [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating instance_info_cache with network_info: [{"id": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "address": "fa:16:3e:ab:9f:6c", "network": {"id": "6acff72d-3b46-4d95-b32d-8f79ce87caf9", "bridge": "br-int", "label": "tempest-TestStampPattern-557069271-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "67177602579c40c98ca16df63bff5934", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6c5cfc53-a7", "ovs_interfaceid": "6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.099 243708 DEBUG oslo_concurrency.lockutils [req-8c44cb7f-ee07-4cad-a796-4a7d12e89ec2 req-2556ecdb-b514-49e0-a897-8fa37cdd0d1e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:13:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.368 243708 DEBUG nova.compute.manager [req-188ec89c-ee17-4b76-a221-029ad04c3ff5 req-b6c8cf5b-5393-43e4-adf2-bdd9d9896685 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-vif-deleted-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.369 243708 INFO nova.compute.manager [req-188ec89c-ee17-4b76-a221-029ad04c3ff5 req-b6c8cf5b-5393-43e4-adf2-bdd9d9896685 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Neutron deleted interface 6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b; detaching it from the instance and deleting it from the info cache
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.369 243708 DEBUG nova.network.neutron [req-188ec89c-ee17-4b76-a221-029ad04c3ff5 req-b6c8cf5b-5393-43e4-adf2-bdd9d9896685 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:13:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Dec 13 04:13:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1267706875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.388 243708 DEBUG nova.compute.manager [req-188ec89c-ee17-4b76-a221-029ad04c3ff5 req-b6c8cf5b-5393-43e4-adf2-bdd9d9896685 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Detach interface failed, port_id=6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b, reason: Instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.406 243708 DEBUG oslo_concurrency.processutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.736s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.413 243708 DEBUG nova.compute.provider_tree [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:13:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 213 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 11 KiB/s wr, 193 op/s
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.425 243708 DEBUG nova.scheduler.client.report [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.452 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.860s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.455 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.505 243708 INFO nova.scheduler.client.report [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Deleted allocations for instance b050eb13-af7e-4bd1-88e6-fcb2d100ffc8
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.518 243708 DEBUG oslo_concurrency.processutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:11 compute-0 nova_compute[243704]: 2025-12-13 04:13:11.575 243708 DEBUG oslo_concurrency.lockutils [None req-b34193b3-83e5-42f8-a6f0-7301d9d4822e 3a8b8802dc27428e82af3cfee6d31fa0 67177602579c40c98ca16df63bff5934 - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:11 compute-0 podman[256221]: 2025-12-13 04:13:11.940979079 +0000 UTC m=+0.082487240 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3)
Dec 13 04:13:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320791193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.051 243708 DEBUG oslo_concurrency.processutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.056 243708 DEBUG nova.compute.provider_tree [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.070 243708 DEBUG nova.scheduler.client.report [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.084 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.097 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.642s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.099 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.100 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.100 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.101 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.171 243708 INFO nova.scheduler.client.report [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Deleted allocations for instance 229ab4a4-03ac-4686-bd94-9b1def9ec619
Dec 13 04:13:12 compute-0 ceph-mon[75071]: osdmap e210: 3 total, 3 up, 3 in
Dec 13 04:13:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1267706875' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:12 compute-0 ceph-mon[75071]: pgmap v1085: 305 pgs: 305 active+clean; 213 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 182 KiB/s rd, 11 KiB/s wr, 193 op/s
Dec 13 04:13:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/320791193' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.191 243708 DEBUG nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received event network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.192 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.192 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.192 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "b050eb13-af7e-4bd1-88e6-fcb2d100ffc8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.192 243708 DEBUG nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] No waiting events found dispatching network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.193 243708 WARNING nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Received unexpected event network-vif-plugged-6c5cfc53-a7ee-4a69-9a8b-dad3ffe9650b for instance with vm_state deleted and task_state None.
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.193 243708 DEBUG nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received event network-vif-unplugged-aa0f542c-094e-48e7-9320-5384b5d4939f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.193 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.193 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.194 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.194 243708 DEBUG nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] No waiting events found dispatching network-vif-unplugged-aa0f542c-094e-48e7-9320-5384b5d4939f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.194 243708 WARNING nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received unexpected event network-vif-unplugged-aa0f542c-094e-48e7-9320-5384b5d4939f for instance with vm_state deleted and task_state None.
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.194 243708 DEBUG nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received event network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.195 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.195 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.195 243708 DEBUG oslo_concurrency.lockutils [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.195 243708 DEBUG nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] No waiting events found dispatching network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.195 243708 WARNING nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received unexpected event network-vif-plugged-aa0f542c-094e-48e7-9320-5384b5d4939f for instance with vm_state deleted and task_state None.
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.196 243708 DEBUG nova.compute.manager [req-62c17b9e-7d58-49e3-a2e8-db8050ad196f req-6a4baf6d-ce9d-47ee-b9ad-424fb7ea39a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Received event network-vif-deleted-aa0f542c-094e-48e7-9320-5384b5d4939f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.265 243708 DEBUG oslo_concurrency.lockutils [None req-9a8575c7-f54e-40e2-855f-4be4e04a8341 a4e44b54d008406396250df8425c1b48 3d5c68f771584a2e96300880848d9aac - - default default] Lock "229ab4a4-03ac-4686-bd94-9b1def9ec619" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/894071265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.646 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965663493' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3965663493' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.840 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.842 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4555MB free_disk=59.916440000757575GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.842 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.843 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.891 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.892 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:13:12 compute-0 nova_compute[243704]: 2025-12-13 04:13:12.910 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:13:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904432830' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3904432830' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/894071265' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3965663493' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3965663493' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3904432830' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3904432830' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 213 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 9.1 KiB/s wr, 162 op/s
Dec 13 04:13:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:13:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/91686733' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:13 compute-0 nova_compute[243704]: 2025-12-13 04:13:13.469 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:13:13 compute-0 nova_compute[243704]: 2025-12-13 04:13:13.475 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:13:13 compute-0 nova_compute[243704]: 2025-12-13 04:13:13.490 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:13:13 compute-0 nova_compute[243704]: 2025-12-13 04:13:13.509 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:13:13 compute-0 nova_compute[243704]: 2025-12-13 04:13:13.510 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1268536815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1268536815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:14 compute-0 ceph-mon[75071]: pgmap v1086: 305 pgs: 305 active+clean; 213 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 9.1 KiB/s wr, 162 op/s
Dec 13 04:13:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/91686733' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:13:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1268536815' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1268536815' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.311 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.510 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.511 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.511 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.519 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.520 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.520 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.520 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:14 compute-0 nova_compute[243704]: 2025-12-13 04:13:14.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:13:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Cumulative writes: 4951 writes, 22K keys, 4951 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 4951 writes, 4951 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1637 writes, 7640 keys, 1637 commit groups, 1.0 writes per commit group, ingest: 10.18 MB, 0.02 MB/s
                                           Interval WAL: 1637 writes, 1637 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     96.0      0.26              0.08        12    0.022       0      0       0.0       0.0
                                             L6      1/0    7.51 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.3     98.4     81.1      1.01              0.23        11    0.092     49K   5818       0.0       0.0
                                            Sum      1/0    7.51 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.3     78.3     84.2      1.28              0.31        23    0.055     49K   5818       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5     72.6     72.7      0.78              0.16        12    0.065     29K   3631       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     98.4     81.1      1.01              0.23        11    0.092     49K   5818       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    121.8      0.21              0.08        11    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1800.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.024, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.10 GB write, 0.06 MB/s write, 0.10 GB read, 0.06 MB/s read, 1.3 seconds
                                           Interval compaction: 0.06 GB write, 0.09 MB/s write, 0.06 GB read, 0.09 MB/s read, 0.8 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556f7ce578d0#2 capacity: 304.00 MB usage: 9.18 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000119 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(563,8.77 MB,2.88467%) FilterBlock(24,143.30 KB,0.0460324%) IndexBlock(24,277.20 KB,0.0890481%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 04:13:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 14 KiB/s wr, 267 op/s
Dec 13 04:13:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:15 compute-0 nova_compute[243704]: 2025-12-13 04:13:15.984 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599180.982645, b63480b3-4ed8-4311-8742-e954945bfa74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:13:15 compute-0 nova_compute[243704]: 2025-12-13 04:13:15.985 243708 INFO nova.compute.manager [-] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] VM Stopped (Lifecycle Event)
Dec 13 04:13:16 compute-0 nova_compute[243704]: 2025-12-13 04:13:16.001 243708 DEBUG nova.compute.manager [None req-7de51fe0-a720-41ba-a27e-6d5cab5ce55f - - - - - -] [instance: b63480b3-4ed8-4311-8742-e954945bfa74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:13:16 compute-0 nova_compute[243704]: 2025-12-13 04:13:16.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:13:16 compute-0 nova_compute[243704]: 2025-12-13 04:13:16.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:13:16 compute-0 ceph-mon[75071]: pgmap v1087: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 14 KiB/s wr, 267 op/s
Dec 13 04:13:17 compute-0 nova_compute[243704]: 2025-12-13 04:13:17.087 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 11 KiB/s wr, 217 op/s
Dec 13 04:13:18 compute-0 ceph-mon[75071]: pgmap v1088: 305 pgs: 305 active+clean; 88 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 11 KiB/s wr, 217 op/s
Dec 13 04:13:18 compute-0 nova_compute[243704]: 2025-12-13 04:13:18.116 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:18 compute-0 nova_compute[243704]: 2025-12-13 04:13:18.460 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:18 compute-0 sudo[256289]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:13:18 compute-0 sudo[256289]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:18 compute-0 sudo[256289]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:18 compute-0 sudo[256314]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:13:18 compute-0 sudo[256314]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:19 compute-0 nova_compute[243704]: 2025-12-13 04:13:19.313 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:19 compute-0 sudo[256314]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:13:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:13:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:13:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:13:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 101 op/s
Dec 13 04:13:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:13:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:13:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:13:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:13:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:13:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:13:19 compute-0 sudo[256371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:13:19 compute-0 sudo[256371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:19 compute-0 sudo[256371]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:19 compute-0 sudo[256396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:13:19 compute-0 sudo[256396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:19 compute-0 podman[256433]: 2025-12-13 04:13:19.834582583 +0000 UTC m=+0.048169048 container create e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:13:19 compute-0 systemd[1]: Started libpod-conmon-e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7.scope.
Dec 13 04:13:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:13:19 compute-0 podman[256433]: 2025-12-13 04:13:19.805911225 +0000 UTC m=+0.019497710 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:20 compute-0 podman[256433]: 2025-12-13 04:13:20.010405335 +0000 UTC m=+0.223991840 container init e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle)
Dec 13 04:13:20 compute-0 podman[256433]: 2025-12-13 04:13:20.020623463 +0000 UTC m=+0.234209938 container start e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:13:20 compute-0 podman[256433]: 2025-12-13 04:13:20.025004402 +0000 UTC m=+0.238590877 container attach e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_banzai, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:13:20 compute-0 happy_banzai[256449]: 167 167
Dec 13 04:13:20 compute-0 systemd[1]: libpod-e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7.scope: Deactivated successfully.
Dec 13 04:13:20 compute-0 podman[256433]: 2025-12-13 04:13:20.034217702 +0000 UTC m=+0.247804187 container died e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-14d58dbea88bad1d6598bb084da353b058d0af93d07d149394d68382e051ccdf-merged.mount: Deactivated successfully.
Dec 13 04:13:20 compute-0 podman[256433]: 2025-12-13 04:13:20.100802589 +0000 UTC m=+0.314389054 container remove e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_banzai, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:13:20 compute-0 systemd[1]: libpod-conmon-e2964a5a219ab9736c0181391707651d7c846a246be07a213106c0962a0034c7.scope: Deactivated successfully.
Dec 13 04:13:20 compute-0 podman[256475]: 2025-12-13 04:13:20.273306332 +0000 UTC m=+0.051377095 container create b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:20 compute-0 systemd[1]: Started libpod-conmon-b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c.scope.
Dec 13 04:13:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed7c41b1899b6e28d989d959c902e939add33ee67149dd67e1ba5a150979223/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed7c41b1899b6e28d989d959c902e939add33ee67149dd67e1ba5a150979223/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed7c41b1899b6e28d989d959c902e939add33ee67149dd67e1ba5a150979223/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed7c41b1899b6e28d989d959c902e939add33ee67149dd67e1ba5a150979223/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ed7c41b1899b6e28d989d959c902e939add33ee67149dd67e1ba5a150979223/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:20 compute-0 podman[256475]: 2025-12-13 04:13:20.246954527 +0000 UTC m=+0.025025350 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:20 compute-0 podman[256475]: 2025-12-13 04:13:20.346856038 +0000 UTC m=+0.124926851 container init b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:13:20 compute-0 podman[256475]: 2025-12-13 04:13:20.360404286 +0000 UTC m=+0.138475069 container start b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:13:20 compute-0 podman[256475]: 2025-12-13 04:13:20.365890595 +0000 UTC m=+0.143961438 container attach b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:13:20 compute-0 ceph-mon[75071]: pgmap v1089: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 101 op/s
Dec 13 04:13:20 compute-0 stoic_shockley[256491]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:13:20 compute-0 stoic_shockley[256491]: --> All data devices are unavailable
Dec 13 04:13:20 compute-0 systemd[1]: libpod-b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c.scope: Deactivated successfully.
Dec 13 04:13:20 compute-0 podman[256475]: 2025-12-13 04:13:20.887916893 +0000 UTC m=+0.665987666 container died b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5ed7c41b1899b6e28d989d959c902e939add33ee67149dd67e1ba5a150979223-merged.mount: Deactivated successfully.
Dec 13 04:13:20 compute-0 podman[256475]: 2025-12-13 04:13:20.927766395 +0000 UTC m=+0.705837168 container remove b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_shockley, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:13:20 compute-0 systemd[1]: libpod-conmon-b3c410cc18bd82b2ef4b4d7fa0cb61c2cb7034a7ce2d52477e2ceee9ec56d59c.scope: Deactivated successfully.
Dec 13 04:13:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:20 compute-0 sudo[256396]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:21 compute-0 sudo[256522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:13:21 compute-0 sudo[256522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:21 compute-0 sudo[256522]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:21 compute-0 sudo[256547]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:13:21 compute-0 sudo[256547]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:21 compute-0 podman[256584]: 2025-12-13 04:13:21.41944575 +0000 UTC m=+0.042082273 container create 33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 100 op/s
Dec 13 04:13:21 compute-0 systemd[1]: Started libpod-conmon-33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72.scope.
Dec 13 04:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:13:21 compute-0 podman[256584]: 2025-12-13 04:13:21.480720743 +0000 UTC m=+0.103357516 container init 33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:13:21 compute-0 podman[256584]: 2025-12-13 04:13:21.486310075 +0000 UTC m=+0.108946568 container start 33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:21 compute-0 confident_swartz[256601]: 167 167
Dec 13 04:13:21 compute-0 systemd[1]: libpod-33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72.scope: Deactivated successfully.
Dec 13 04:13:21 compute-0 podman[256584]: 2025-12-13 04:13:21.490869399 +0000 UTC m=+0.113506172 container attach 33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:13:21 compute-0 podman[256584]: 2025-12-13 04:13:21.491332441 +0000 UTC m=+0.113968934 container died 33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:13:21 compute-0 podman[256584]: 2025-12-13 04:13:21.399533429 +0000 UTC m=+0.022169952 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-326be7f758c58a903b4250fdd302ca123c1f9af1420b24b9873f55956a6d1ce0-merged.mount: Deactivated successfully.
Dec 13 04:13:21 compute-0 podman[256584]: 2025-12-13 04:13:21.526477695 +0000 UTC m=+0.149114178 container remove 33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:13:21 compute-0 systemd[1]: libpod-conmon-33635e8d662a7777d7e2eee59e666124b9a7fddb8df717df669165875821bf72.scope: Deactivated successfully.
Dec 13 04:13:21 compute-0 podman[256626]: 2025-12-13 04:13:21.674888243 +0000 UTC m=+0.042190536 container create 028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_feistel, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:21 compute-0 systemd[1]: Started libpod-conmon-028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512.scope.
Dec 13 04:13:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbf3e35f131ac5096d6e08f7aa5a9d85df029707c19f9617802ee4aa4b2fb2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbf3e35f131ac5096d6e08f7aa5a9d85df029707c19f9617802ee4aa4b2fb2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbf3e35f131ac5096d6e08f7aa5a9d85df029707c19f9617802ee4aa4b2fb2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eccbf3e35f131ac5096d6e08f7aa5a9d85df029707c19f9617802ee4aa4b2fb2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:21 compute-0 podman[256626]: 2025-12-13 04:13:21.748145271 +0000 UTC m=+0.115447564 container init 028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_feistel, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:13:21 compute-0 podman[256626]: 2025-12-13 04:13:21.654102579 +0000 UTC m=+0.021404902 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:21 compute-0 podman[256626]: 2025-12-13 04:13:21.758720718 +0000 UTC m=+0.126023011 container start 028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_feistel, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:13:21 compute-0 podman[256626]: 2025-12-13 04:13:21.767582458 +0000 UTC m=+0.134884751 container attach 028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_feistel, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:13:22 compute-0 nervous_feistel[256643]: {
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:     "0": [
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:         {
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "devices": [
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "/dev/loop3"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             ],
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_name": "ceph_lv0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_size": "21470642176",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "name": "ceph_lv0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "tags": {
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cluster_name": "ceph",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.crush_device_class": "",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.encrypted": "0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.objectstore": "bluestore",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osd_id": "0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.type": "block",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.vdo": "0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.with_tpm": "0"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             },
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "type": "block",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "vg_name": "ceph_vg0"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:         }
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:     ],
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:     "1": [
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:         {
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "devices": [
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "/dev/loop4"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             ],
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_name": "ceph_lv1",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_size": "21470642176",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "name": "ceph_lv1",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "tags": {
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cluster_name": "ceph",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.crush_device_class": "",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.encrypted": "0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.objectstore": "bluestore",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osd_id": "1",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.type": "block",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.vdo": "0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.with_tpm": "0"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             },
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "type": "block",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "vg_name": "ceph_vg1"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:         }
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:     ],
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:     "2": [
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:         {
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "devices": [
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "/dev/loop5"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             ],
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_name": "ceph_lv2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_size": "21470642176",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "name": "ceph_lv2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "tags": {
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.cluster_name": "ceph",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.crush_device_class": "",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.encrypted": "0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.objectstore": "bluestore",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osd_id": "2",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.type": "block",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.vdo": "0",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:                 "ceph.with_tpm": "0"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             },
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "type": "block",
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:             "vg_name": "ceph_vg2"
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:         }
Dec 13 04:13:22 compute-0 nervous_feistel[256643]:     ]
Dec 13 04:13:22 compute-0 nervous_feistel[256643]: }
Dec 13 04:13:22 compute-0 systemd[1]: libpod-028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512.scope: Deactivated successfully.
Dec 13 04:13:22 compute-0 podman[256626]: 2025-12-13 04:13:22.047924348 +0000 UTC m=+0.415226661 container died 028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-eccbf3e35f131ac5096d6e08f7aa5a9d85df029707c19f9617802ee4aa4b2fb2-merged.mount: Deactivated successfully.
Dec 13 04:13:22 compute-0 nova_compute[243704]: 2025-12-13 04:13:22.089 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:22 compute-0 podman[256626]: 2025-12-13 04:13:22.101465882 +0000 UTC m=+0.468768165 container remove 028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_feistel, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:13:22 compute-0 systemd[1]: libpod-conmon-028426213a57fe3b62be0b504cdb1945eee96b3a5bac358718aeb07315642512.scope: Deactivated successfully.
Dec 13 04:13:22 compute-0 sudo[256547]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:22 compute-0 podman[256652]: 2025-12-13 04:13:22.169803357 +0000 UTC m=+0.092147383 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:13:22 compute-0 sudo[256691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:13:22 compute-0 sudo[256691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:22 compute-0 sudo[256691]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:22 compute-0 sudo[256717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:13:22 compute-0 sudo[256717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:22 compute-0 ceph-mon[75071]: pgmap v1090: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 4.5 KiB/s wr, 100 op/s
Dec 13 04:13:22 compute-0 podman[256752]: 2025-12-13 04:13:22.535249696 +0000 UTC m=+0.034985890 container create f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ishizaka, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:13:22 compute-0 systemd[1]: Started libpod-conmon-f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2.scope.
Dec 13 04:13:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:13:22 compute-0 podman[256752]: 2025-12-13 04:13:22.614255661 +0000 UTC m=+0.113991905 container init f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:22 compute-0 podman[256752]: 2025-12-13 04:13:22.520551387 +0000 UTC m=+0.020287611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:22 compute-0 podman[256752]: 2025-12-13 04:13:22.622567477 +0000 UTC m=+0.122303671 container start f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:13:22 compute-0 podman[256752]: 2025-12-13 04:13:22.625470146 +0000 UTC m=+0.125206360 container attach f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:13:22 compute-0 mystifying_ishizaka[256769]: 167 167
Dec 13 04:13:22 compute-0 systemd[1]: libpod-f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2.scope: Deactivated successfully.
Dec 13 04:13:22 compute-0 podman[256752]: 2025-12-13 04:13:22.627982823 +0000 UTC m=+0.127719017 container died f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:13:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff908dc8a977f2aa3d0b63c901c70fbdbccfa1414146761eb718325969f83fd0-merged.mount: Deactivated successfully.
Dec 13 04:13:22 compute-0 podman[256752]: 2025-12-13 04:13:22.661784541 +0000 UTC m=+0.161520775 container remove f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_ishizaka, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:13:22 compute-0 systemd[1]: libpod-conmon-f51ff1da4bd6fa31d64e7665d5a2355be3707b29ffa7a40556015f7cea322ba2.scope: Deactivated successfully.
Dec 13 04:13:22 compute-0 podman[256793]: 2025-12-13 04:13:22.840520052 +0000 UTC m=+0.043498641 container create c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:13:22 compute-0 systemd[1]: Started libpod-conmon-c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7.scope.
Dec 13 04:13:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc3c25ed7bb8e18317a4d218d41a545d6b4786bd0b85ade57ad18d88ab9113f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc3c25ed7bb8e18317a4d218d41a545d6b4786bd0b85ade57ad18d88ab9113f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:22 compute-0 podman[256793]: 2025-12-13 04:13:22.820771026 +0000 UTC m=+0.023749645 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc3c25ed7bb8e18317a4d218d41a545d6b4786bd0b85ade57ad18d88ab9113f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc3c25ed7bb8e18317a4d218d41a545d6b4786bd0b85ade57ad18d88ab9113f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:13:22 compute-0 podman[256793]: 2025-12-13 04:13:22.929867917 +0000 UTC m=+0.132846536 container init c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:13:22 compute-0 podman[256793]: 2025-12-13 04:13:22.937942397 +0000 UTC m=+0.140920986 container start c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:13:22 compute-0 podman[256793]: 2025-12-13 04:13:22.941364889 +0000 UTC m=+0.144343498 container attach c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:13:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.7 KiB/s wr, 84 op/s
Dec 13 04:13:23 compute-0 lvm[256887]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:13:23 compute-0 lvm[256888]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:13:23 compute-0 lvm[256888]: VG ceph_vg1 finished
Dec 13 04:13:23 compute-0 lvm[256887]: VG ceph_vg0 finished
Dec 13 04:13:23 compute-0 lvm[256890]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:13:23 compute-0 lvm[256890]: VG ceph_vg2 finished
Dec 13 04:13:23 compute-0 lvm[256891]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:13:23 compute-0 lvm[256891]: VG ceph_vg0 finished
Dec 13 04:13:23 compute-0 zen_merkle[256809]: {}
Dec 13 04:13:23 compute-0 systemd[1]: libpod-c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7.scope: Deactivated successfully.
Dec 13 04:13:23 compute-0 systemd[1]: libpod-c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7.scope: Consumed 1.301s CPU time.
Dec 13 04:13:23 compute-0 podman[256793]: 2025-12-13 04:13:23.702324495 +0000 UTC m=+0.905303084 container died c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:13:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dc3c25ed7bb8e18317a4d218d41a545d6b4786bd0b85ade57ad18d88ab9113f-merged.mount: Deactivated successfully.
Dec 13 04:13:23 compute-0 podman[256793]: 2025-12-13 04:13:23.739147424 +0000 UTC m=+0.942126013 container remove c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_merkle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Dec 13 04:13:23 compute-0 systemd[1]: libpod-conmon-c74cb1337ceb64ed7c631bbda6de0ea84d665e52d943c7f134d88e1bdf037be7.scope: Deactivated successfully.
Dec 13 04:13:23 compute-0 sudo[256717]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:13:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:13:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:13:23 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:13:23 compute-0 sudo[256906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:13:23 compute-0 sudo[256906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:13:23 compute-0 sudo[256906]: pam_unix(sudo:session): session closed for user root
Dec 13 04:13:24 compute-0 nova_compute[243704]: 2025-12-13 04:13:24.155 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599189.151604, b050eb13-af7e-4bd1-88e6-fcb2d100ffc8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:13:24 compute-0 nova_compute[243704]: 2025-12-13 04:13:24.156 243708 INFO nova.compute.manager [-] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] VM Stopped (Lifecycle Event)
Dec 13 04:13:24 compute-0 nova_compute[243704]: 2025-12-13 04:13:24.162 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599189.1621776, 229ab4a4-03ac-4686-bd94-9b1def9ec619 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:13:24 compute-0 nova_compute[243704]: 2025-12-13 04:13:24.162 243708 INFO nova.compute.manager [-] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] VM Stopped (Lifecycle Event)
Dec 13 04:13:24 compute-0 nova_compute[243704]: 2025-12-13 04:13:24.174 243708 DEBUG nova.compute.manager [None req-029d8ec7-468c-467f-a563-abb6e78f4eb0 - - - - - -] [instance: b050eb13-af7e-4bd1-88e6-fcb2d100ffc8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:13:24 compute-0 nova_compute[243704]: 2025-12-13 04:13:24.178 243708 DEBUG nova.compute.manager [None req-dbc3d249-1e2f-44c9-a244-810355cc7fd0 - - - - - -] [instance: 229ab4a4-03ac-4686-bd94-9b1def9ec619] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:13:24 compute-0 nova_compute[243704]: 2025-12-13 04:13:24.343 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:24 compute-0 ceph-mon[75071]: pgmap v1091: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.7 KiB/s wr, 84 op/s
Dec 13 04:13:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:13:24 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:13:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.7 KiB/s wr, 84 op/s
Dec 13 04:13:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:26 compute-0 ceph-mon[75071]: pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 3.7 KiB/s wr, 84 op/s
Dec 13 04:13:27 compute-0 nova_compute[243704]: 2025-12-13 04:13:27.092 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:28 compute-0 ceph-mon[75071]: pgmap v1093: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:29 compute-0 nova_compute[243704]: 2025-12-13 04:13:29.346 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:30 compute-0 ceph-mon[75071]: pgmap v1094: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:31 compute-0 ceph-mon[75071]: pgmap v1095: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:32 compute-0 nova_compute[243704]: 2025-12-13 04:13:32.096 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:33 compute-0 podman[256931]: 2025-12-13 04:13:33.936480205 +0000 UTC m=+0.075511721 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:13:34 compute-0 nova_compute[243704]: 2025-12-13 04:13:34.350 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:34 compute-0 ceph-mon[75071]: pgmap v1096: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:13:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:35.088 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:13:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:35.089 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:13:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:35.089 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:13:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 170 B/s wr, 0 op/s
Dec 13 04:13:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:36 compute-0 ceph-mon[75071]: pgmap v1097: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 170 B/s wr, 0 op/s
Dec 13 04:13:37 compute-0 nova_compute[243704]: 2025-12-13 04:13:37.098 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 170 B/s wr, 0 op/s
Dec 13 04:13:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2128317119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2128317119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:37 compute-0 ceph-mon[75071]: pgmap v1098: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 170 B/s wr, 0 op/s
Dec 13 04:13:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2128317119' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2128317119' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:39 compute-0 nova_compute[243704]: 2025-12-13 04:13:39.355 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Dec 13 04:13:40 compute-0 ceph-mon[75071]: pgmap v1099: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Dec 13 04:13:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:13:40
Dec 13 04:13:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:13:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:13:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', '.mgr', 'images', 'vms', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control']
Dec 13 04:13:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:13:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Dec 13 04:13:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465317458' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/465317458' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:42 compute-0 nova_compute[243704]: 2025-12-13 04:13:42.099 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:13:42 compute-0 ceph-mon[75071]: pgmap v1100: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Dec 13 04:13:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/465317458' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/465317458' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:13:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:13:42 compute-0 podman[256952]: 2025-12-13 04:13:42.959498763 +0000 UTC m=+0.099214264 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 04:13:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Dec 13 04:13:44 compute-0 nova_compute[243704]: 2025-12-13 04:13:44.359 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:44 compute-0 ceph-mon[75071]: pgmap v1101: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 682 B/s wr, 14 op/s
Dec 13 04:13:44 compute-0 nova_compute[243704]: 2025-12-13 04:13:44.713 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:44.717 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:13:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:44.721 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:13:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 04:13:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1114950281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1114950281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:46 compute-0 ceph-mon[75071]: pgmap v1102: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Dec 13 04:13:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1114950281' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1114950281' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:47 compute-0 nova_compute[243704]: 2025-12-13 04:13:47.101 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:13:48 compute-0 ceph-mon[75071]: pgmap v1103: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:13:49 compute-0 nova_compute[243704]: 2025-12-13 04:13:49.363 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:13:50 compute-0 ceph-mon[75071]: pgmap v1104: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:13:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Dec 13 04:13:52 compute-0 nova_compute[243704]: 2025-12-13 04:13:52.103 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.5281952474626035e-06 of space, bias 1.0, pg target 0.00045845857423878106 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035189455154934624 of space, bias 1.0, pg target 0.10556836546480387 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 1.471156121355998e-07 of space, bias 1.0, pg target 4.413468364067994e-05 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659425095964737 of space, bias 1.0, pg target 0.19978275287894212 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.5279468462601796e-06 of space, bias 4.0, pg target 0.0018335362155122155 quantized to 16 (current 16)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:13:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:13:52 compute-0 ceph-mon[75071]: pgmap v1105: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Dec 13 04:13:52 compute-0 podman[256973]: 2025-12-13 04:13:52.989517116 +0000 UTC m=+0.130433412 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:13:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Dec 13 04:13:54 compute-0 nova_compute[243704]: 2025-12-13 04:13:54.366 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:54 compute-0 ceph-mon[75071]: pgmap v1106: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 511 B/s wr, 14 op/s
Dec 13 04:13:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:13:54.724 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:13:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 23 op/s
Dec 13 04:13:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:13:56 compute-0 ceph-mon[75071]: pgmap v1107: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.2 KiB/s wr, 23 op/s
Dec 13 04:13:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/754955004' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/754955004' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:57 compute-0 nova_compute[243704]: 2025-12-13 04:13:57.107 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 767 B/s wr, 8 op/s
Dec 13 04:13:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/754955004' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/754955004' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:58 compute-0 ceph-mon[75071]: pgmap v1108: 305 pgs: 305 active+clean; 88 MiB data, 254 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 767 B/s wr, 8 op/s
Dec 13 04:13:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:13:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/386048906' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:59 compute-0 nova_compute[243704]: 2025-12-13 04:13:59.370 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:13:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 04:13:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:13:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3626710124' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:13:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3626710124' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Dec 13 04:13:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/386048906' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:13:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3626710124' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:13:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3626710124' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:13:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Dec 13 04:13:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Dec 13 04:14:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Dec 13 04:14:00 compute-0 ceph-mon[75071]: pgmap v1109: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 62 op/s
Dec 13 04:14:00 compute-0 ceph-mon[75071]: osdmap e211: 3 total, 3 up, 3 in
Dec 13 04:14:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Dec 13 04:14:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Dec 13 04:14:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 93 op/s
Dec 13 04:14:01 compute-0 ceph-mon[75071]: osdmap e212: 3 total, 3 up, 3 in
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.466 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:02 compute-0 ceph-mon[75071]: pgmap v1112: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 93 op/s
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.843 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.844 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.862 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.934 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.935 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.942 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:14:02 compute-0 nova_compute[243704]: 2025-12-13 04:14:02.942 243708 INFO nova.compute.claims [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:14:03 compute-0 nova_compute[243704]: 2025-12-13 04:14:03.027 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.7 MiB/s wr, 79 op/s
Dec 13 04:14:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955681744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:03 compute-0 nova_compute[243704]: 2025-12-13 04:14:03.555 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:03 compute-0 nova_compute[243704]: 2025-12-13 04:14:03.562 243708 DEBUG nova.compute.provider_tree [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:03 compute-0 ovn_controller[145204]: 2025-12-13T04:14:03Z|00077|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 13 04:14:03 compute-0 nova_compute[243704]: 2025-12-13 04:14:03.584 243708 DEBUG nova.scheduler.client.report [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1955681744' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.359 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.424s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.359 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.372 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.434 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.435 243708 DEBUG nova.network.neutron [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.457 243708 INFO nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.482 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.531 243708 INFO nova.virt.block_device [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Booting with volume dbb792b2-7bd3-4b6f-9dc1-297684fbaa92 at /dev/vda
Dec 13 04:14:04 compute-0 ceph-mon[75071]: pgmap v1113: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 2.7 MiB/s wr, 79 op/s
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.634 243708 DEBUG nova.policy [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b8c4a2342e4420d8140b403edbcba5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27927978f9684df1a72cecb32505e93b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.705 243708 DEBUG os_brick.utils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.706 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.717 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.717 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[5752f81a-1b77-4166-816e-dad27c80362c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.719 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.727 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.728 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee7e689-bd8d-4ccc-8e1c-5a59928d08ac]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.730 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.739 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.739 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[6081b9c1-ce8c-4cf8-aa8b-2b3cd6b14643]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.741 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[13a3babd-c820-417b-8531-5e24a8208175]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.741 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.774 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.777 243708 DEBUG os_brick.initiator.connectors.lightos [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.778 243708 DEBUG os_brick.initiator.connectors.lightos [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.778 243708 DEBUG os_brick.initiator.connectors.lightos [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.778 243708 DEBUG os_brick.utils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] <== get_connector_properties: return (72ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:14:04 compute-0 nova_compute[243704]: 2025-12-13 04:14:04.779 243708 DEBUG nova.virt.block_device [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Updating existing volume attachment record: 9d7c2b94-397a-47d3-8c22-067dbe3d5cd7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:14:04 compute-0 podman[257029]: 2025-12-13 04:14:04.900847955 +0000 UTC m=+0.044016505 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:14:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1218077736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:05 compute-0 nova_compute[243704]: 2025-12-13 04:14:05.348 243708 DEBUG nova.network.neutron [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Successfully created port: 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:14:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.4 MiB/s wr, 145 op/s
Dec 13 04:14:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1218077736' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2643770775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.162 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.163 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.164 243708 INFO nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Creating image(s)
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.164 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.165 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Ensure instance console log exists: /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.165 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.165 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.166 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.334 243708 DEBUG nova.network.neutron [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Successfully updated port: 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.361 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "refresh_cache-29c70ba3-89c9-4615-a1f0-22a3ad7145f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.361 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquired lock "refresh_cache-29c70ba3-89c9-4615-a1f0-22a3ad7145f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.362 243708 DEBUG nova.network.neutron [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.530 243708 DEBUG nova.compute.manager [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Received event network-changed-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.531 243708 DEBUG nova.compute.manager [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Refreshing instance network info cache due to event network-changed-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.531 243708 DEBUG oslo_concurrency.lockutils [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-29c70ba3-89c9-4615-a1f0-22a3ad7145f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Dec 13 04:14:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Dec 13 04:14:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Dec 13 04:14:06 compute-0 ceph-mon[75071]: pgmap v1114: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.4 MiB/s wr, 145 op/s
Dec 13 04:14:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2643770775' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:06 compute-0 nova_compute[243704]: 2025-12-13 04:14:06.895 243708 DEBUG nova.network.neutron [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:14:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 66 op/s
Dec 13 04:14:07 compute-0 nova_compute[243704]: 2025-12-13 04:14:07.469 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:07 compute-0 ceph-mon[75071]: osdmap e213: 3 total, 3 up, 3 in
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.419 243708 DEBUG nova.network.neutron [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Updating instance_info_cache with network_info: [{"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.450 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Releasing lock "refresh_cache-29c70ba3-89c9-4615-a1f0-22a3ad7145f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.450 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Instance network_info: |[{"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.451 243708 DEBUG oslo_concurrency.lockutils [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-29c70ba3-89c9-4615-a1f0-22a3ad7145f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.451 243708 DEBUG nova.network.neutron [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Refreshing network info cache for port 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.453 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Start _get_guest_xml network_info=[{"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-dbb792b2-7bd3-4b6f-9dc1-297684fbaa92', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'dbb792b2-7bd3-4b6f-9dc1-297684fbaa92', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '29c70ba3-89c9-4615-a1f0-22a3ad7145f8', 'attached_at': '', 'detached_at': '', 'volume_id': 'dbb792b2-7bd3-4b6f-9dc1-297684fbaa92', 'serial': 'dbb792b2-7bd3-4b6f-9dc1-297684fbaa92'}, 'disk_bus': 'virtio', 'attachment_id': '9d7c2b94-397a-47d3-8c22-067dbe3d5cd7', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.457 243708 WARNING nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.463 243708 DEBUG nova.virt.libvirt.host [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.464 243708 DEBUG nova.virt.libvirt.host [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.471 243708 DEBUG nova.virt.libvirt.host [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.472 243708 DEBUG nova.virt.libvirt.host [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.472 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.472 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.473 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.473 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.473 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.473 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.474 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.474 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.474 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.474 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.474 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.475 243708 DEBUG nova.virt.hardware [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.497 243708 DEBUG nova.storage.rbd_utils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 29c70ba3-89c9-4615-a1f0-22a3ad7145f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:08 compute-0 nova_compute[243704]: 2025-12-13 04:14:08.502 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:08 compute-0 ceph-mon[75071]: pgmap v1116: 305 pgs: 305 active+clean; 180 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 2.7 MiB/s wr, 66 op/s
Dec 13 04:14:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3538001048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.041 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.190 243708 DEBUG os_brick.encryptors [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Using volume encryption metadata '{'encryption_key_id': 'c97b11fc-55e6-429a-887f-5b18b3d1b580', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-dbb792b2-7bd3-4b6f-9dc1-297684fbaa92', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'dbb792b2-7bd3-4b6f-9dc1-297684fbaa92', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '29c70ba3-89c9-4615-a1f0-22a3ad7145f8', 'attached_at': '', 'detached_at': '', 'volume_id': 'dbb792b2-7bd3-4b6f-9dc1-297684fbaa92', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.192 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.218 243708 DEBUG barbicanclient.v1.secrets [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.219 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.283 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.285 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.356 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.357 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.375 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.378 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.378 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.419 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.420 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.443 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.443 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 106 op/s
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.466 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.466 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.490 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.491 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.528 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.529 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.552 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.553 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.587 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.588 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.622 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.623 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.648 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.649 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3538001048' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.680 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.681 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.711 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.712 243708 INFO barbicanclient.base [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Calculated Secrets uuid ref: secrets/c97b11fc-55e6-429a-887f-5b18b3d1b580
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.731 243708 DEBUG barbicanclient.client [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.731 243708 DEBUG nova.virt.libvirt.host [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <volume>dbb792b2-7bd3-4b6f-9dc1-297684fbaa92</volume>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:14:09 compute-0 nova_compute[243704]: </secret>
Dec 13 04:14:09 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.766 243708 DEBUG nova.virt.libvirt.vif [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1738934152',display_name='tempest-TestVolumeBootPattern-server-1738934152',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1738934152',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-yin0go4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:04Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=29c70ba3-89c9-4615-a1f0-22a3ad7145f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.768 243708 DEBUG nova.network.os_vif_util [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.769 243708 DEBUG nova.network.os_vif_util [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:fd:c9,bridge_name='br-int',has_traffic_filtering=True,id=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad43786-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.773 243708 DEBUG nova.objects.instance [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'pci_devices' on Instance uuid 29c70ba3-89c9-4615-a1f0-22a3ad7145f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.788 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <uuid>29c70ba3-89c9-4615-a1f0-22a3ad7145f8</uuid>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <name>instance-00000007</name>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBootPattern-server-1738934152</nova:name>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:14:08</nova:creationTime>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:user uuid="9b8c4a2342e4420d8140b403edbcba5a">tempest-TestVolumeBootPattern-236547311-project-member</nova:user>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:project uuid="27927978f9684df1a72cecb32505e93b">tempest-TestVolumeBootPattern-236547311</nova:project>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <nova:port uuid="4ad43786-f24e-4f34-b3d9-ab5bf8f754c2">
Dec 13 04:14:09 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <system>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <entry name="serial">29c70ba3-89c9-4615-a1f0-22a3ad7145f8</entry>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <entry name="uuid">29c70ba3-89c9-4615-a1f0-22a3ad7145f8</entry>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </system>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <os>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </os>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <features>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </features>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/29c70ba3-89c9-4615-a1f0-22a3ad7145f8_disk.config">
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-dbb792b2-7bd3-4b6f-9dc1-297684fbaa92">
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <serial>dbb792b2-7bd3-4b6f-9dc1-297684fbaa92</serial>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <encryption format="luks">
Dec 13 04:14:09 compute-0 nova_compute[243704]:         <secret type="passphrase" uuid="d2a3e9ee-1368-4435-8e92-fac20ec0a51f"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       </encryption>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:21:fd:c9"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <target dev="tap4ad43786-f2"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/console.log" append="off"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <video>
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </video>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:14:09 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:14:09 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:14:09 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:14:09 compute-0 nova_compute[243704]: </domain>
Dec 13 04:14:09 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.790 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Preparing to wait for external event network-vif-plugged-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.790 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.790 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.791 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.792 243708 DEBUG nova.virt.libvirt.vif [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1738934152',display_name='tempest-TestVolumeBootPattern-server-1738934152',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1738934152',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-yin0go4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:04Z,user_data=None,us
er_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=29c70ba3-89c9-4615-a1f0-22a3ad7145f8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.792 243708 DEBUG nova.network.os_vif_util [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.793 243708 DEBUG nova.network.os_vif_util [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:fd:c9,bridge_name='br-int',has_traffic_filtering=True,id=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad43786-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.794 243708 DEBUG os_vif [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:fd:c9,bridge_name='br-int',has_traffic_filtering=True,id=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad43786-f2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.794 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.795 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.795 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.800 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.801 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ad43786-f2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.802 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4ad43786-f2, col_values=(('external_ids', {'iface-id': '4ad43786-f24e-4f34-b3d9-ab5bf8f754c2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:21:fd:c9', 'vm-uuid': '29c70ba3-89c9-4615-a1f0-22a3ad7145f8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.805 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:09 compute-0 NetworkManager[48899]: <info>  [1765599249.8069] manager: (tap4ad43786-f2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.809 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.817 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.819 243708 INFO os_vif [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:fd:c9,bridge_name='br-int',has_traffic_filtering=True,id=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad43786-f2')
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.861 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.862 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.862 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No VIF found with MAC fa:16:3e:21:fd:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.863 243708 INFO nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Using config drive
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.899 243708 DEBUG nova.storage.rbd_utils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 29c70ba3-89c9-4615-a1f0-22a3ad7145f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.908 243708 DEBUG nova.network.neutron [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Updated VIF entry in instance network info cache for port 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.909 243708 DEBUG nova.network.neutron [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Updating instance_info_cache with network_info: [{"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:09 compute-0 nova_compute[243704]: 2025-12-13 04:14:09.940 243708 DEBUG oslo_concurrency.lockutils [req-2223713c-c26e-4ce1-984e-866c5d3d0b9e req-ac54f0a7-2b2d-4a38-ae51-5974c524f995 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-29c70ba3-89c9-4615-a1f0-22a3ad7145f8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.298 243708 INFO nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Creating config drive at /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/disk.config
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.305 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbuwj6klt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.440 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbuwj6klt" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.467 243708 DEBUG nova.storage.rbd_utils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 29c70ba3-89c9-4615-a1f0-22a3ad7145f8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.471 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/disk.config 29c70ba3-89c9-4615-a1f0-22a3ad7145f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.608 243708 DEBUG oslo_concurrency.processutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/disk.config 29c70ba3-89c9-4615-a1f0-22a3ad7145f8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.609 243708 INFO nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Deleting local config drive /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8/disk.config because it was imported into RBD.
Dec 13 04:14:10 compute-0 kernel: tap4ad43786-f2: entered promiscuous mode
Dec 13 04:14:10 compute-0 NetworkManager[48899]: <info>  [1765599250.6815] manager: (tap4ad43786-f2): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Dec 13 04:14:10 compute-0 ovn_controller[145204]: 2025-12-13T04:14:10Z|00078|binding|INFO|Claiming lport 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 for this chassis.
Dec 13 04:14:10 compute-0 ovn_controller[145204]: 2025-12-13T04:14:10Z|00079|binding|INFO|4ad43786-f24e-4f34-b3d9-ab5bf8f754c2: Claiming fa:16:3e:21:fd:c9 10.100.0.11
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.681 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.685 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:10 compute-0 ceph-mon[75071]: pgmap v1117: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 4.8 MiB/s wr, 106 op/s
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.693 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.702 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:fd:c9 10.100.0.11'], port_security=['fa:16:3e:21:fd:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '29c70ba3-89c9-4615-a1f0-22a3ad7145f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a51e76ad-e401-4d68-b2f5-a9d28269b3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.703 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 bound to our chassis
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.706 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:14:10 compute-0 systemd-udevd[257159]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.720 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[991c8fc2-c2ed-4544-842c-36b151ae43d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.722 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc553cd2-51 in ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:14:10 compute-0 systemd-machined[206767]: New machine qemu-7-instance-00000007.
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.725 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc553cd2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.726 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6900cfed-8553-4ca5-b40b-cf4c17372192]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.727 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3c278bc0-d86c-423d-9167-11f0c3929e93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 NetworkManager[48899]: <info>  [1765599250.7299] device (tap4ad43786-f2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:14:10 compute-0 NetworkManager[48899]: <info>  [1765599250.7307] device (tap4ad43786-f2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.744 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1107b1-92ec-46b6-aa80-45fca9e2fcae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.762 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:10 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Dec 13 04:14:10 compute-0 ovn_controller[145204]: 2025-12-13T04:14:10Z|00080|binding|INFO|Setting lport 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 ovn-installed in OVS
Dec 13 04:14:10 compute-0 ovn_controller[145204]: 2025-12-13T04:14:10Z|00081|binding|INFO|Setting lport 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 up in Southbound
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.767 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.773 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e1aaa3d9-4350-4ede-9030-748588c36b35]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.809 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[475c0eae-5c11-412c-a659-3922c476f817]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 systemd-udevd[257163]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.817 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[94d829b7-f41c-40b4-90c7-38e7262fe0bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 NetworkManager[48899]: <info>  [1765599250.8187] manager: (tapfc553cd2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.847 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[1e585499-ff99-4255-9e41-4a34c3b603ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.851 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b66338e2-8426-4faa-805a-b21541b788d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 nova_compute[243704]: 2025-12-13 04:14:10.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:10 compute-0 NetworkManager[48899]: <info>  [1765599250.8794] device (tapfc553cd2-50): carrier: link connected
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.883 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[93c71796-d25c-42ae-b1ee-9005eb740f9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.900 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f083a1a0-0c8e-422c-9050-31fced2091ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398955, 'reachable_time': 23106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257193, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.914 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e71e884e-e85e-4f70-aa83-4bd604f936b6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:ae9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 398955, 'tstamp': 398955}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257194, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.928 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5ea46e26-72df-4d2a-9b6b-23ceac851297]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 30], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398955, 'reachable_time': 23106, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257195, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:10.953 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[714f0d01-92c6-479f-8fcb-512735a6eb73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.010 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f73e877a-db0c-4eda-9a8c-607dfa3c406b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.011 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.012 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.012 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:11 compute-0 NetworkManager[48899]: <info>  [1765599251.0142] manager: (tapfc553cd2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.013 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:11 compute-0 kernel: tapfc553cd2-50: entered promiscuous mode
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.016 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.017 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:11 compute-0 ovn_controller[145204]: 2025-12-13T04:14:11Z|00082|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.033 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.034 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.035 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5dae3249-7e67-48ba-90c0-6e69d9c87540]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.035 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:14:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:11.036 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'env', 'PROCESS_TAG=haproxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:14:11 compute-0 podman[257262]: 2025-12-13 04:14:11.407763849 +0000 UTC m=+0.054093819 container create b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:14:11 compute-0 systemd[1]: Started libpod-conmon-b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b.scope.
Dec 13 04:14:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 94 op/s
Dec 13 04:14:11 compute-0 podman[257262]: 2025-12-13 04:14:11.384166109 +0000 UTC m=+0.030496099 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:14:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb2a390d7684cd0e11c19b1e0a5ef7f15944bb5faa1a6a26e2dc7bb42b6d18b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:11 compute-0 podman[257262]: 2025-12-13 04:14:11.492444528 +0000 UTC m=+0.138774518 container init b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:14:11 compute-0 podman[257262]: 2025-12-13 04:14:11.496843087 +0000 UTC m=+0.143173057 container start b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:14:11 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[257277]: [NOTICE]   (257281) : New worker (257283) forked
Dec 13 04:14:11 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[257277]: [NOTICE]   (257281) : Loading success.
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.892 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.893 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.894 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.895 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.911 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.912 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.913 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.913 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:14:11 compute-0 nova_compute[243704]: 2025-12-13 04:14:11.914 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1268416946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.470 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.489 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.540 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.541 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:14:12 compute-0 ceph-mon[75071]: pgmap v1118: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 94 op/s
Dec 13 04:14:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1268416946' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.762 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.764 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4505MB free_disk=59.98819127306342GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.764 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.765 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.842 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 29c70ba3-89c9-4615-a1f0-22a3ad7145f8 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.843 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.844 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.872 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing inventories for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.900 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating ProviderTree inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.901 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.926 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing aggregate associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.953 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing trait associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 04:14:12 compute-0 nova_compute[243704]: 2025-12-13 04:14:12.987 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 94 op/s
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.480 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599253.4797204, 29c70ba3-89c9-4615-a1f0-22a3ad7145f8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.481 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] VM Started (Lifecycle Event)
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.511 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.515 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599253.48435, 29c70ba3-89c9-4615-a1f0-22a3ad7145f8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.516 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] VM Paused (Lifecycle Event)
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.532 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.534 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/613093822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.551 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.561 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.567 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.583 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.622 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:14:13 compute-0 nova_compute[243704]: 2025-12-13 04:14:13.623 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/613093822' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:13 compute-0 podman[257342]: 2025-12-13 04:14:13.917866603 +0000 UTC m=+0.064492632 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.606 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:14 compute-0 ceph-mon[75071]: pgmap v1119: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 4.3 MiB/s wr, 94 op/s
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.733 243708 DEBUG nova.compute.manager [req-68210019-e440-42c3-8ca0-e98743454ea1 req-e29f4442-ca04-4448-b0f8-dede810a0247 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Received event network-vif-plugged-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.734 243708 DEBUG oslo_concurrency.lockutils [req-68210019-e440-42c3-8ca0-e98743454ea1 req-e29f4442-ca04-4448-b0f8-dede810a0247 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.734 243708 DEBUG oslo_concurrency.lockutils [req-68210019-e440-42c3-8ca0-e98743454ea1 req-e29f4442-ca04-4448-b0f8-dede810a0247 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.734 243708 DEBUG oslo_concurrency.lockutils [req-68210019-e440-42c3-8ca0-e98743454ea1 req-e29f4442-ca04-4448-b0f8-dede810a0247 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.734 243708 DEBUG nova.compute.manager [req-68210019-e440-42c3-8ca0-e98743454ea1 req-e29f4442-ca04-4448-b0f8-dede810a0247 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Processing event network-vif-plugged-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.735 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.742 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599254.741683, 29c70ba3-89c9-4615-a1f0-22a3ad7145f8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.744 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] VM Resumed (Lifecycle Event)
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.747 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.754 243708 INFO nova.virt.libvirt.driver [-] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Instance spawned successfully.
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.755 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.781 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.788 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.788 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.789 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.789 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.789 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.790 243708 DEBUG nova.virt.libvirt.driver [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.797 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.806 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.823 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.856 243708 INFO nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Took 8.69 seconds to spawn the instance on the hypervisor.
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.857 243708 DEBUG nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.907 243708 INFO nova.compute.manager [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Took 12.00 seconds to build instance.
Dec 13 04:14:14 compute-0 nova_compute[243704]: 2025-12-13 04:14:14.921 243708 DEBUG oslo_concurrency.lockutils [None req-6928974f-eff3-4128-9158-f19c3eeb0747 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.538 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.539 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.556 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.634 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.635 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.644 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.645 243708 INFO nova.compute.claims [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.780 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:15 compute-0 nova_compute[243704]: 2025-12-13 04:14:15.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:16 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3593534811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.316 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.325 243708 DEBUG nova.compute.provider_tree [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.342 243708 DEBUG nova.scheduler.client.report [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.366 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.731s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.367 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.410 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.411 243708 DEBUG nova.network.neutron [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.437 243708 INFO nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.457 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.497 243708 INFO nova.virt.block_device [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Booting with volume 102baa25-48d0-45c5-babd-916e42110eee at /dev/vda
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.619 243708 DEBUG os_brick.utils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.622 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.635 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.636 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[a17f489e-a2f0-4bf2-aae3-1eba76eb79a4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.637 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.644 243708 DEBUG nova.policy [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '873a37f2f9d84afe9b5a4fe8861d0832', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cd7324f82be24328bd8a9643cc9032d8', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.647 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.648 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[757b6354-bbef-49a2-825a-2f43875a78d7]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.649 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.662 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.663 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[6838a29a-38c8-426b-b994-588ba123b610]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.665 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[67817523-95a2-439b-94c6-85a394590a7d]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.666 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.696 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.699 243708 DEBUG os_brick.initiator.connectors.lightos [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.699 243708 DEBUG os_brick.initiator.connectors.lightos [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.700 243708 DEBUG os_brick.initiator.connectors.lightos [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.700 243708 DEBUG os_brick.utils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] <== get_connector_properties: return (79ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.700 243708 DEBUG nova.virt.block_device [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating existing volume attachment record: 4ef39ea5-a630-4c4f-bcba-ed5d5fe3a839 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:14:16 compute-0 ceph-mon[75071]: pgmap v1120: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 54 op/s
Dec 13 04:14:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3593534811' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.892 243708 DEBUG nova.compute.manager [req-a3197253-6ddd-4b5e-94f2-78fc52f85eb3 req-7bf78828-54ff-4a77-9952-86ee5fbbd3b6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Received event network-vif-plugged-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.893 243708 DEBUG oslo_concurrency.lockutils [req-a3197253-6ddd-4b5e-94f2-78fc52f85eb3 req-7bf78828-54ff-4a77-9952-86ee5fbbd3b6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.893 243708 DEBUG oslo_concurrency.lockutils [req-a3197253-6ddd-4b5e-94f2-78fc52f85eb3 req-7bf78828-54ff-4a77-9952-86ee5fbbd3b6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.893 243708 DEBUG oslo_concurrency.lockutils [req-a3197253-6ddd-4b5e-94f2-78fc52f85eb3 req-7bf78828-54ff-4a77-9952-86ee5fbbd3b6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.894 243708 DEBUG nova.compute.manager [req-a3197253-6ddd-4b5e-94f2-78fc52f85eb3 req-7bf78828-54ff-4a77-9952-86ee5fbbd3b6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] No waiting events found dispatching network-vif-plugged-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:16 compute-0 nova_compute[243704]: 2025-12-13 04:14:16.894 243708 WARNING nova.compute.manager [req-a3197253-6ddd-4b5e-94f2-78fc52f85eb3 req-7bf78828-54ff-4a77-9952-86ee5fbbd3b6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Received unexpected event network-vif-plugged-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 for instance with vm_state active and task_state None.
Dec 13 04:14:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2460976570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 50 op/s
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.474 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.515 243708 DEBUG nova.network.neutron [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Successfully created port: fd18f992-6376-4850-a95a-3f4ad2cbe95c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.722 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.725 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.725 243708 INFO nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Creating image(s)
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.726 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.726 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Ensure instance console log exists: /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.726 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.727 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.727 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:17 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2460976570' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:17 compute-0 nova_compute[243704]: 2025-12-13 04:14:17.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.121 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.122 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.122 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.122 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.123 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.124 243708 INFO nova.compute.manager [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Terminating instance
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.125 243708 DEBUG nova.compute.manager [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:14:18 compute-0 kernel: tap4ad43786-f2 (unregistering): left promiscuous mode
Dec 13 04:14:18 compute-0 NetworkManager[48899]: <info>  [1765599258.1681] device (tap4ad43786-f2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:18 compute-0 ovn_controller[145204]: 2025-12-13T04:14:18Z|00083|binding|INFO|Releasing lport 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 from this chassis (sb_readonly=0)
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.185 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 ovn_controller[145204]: 2025-12-13T04:14:18Z|00084|binding|INFO|Setting lport 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 down in Southbound
Dec 13 04:14:18 compute-0 ovn_controller[145204]: 2025-12-13T04:14:18Z|00085|binding|INFO|Removing iface tap4ad43786-f2 ovn-installed in OVS
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.189 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.195 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:21:fd:c9 10.100.0.11'], port_security=['fa:16:3e:21:fd:c9 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '29c70ba3-89c9-4615-a1f0-22a3ad7145f8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a51e76ad-e401-4d68-b2f5-a9d28269b3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.196 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.198 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.199 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[526e80ce-d066-4630-b4f0-3749308e9a58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.200 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace which is not needed anymore
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.208 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.214 243708 DEBUG nova.network.neutron [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Successfully updated port: fd18f992-6376-4850-a95a-3f4ad2cbe95c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:14:18 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.228 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.229 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquired lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:18 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 3.117s CPU time.
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.229 243708 DEBUG nova.network.neutron [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:14:18 compute-0 systemd-machined[206767]: Machine qemu-7-instance-00000007 terminated.
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.346 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.351 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.359 243708 INFO nova.virt.libvirt.driver [-] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Instance destroyed successfully.
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.360 243708 DEBUG nova.objects.instance [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'resources' on Instance uuid 29c70ba3-89c9-4615-a1f0-22a3ad7145f8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.373 243708 DEBUG nova.virt.libvirt.vif [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:14:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1738934152',display_name='tempest-TestVolumeBootPattern-server-1738934152',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1738934152',id=7,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:14:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-yin0go4j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-mem
ber'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:14:14Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=29c70ba3-89c9-4615-a1f0-22a3ad7145f8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.373 243708 DEBUG nova.network.os_vif_util [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "address": "fa:16:3e:21:fd:c9", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4ad43786-f2", "ovs_interfaceid": "4ad43786-f24e-4f34-b3d9-ab5bf8f754c2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.374 243708 DEBUG nova.network.os_vif_util [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:21:fd:c9,bridge_name='br-int',has_traffic_filtering=True,id=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad43786-f2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.375 243708 DEBUG os_vif [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:fd:c9,bridge_name='br-int',has_traffic_filtering=True,id=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad43786-f2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.380 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.380 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ad43786-f2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.383 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.385 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.389 243708 INFO os_vif [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:21:fd:c9,bridge_name='br-int',has_traffic_filtering=True,id=4ad43786-f24e-4f34-b3d9-ab5bf8f754c2,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4ad43786-f2')
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.455 243708 DEBUG nova.network.neutron [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:14:18 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[257277]: [NOTICE]   (257281) : haproxy version is 2.8.14-c23fe91
Dec 13 04:14:18 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[257277]: [NOTICE]   (257281) : path to executable is /usr/sbin/haproxy
Dec 13 04:14:18 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[257277]: [WARNING]  (257281) : Exiting Master process...
Dec 13 04:14:18 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[257277]: [ALERT]    (257281) : Current worker (257283) exited with code 143 (Terminated)
Dec 13 04:14:18 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[257277]: [WARNING]  (257281) : All workers exited. Exiting... (0)
Dec 13 04:14:18 compute-0 systemd[1]: libpod-b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b.scope: Deactivated successfully.
Dec 13 04:14:18 compute-0 podman[257424]: 2025-12-13 04:14:18.471869176 +0000 UTC m=+0.051661534 container died b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b-userdata-shm.mount: Deactivated successfully.
Dec 13 04:14:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-ceb2a390d7684cd0e11c19b1e0a5ef7f15944bb5faa1a6a26e2dc7bb42b6d18b-merged.mount: Deactivated successfully.
Dec 13 04:14:18 compute-0 podman[257424]: 2025-12-13 04:14:18.515875561 +0000 UTC m=+0.095667919 container cleanup b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:14:18 compute-0 systemd[1]: libpod-conmon-b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b.scope: Deactivated successfully.
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.550 243708 INFO nova.virt.libvirt.driver [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Deleting instance files /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8_del
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.551 243708 INFO nova.virt.libvirt.driver [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Deletion of /var/lib/nova/instances/29c70ba3-89c9-4615-a1f0-22a3ad7145f8_del complete
Dec 13 04:14:18 compute-0 podman[257474]: 2025-12-13 04:14:18.588033289 +0000 UTC m=+0.045232329 container remove b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.594 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[57bb4423-0e86-44be-91bf-0b66edc66317]: (4, ('Sat Dec 13 04:14:18 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b)\nb4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b\nSat Dec 13 04:14:18 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (b4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b)\nb4838d8e76877588616599e829bc93aa6ebf163b77352a8ea6072754f41dbe5b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.596 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4965e07c-24c1-4e1e-8505-b6cfdb88fe8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.597 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:18 compute-0 kernel: tapfc553cd2-50: left promiscuous mode
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.600 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.607 243708 INFO nova.compute.manager [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Took 0.48 seconds to destroy the instance on the hypervisor.
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.608 243708 DEBUG oslo.service.loopingcall [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.608 243708 DEBUG nova.compute.manager [-] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.608 243708 DEBUG nova.network.neutron [-] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.616 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.620 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8939cb39-200f-4c8e-9eec-4ebb6e7c7dd6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.642 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6a17425b-a232-4027-b809-5d0af08f5cef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.643 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5e6975a6-8301-4922-8276-5414b8812f10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.667 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7c25e255-3a00-4e33-9fd1-8292caa11d3f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 398948, 'reachable_time': 17060, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257490, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.670 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:14:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:18.670 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[c0bed36b-dc9f-4bf3-994a-144498716000]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:18 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc553cd2\x2d5dd5\x2d4d87\x2d97af\x2d4b4eeb4ca0b0.mount: Deactivated successfully.
Dec 13 04:14:18 compute-0 ceph-mon[75071]: pgmap v1121: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 50 op/s
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.871 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.970 243708 DEBUG nova.compute.manager [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.971 243708 DEBUG nova.compute.manager [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing instance network info cache due to event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:18 compute-0 nova_compute[243704]: 2025-12-13 04:14:18.971 243708 DEBUG oslo_concurrency.lockutils [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.548 243708 DEBUG nova.network.neutron [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating instance_info_cache with network_info: [{"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.567 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Releasing lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.568 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Instance network_info: |[{"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.568 243708 DEBUG oslo_concurrency.lockutils [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.569 243708 DEBUG nova.network.neutron [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.572 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Start _get_guest_xml network_info=[{"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-102baa25-48d0-45c5-babd-916e42110eee', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '102baa25-48d0-45c5-babd-916e42110eee', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '55c4c422-4f9d-419b-90e2-15b632b4b37b', 'attached_at': '', 'detached_at': '', 'volume_id': '102baa25-48d0-45c5-babd-916e42110eee', 'serial': '102baa25-48d0-45c5-babd-916e42110eee'}, 'disk_bus': 'virtio', 'attachment_id': '4ef39ea5-a630-4c4f-bcba-ed5d5fe3a839', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.579 243708 WARNING nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.589 243708 DEBUG nova.virt.libvirt.host [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.590 243708 DEBUG nova.virt.libvirt.host [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.593 243708 DEBUG nova.virt.libvirt.host [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.594 243708 DEBUG nova.virt.libvirt.host [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.595 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.595 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.596 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.596 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.597 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.597 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.598 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.598 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.598 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.599 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.599 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.600 243708 DEBUG nova.virt.hardware [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.632 243708 DEBUG nova.storage.rbd_utils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] rbd image 55c4c422-4f9d-419b-90e2-15b632b4b37b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.637 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.685 243708 DEBUG nova.network.neutron [-] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.705 243708 INFO nova.compute.manager [-] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Took 1.10 seconds to deallocate network for instance.
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.819 243708 DEBUG nova.compute.manager [req-e3abd1eb-9d85-4f60-a910-6fa2e76bec54 req-89b70476-ae09-4c4f-8960-8502d71b4b16 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Received event network-vif-deleted-4ad43786-f24e-4f34-b3d9-ab5bf8f754c2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.887 243708 INFO nova.compute.manager [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Took 0.18 seconds to detach 1 volumes for instance.
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.933 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:19 compute-0 nova_compute[243704]: 2025-12-13 04:14:19.934 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.002 243708 DEBUG oslo_concurrency.processutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/607408074' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.212 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.240 243708 DEBUG nova.virt.libvirt.vif [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1076716418',display_name='tempest-TestVolumeBackupRestore-server-1076716418',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1076716418',id=8,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwCfywGeX4KNWbwl/YWcIkB0JHthzxVSuDxRj+tFh1zCyCWmDXmFfuCy9/NO9tpLdJV+XuQs2LSQu3i8HmzFrc5PfbkJfnyVqG69i1o8kyTv1xbZEHG5R5XpGI/cRXEjA==',key_name='tempest-TestVolumeBackupRestore-66644377',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd7324f82be24328bd8a9643cc9032d8',ramdisk_id='',reservation_id='r-jt2u8rlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-978965736',owner_user_name='tempest-TestVolumeBackupRestore-978965736-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:16Z,user_data=None,user_id='873a37f2f9d84afe9b5a4fe8861d0832',uuid=55c4c422-4f9d-419b-90e2-15b632b4b37b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.241 243708 DEBUG nova.network.os_vif_util [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Converting VIF {"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.242 243708 DEBUG nova.network.os_vif_util [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:26:b7,bridge_name='br-int',has_traffic_filtering=True,id=fd18f992-6376-4850-a95a-3f4ad2cbe95c,network=Network(41eda195-3065-4521-82db-3eddd497e5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd18f992-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.245 243708 DEBUG nova.objects.instance [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lazy-loading 'pci_devices' on Instance uuid 55c4c422-4f9d-419b-90e2-15b632b4b37b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.259 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <uuid>55c4c422-4f9d-419b-90e2-15b632b4b37b</uuid>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <name>instance-00000008</name>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBackupRestore-server-1076716418</nova:name>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:14:19</nova:creationTime>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:user uuid="873a37f2f9d84afe9b5a4fe8861d0832">tempest-TestVolumeBackupRestore-978965736-project-member</nova:user>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:project uuid="cd7324f82be24328bd8a9643cc9032d8">tempest-TestVolumeBackupRestore-978965736</nova:project>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <nova:port uuid="fd18f992-6376-4850-a95a-3f4ad2cbe95c">
Dec 13 04:14:20 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <system>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <entry name="serial">55c4c422-4f9d-419b-90e2-15b632b4b37b</entry>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <entry name="uuid">55c4c422-4f9d-419b-90e2-15b632b4b37b</entry>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </system>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <os>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   </os>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <features>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   </features>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/55c4c422-4f9d-419b-90e2-15b632b4b37b_disk.config">
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-102baa25-48d0-45c5-babd-916e42110eee">
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:20 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <serial>102baa25-48d0-45c5-babd-916e42110eee</serial>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:3f:26:b7"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <target dev="tapfd18f992-63"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/console.log" append="off"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <video>
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </video>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:14:20 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:14:20 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:14:20 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:14:20 compute-0 nova_compute[243704]: </domain>
Dec 13 04:14:20 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.261 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Preparing to wait for external event network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.261 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.262 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.262 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.263 243708 DEBUG nova.virt.libvirt.vif [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1076716418',display_name='tempest-TestVolumeBackupRestore-server-1076716418',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1076716418',id=8,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwCfywGeX4KNWbwl/YWcIkB0JHthzxVSuDxRj+tFh1zCyCWmDXmFfuCy9/NO9tpLdJV+XuQs2LSQu3i8HmzFrc5PfbkJfnyVqG69i1o8kyTv1xbZEHG5R5XpGI/cRXEjA==',key_name='tempest-TestVolumeBackupRestore-66644377',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cd7324f82be24328bd8a9643cc9032d8',ramdisk_id='',reservation_id='r-jt2u8rlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBackupRestore-978965736',owner_user_name='tempest-TestVolumeBackupRestore-978965736-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:16Z,user_data=None,user_id='873a37f2f9d84afe9b5a4fe8861d0832',uuid=55c4c422-4f9d-419b-90e2-15b632b4b37b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.264 243708 DEBUG nova.network.os_vif_util [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Converting VIF {"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.264 243708 DEBUG nova.network.os_vif_util [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3f:26:b7,bridge_name='br-int',has_traffic_filtering=True,id=fd18f992-6376-4850-a95a-3f4ad2cbe95c,network=Network(41eda195-3065-4521-82db-3eddd497e5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd18f992-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.265 243708 DEBUG os_vif [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:26:b7,bridge_name='br-int',has_traffic_filtering=True,id=fd18f992-6376-4850-a95a-3f4ad2cbe95c,network=Network(41eda195-3065-4521-82db-3eddd497e5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd18f992-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.266 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.266 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.267 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.269 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.270 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfd18f992-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.270 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfd18f992-63, col_values=(('external_ids', {'iface-id': 'fd18f992-6376-4850-a95a-3f4ad2cbe95c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3f:26:b7', 'vm-uuid': '55c4c422-4f9d-419b-90e2-15b632b4b37b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.272 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:20 compute-0 NetworkManager[48899]: <info>  [1765599260.2736] manager: (tapfd18f992-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.278 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.280 243708 INFO os_vif [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3f:26:b7,bridge_name='br-int',has_traffic_filtering=True,id=fd18f992-6376-4850-a95a-3f4ad2cbe95c,network=Network(41eda195-3065-4521-82db-3eddd497e5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd18f992-63')
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.328 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.328 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.329 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] No VIF found with MAC fa:16:3e:3f:26:b7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.329 243708 INFO nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Using config drive
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.355 243708 DEBUG nova.storage.rbd_utils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] rbd image 55c4c422-4f9d-419b-90e2-15b632b4b37b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2408796344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.561 243708 DEBUG oslo_concurrency.processutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.568 243708 DEBUG nova.network.neutron [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updated VIF entry in instance network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.569 243708 DEBUG nova.network.neutron [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating instance_info_cache with network_info: [{"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.571 243708 DEBUG nova.compute.provider_tree [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.591 243708 DEBUG nova.scheduler.client.report [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.595 243708 DEBUG oslo_concurrency.lockutils [req-46a415e5-f8af-450b-aab7-2c537bfc0c3a req-07439f6b-3e8d-4dc8-b23f-1083808e99e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.610 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.611 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.612 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.629 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.637 243708 INFO nova.scheduler.client.report [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Deleted allocations for instance 29c70ba3-89c9-4615-a1f0-22a3ad7145f8
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.669 243708 INFO nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Creating config drive at /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/disk.config
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.674 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzvrcrwpw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.704 243708 DEBUG oslo_concurrency.lockutils [None req-e8007586-fcf5-441d-8651-b5ed1015215e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "29c70ba3-89c9-4615-a1f0-22a3ad7145f8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.729 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.729 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.735 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.735 243708 INFO nova.compute.claims [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:14:20 compute-0 ceph-mon[75071]: pgmap v1122: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 59 op/s
Dec 13 04:14:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/607408074' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2408796344' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.802 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzvrcrwpw" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.828 243708 DEBUG nova.storage.rbd_utils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] rbd image 55c4c422-4f9d-419b-90e2-15b632b4b37b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.831 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/disk.config 55c4c422-4f9d-419b-90e2-15b632b4b37b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.889 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.976 243708 DEBUG oslo_concurrency.processutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/disk.config 55c4c422-4f9d-419b-90e2-15b632b4b37b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:20 compute-0 nova_compute[243704]: 2025-12-13 04:14:20.976 243708 INFO nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Deleting local config drive /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b/disk.config because it was imported into RBD.
Dec 13 04:14:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:21 compute-0 kernel: tapfd18f992-63: entered promiscuous mode
Dec 13 04:14:21 compute-0 systemd-udevd[257397]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:21 compute-0 ovn_controller[145204]: 2025-12-13T04:14:21Z|00086|binding|INFO|Claiming lport fd18f992-6376-4850-a95a-3f4ad2cbe95c for this chassis.
Dec 13 04:14:21 compute-0 ovn_controller[145204]: 2025-12-13T04:14:21Z|00087|binding|INFO|fd18f992-6376-4850-a95a-3f4ad2cbe95c: Claiming fa:16:3e:3f:26:b7 10.100.0.4
Dec 13 04:14:21 compute-0 NetworkManager[48899]: <info>  [1765599261.0396] manager: (tapfd18f992-63): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.040 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.043 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 NetworkManager[48899]: <info>  [1765599261.0490] device (tapfd18f992-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:14:21 compute-0 NetworkManager[48899]: <info>  [1765599261.0499] device (tapfd18f992-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.060 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:26:b7 10.100.0.4'], port_security=['fa:16:3e:3f:26:b7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '55c4c422-4f9d-419b-90e2-15b632b4b37b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41eda195-3065-4521-82db-3eddd497e5cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd7324f82be24328bd8a9643cc9032d8', 'neutron:revision_number': '2', 'neutron:security_group_ids': '4191713e-873d-4eb6-b762-8df3f67a8d44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64388f9f-7b49-475a-b3b3-2a91942711f1, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=fd18f992-6376-4850-a95a-3f4ad2cbe95c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.063 154842 INFO neutron.agent.ovn.metadata.agent [-] Port fd18f992-6376-4850-a95a-3f4ad2cbe95c in datapath 41eda195-3065-4521-82db-3eddd497e5cd bound to our chassis
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.066 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 41eda195-3065-4521-82db-3eddd497e5cd
Dec 13 04:14:21 compute-0 systemd-machined[206767]: New machine qemu-8-instance-00000008.
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.078 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4b98a794-37de-4875-9069-7d7fab317d93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.079 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap41eda195-31 in ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.081 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap41eda195-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.081 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[57aaa964-fd3f-4601-a61f-6470dd9ebfd8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.082 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a747cb-381d-410e-86b2-f3130582737b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.104 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[eb0d492a-7ce4-438f-aa2f-134d58df338e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.116 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 ovn_controller[145204]: 2025-12-13T04:14:21Z|00088|binding|INFO|Setting lport fd18f992-6376-4850-a95a-3f4ad2cbe95c ovn-installed in OVS
Dec 13 04:14:21 compute-0 ovn_controller[145204]: 2025-12-13T04:14:21Z|00089|binding|INFO|Setting lport fd18f992-6376-4850-a95a-3f4ad2cbe95c up in Southbound
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.122 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.126 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6b18de5d-d60d-442b-8936-d67f09a0323a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.165 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[4c9ed3db-010a-4d34-b661-132dc182b256]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 NetworkManager[48899]: <info>  [1765599261.1779] manager: (tap41eda195-30): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.176 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[cd90ba92-7182-47e0-9d80-91ebbedff46c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 systemd-udevd[257660]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.220 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[2a587a78-9a34-463f-ac0d-af6762c3ff57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.223 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[d7598611-d3b1-4d68-ac9f-0b2bf76e28a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 NetworkManager[48899]: <info>  [1765599261.2557] device (tap41eda195-30): carrier: link connected
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.263 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[05bca87d-8968-40d1-84f3-f0f12fb3cb27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.283 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b18111a6-b79f-4d84-b6d7-7a3b0e100b5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41eda195-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:88:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399993, 'reachable_time': 26903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 257679, 'error': None, 'target': 'ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.296 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a02ac2-71d7-4173-9053-83ad41792d9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fefd:88a8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 399993, 'tstamp': 399993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 257680, 'error': None, 'target': 'ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.314 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[46febc44-bf60-44b1-91b9-5f29ed897bed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap41eda195-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:fd:88:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399993, 'reachable_time': 26903, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 257681, 'error': None, 'target': 'ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.342 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8b1c05-e769-40d1-a407-50c6e1812ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.404 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1f40635c-88c4-4ffe-9043-ec37b137cdca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:14:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/913254169' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:14:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/913254169' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.407 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41eda195-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.409 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.410 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41eda195-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.413 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 kernel: tap41eda195-30: entered promiscuous mode
Dec 13 04:14:21 compute-0 NetworkManager[48899]: <info>  [1765599261.4143] manager: (tap41eda195-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.417 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.419 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap41eda195-30, col_values=(('external_ids', {'iface-id': '62dadfbb-b230-47ec-bb1f-540557aed3ae'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:21 compute-0 ovn_controller[145204]: 2025-12-13T04:14:21Z|00090|binding|INFO|Releasing lport 62dadfbb-b230-47ec-bb1f-540557aed3ae from this chassis (sb_readonly=0)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.421 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2623912641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.442 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.443 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/41eda195-3065-4521-82db-3eddd497e5cd.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/41eda195-3065-4521-82db-3eddd497e5cd.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.445 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0bfb0bac-c371-459d-a3cb-cdb1d1efecdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.447 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-41eda195-3065-4521-82db-3eddd497e5cd
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/41eda195-3065-4521-82db-3eddd497e5cd.pid.haproxy
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 41eda195-3065-4521-82db-3eddd497e5cd
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:14:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:21.450 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd', 'env', 'PROCESS_TAG=haproxy-41eda195-3065-4521-82db-3eddd497e5cd', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/41eda195-3065-4521-82db-3eddd497e5cd.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:14:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 23 op/s
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.460 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.467 243708 DEBUG nova.compute.provider_tree [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.484 243708 DEBUG nova.scheduler.client.report [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.511 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.512 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.552 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.552 243708 DEBUG nova.network.neutron [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.569 243708 INFO nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.573 243708 DEBUG nova.compute.manager [req-fd607be1-1c37-4d4b-8bdc-3c144b030787 req-dd008b8a-3925-45af-a14a-98393cbb4aae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.574 243708 DEBUG oslo_concurrency.lockutils [req-fd607be1-1c37-4d4b-8bdc-3c144b030787 req-dd008b8a-3925-45af-a14a-98393cbb4aae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.574 243708 DEBUG oslo_concurrency.lockutils [req-fd607be1-1c37-4d4b-8bdc-3c144b030787 req-dd008b8a-3925-45af-a14a-98393cbb4aae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.574 243708 DEBUG oslo_concurrency.lockutils [req-fd607be1-1c37-4d4b-8bdc-3c144b030787 req-dd008b8a-3925-45af-a14a-98393cbb4aae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.574 243708 DEBUG nova.compute.manager [req-fd607be1-1c37-4d4b-8bdc-3c144b030787 req-dd008b8a-3925-45af-a14a-98393cbb4aae 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Processing event network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.591 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.687 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.688 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.689 243708 INFO nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Creating image(s)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.710 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.732 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.753 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/913254169' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/913254169' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2623912641' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.756 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.781 243708 DEBUG nova.policy [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2848adac59524388ba4931e7afd46b47', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae283283ca5a4a4281495561d7b0443a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.784 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599261.708606, 55c4c422-4f9d-419b-90e2-15b632b4b37b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.784 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] VM Started (Lifecycle Event)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.787 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.791 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.794 243708 INFO nova.virt.libvirt.driver [-] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Instance spawned successfully.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.795 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:14:21 compute-0 podman[257808]: 2025-12-13 04:14:21.799805929 +0000 UTC m=+0.041050186 container create c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.808 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.815 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.822 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.823 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.824 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.824 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.825 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.825 243708 DEBUG nova.virt.libvirt.driver [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.828 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.829 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.829 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.830 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:21 compute-0 systemd[1]: Started libpod-conmon-c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4.scope.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.848 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.852 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.874 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.875 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599261.7087402, 55c4c422-4f9d-419b-90e2-15b632b4b37b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.875 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] VM Paused (Lifecycle Event)
Dec 13 04:14:21 compute-0 podman[257808]: 2025-12-13 04:14:21.780786622 +0000 UTC m=+0.022030899 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:14:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4204f78b9f688450c330b97c3ebebe208b41c08b3e291a0ed79f10ccbf275027/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.879 243708 INFO nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Took 4.16 seconds to spawn the instance on the hypervisor.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.879 243708 DEBUG nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.889 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:21 compute-0 podman[257808]: 2025-12-13 04:14:21.891459457 +0000 UTC m=+0.132703764 container init c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.892 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599261.7909024, 55c4c422-4f9d-419b-90e2-15b632b4b37b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.893 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] VM Resumed (Lifecycle Event)
Dec 13 04:14:21 compute-0 podman[257808]: 2025-12-13 04:14:21.898479367 +0000 UTC m=+0.139723634 container start c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.918 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.921 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:21 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [NOTICE]   (257852) : New worker (257869) forked
Dec 13 04:14:21 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [NOTICE]   (257852) : Loading success.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.938 243708 INFO nova.compute.manager [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Took 6.33 seconds to build instance.
Dec 13 04:14:21 compute-0 nova_compute[243704]: 2025-12-13 04:14:21.953 243708 DEBUG oslo_concurrency.lockutils [None req-3278497f-43cf-4816-a0f7-f3849eb9bd24 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.070 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.120 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] resizing rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.194 243708 DEBUG nova.objects.instance [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lazy-loading 'migration_context' on Instance uuid 0ef2f9af-02e7-4df3-860b-d86160b330eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.208 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.208 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Ensure instance console log exists: /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.209 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.209 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.210 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.449 243708 DEBUG nova.network.neutron [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Successfully created port: 635385b5-fbe6-4654-9100-d0f725eb1ee8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:14:22 compute-0 nova_compute[243704]: 2025-12-13 04:14:22.475 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:22 compute-0 ceph-mon[75071]: pgmap v1123: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 23 op/s
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.153 243708 DEBUG nova.network.neutron [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Successfully updated port: 635385b5-fbe6-4654-9100-d0f725eb1ee8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.172 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "refresh_cache-0ef2f9af-02e7-4df3-860b-d86160b330eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.173 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquired lock "refresh_cache-0ef2f9af-02e7-4df3-860b-d86160b330eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.173 243708 DEBUG nova.network.neutron [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.230 243708 DEBUG nova.compute.manager [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received event network-changed-635385b5-fbe6-4654-9100-d0f725eb1ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.230 243708 DEBUG nova.compute.manager [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Refreshing instance network info cache due to event network-changed-635385b5-fbe6-4654-9100-d0f725eb1ee8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.231 243708 DEBUG oslo_concurrency.lockutils [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-0ef2f9af-02e7-4df3-860b-d86160b330eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.329 243708 DEBUG nova.network.neutron [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:14:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 23 op/s
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.647 243708 DEBUG nova.compute.manager [req-132d65d3-3d3a-4493-8797-6c1ce357531d req-6add8878-d308-4cd7-8408-4ee1fdf8cd68 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.648 243708 DEBUG oslo_concurrency.lockutils [req-132d65d3-3d3a-4493-8797-6c1ce357531d req-6add8878-d308-4cd7-8408-4ee1fdf8cd68 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.649 243708 DEBUG oslo_concurrency.lockutils [req-132d65d3-3d3a-4493-8797-6c1ce357531d req-6add8878-d308-4cd7-8408-4ee1fdf8cd68 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.649 243708 DEBUG oslo_concurrency.lockutils [req-132d65d3-3d3a-4493-8797-6c1ce357531d req-6add8878-d308-4cd7-8408-4ee1fdf8cd68 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.650 243708 DEBUG nova.compute.manager [req-132d65d3-3d3a-4493-8797-6c1ce357531d req-6add8878-d308-4cd7-8408-4ee1fdf8cd68 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] No waiting events found dispatching network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:23 compute-0 nova_compute[243704]: 2025-12-13 04:14:23.650 243708 WARNING nova.compute.manager [req-132d65d3-3d3a-4493-8797-6c1ce357531d req-6add8878-d308-4cd7-8408-4ee1fdf8cd68 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received unexpected event network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c for instance with vm_state active and task_state None.
Dec 13 04:14:23 compute-0 sudo[257966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:14:23 compute-0 podman[257954]: 2025-12-13 04:14:23.947582668 +0000 UTC m=+0.095905935 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec 13 04:14:23 compute-0 sudo[257966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:23 compute-0 sudo[257966]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:24 compute-0 sudo[258004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:14:24 compute-0 sudo[258004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.417 243708 DEBUG nova.network.neutron [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Updating instance_info_cache with network_info: [{"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.435 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Releasing lock "refresh_cache-0ef2f9af-02e7-4df3-860b-d86160b330eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.435 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Instance network_info: |[{"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.435 243708 DEBUG oslo_concurrency.lockutils [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-0ef2f9af-02e7-4df3-860b-d86160b330eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.436 243708 DEBUG nova.network.neutron [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Refreshing network info cache for port 635385b5-fbe6-4654-9100-d0f725eb1ee8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.439 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Start _get_guest_xml network_info=[{"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.443 243708 WARNING nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.448 243708 DEBUG nova.virt.libvirt.host [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.449 243708 DEBUG nova.virt.libvirt.host [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.456 243708 DEBUG nova.virt.libvirt.host [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.456 243708 DEBUG nova.virt.libvirt.host [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.457 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.457 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.458 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.458 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.459 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.459 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.459 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.460 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.460 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.460 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.461 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.461 243708 DEBUG nova.virt.hardware [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:14:24 compute-0 nova_compute[243704]: 2025-12-13 04:14:24.464 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:24 compute-0 sudo[258004]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:24 compute-0 sudo[258080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:14:24 compute-0 sudo[258080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:24 compute-0 sudo[258080]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:24 compute-0 sudo[258105]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Dec 13 04:14:24 compute-0 sudo[258105]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:24 compute-0 ceph-mon[75071]: pgmap v1124: 305 pgs: 305 active+clean; 227 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 23 op/s
Dec 13 04:14:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/170308003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.012 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.042 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:25 compute-0 sudo[258105]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.047 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:14:25 compute-0 sudo[258169]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:14:25 compute-0 sudo[258169]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:25 compute-0 sudo[258169]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:25 compute-0 sudo[258196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:14:25 compute-0 sudo[258196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.272 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 273 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 13 04:14:25 compute-0 podman[258249]: 2025-12-13 04:14:25.538013618 +0000 UTC m=+0.053023150 container create b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961759316' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:25 compute-0 systemd[1]: Started libpod-conmon-b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46.scope.
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.595 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.599 243708 DEBUG nova.virt.libvirt.vif [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1838749281',display_name='tempest-VolumesActionsTest-instance-1838749281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1838749281',id=9,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae283283ca5a4a4281495561d7b0443a',ramdisk_id='',reservation_id='r-cy3qz3u4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1326765317',owner_user_name='tempest-VolumesActionsTest-1326765317-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:21Z,user_data=None,user_id='2848adac59524388ba4931e7afd46b47',uuid=0ef2f9af-02e7-4df3-860b-d86160b330eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.599 243708 DEBUG nova.network.os_vif_util [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converting VIF {"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.601 243708 DEBUG nova.network.os_vif_util [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:f0:bc,bridge_name='br-int',has_traffic_filtering=True,id=635385b5-fbe6-4654-9100-d0f725eb1ee8,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap635385b5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.603 243708 DEBUG nova.objects.instance [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ef2f9af-02e7-4df3-860b-d86160b330eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:25 compute-0 podman[258249]: 2025-12-13 04:14:25.515316102 +0000 UTC m=+0.030325634 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.616 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <uuid>0ef2f9af-02e7-4df3-860b-d86160b330eb</uuid>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <name>instance-00000009</name>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesActionsTest-instance-1838749281</nova:name>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:14:24</nova:creationTime>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:user uuid="2848adac59524388ba4931e7afd46b47">tempest-VolumesActionsTest-1326765317-project-member</nova:user>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:project uuid="ae283283ca5a4a4281495561d7b0443a">tempest-VolumesActionsTest-1326765317</nova:project>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <nova:port uuid="635385b5-fbe6-4654-9100-d0f725eb1ee8">
Dec 13 04:14:25 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <system>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <entry name="serial">0ef2f9af-02e7-4df3-860b-d86160b330eb</entry>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <entry name="uuid">0ef2f9af-02e7-4df3-860b-d86160b330eb</entry>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </system>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <os>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   </os>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <features>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   </features>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/0ef2f9af-02e7-4df3-860b-d86160b330eb_disk">
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/0ef2f9af-02e7-4df3-860b-d86160b330eb_disk.config">
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:25 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:17:f0:bc"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <target dev="tap635385b5-fb"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/console.log" append="off"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <video>
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </video>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:14:25 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:14:25 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:14:25 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:14:25 compute-0 nova_compute[243704]: </domain>
Dec 13 04:14:25 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:14:25 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.624 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Preparing to wait for external event network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.624 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.624 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.625 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.626 243708 DEBUG nova.virt.libvirt.vif [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1838749281',display_name='tempest-VolumesActionsTest-instance-1838749281',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1838749281',id=9,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae283283ca5a4a4281495561d7b0443a',ramdisk_id='',reservation_id='r-cy3qz3u4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1326765317',owner_user_name='tempest-VolumesActionsTest-1326765317-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:21Z,user_data=None,user_id='2848adac59524388ba4931e7afd46b47',uuid=0ef2f9af-02e7-4df3-860b-d86160b330eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.626 243708 DEBUG nova.network.os_vif_util [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converting VIF {"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.628 243708 DEBUG nova.network.os_vif_util [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:f0:bc,bridge_name='br-int',has_traffic_filtering=True,id=635385b5-fbe6-4654-9100-d0f725eb1ee8,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap635385b5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.629 243708 DEBUG os_vif [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:f0:bc,bridge_name='br-int',has_traffic_filtering=True,id=635385b5-fbe6-4654-9100-d0f725eb1ee8,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap635385b5-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.630 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.631 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.631 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.635 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.636 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap635385b5-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.636 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap635385b5-fb, col_values=(('external_ids', {'iface-id': '635385b5-fbe6-4654-9100-d0f725eb1ee8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:f0:bc', 'vm-uuid': '0ef2f9af-02e7-4df3-860b-d86160b330eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.681 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:25 compute-0 NetworkManager[48899]: <info>  [1765599265.6816] manager: (tap635385b5-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.684 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:25 compute-0 podman[258249]: 2025-12-13 04:14:25.685906202 +0000 UTC m=+0.200915684 container init b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.691 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.692 243708 INFO os_vif [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:f0:bc,bridge_name='br-int',has_traffic_filtering=True,id=635385b5-fbe6-4654-9100-d0f725eb1ee8,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap635385b5-fb')
Dec 13 04:14:25 compute-0 podman[258249]: 2025-12-13 04:14:25.694383663 +0000 UTC m=+0.209393145 container start b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:25 compute-0 podman[258249]: 2025-12-13 04:14:25.697650061 +0000 UTC m=+0.212659623 container attach b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:14:25 compute-0 wonderful_almeida[258268]: 167 167
Dec 13 04:14:25 compute-0 systemd[1]: libpod-b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46.scope: Deactivated successfully.
Dec 13 04:14:25 compute-0 conmon[258268]: conmon b4114b99765e36f72939 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46.scope/container/memory.events
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.744 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.745 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.746 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] No VIF found with MAC fa:16:3e:17:f0:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.746 243708 INFO nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Using config drive
Dec 13 04:14:25 compute-0 podman[258276]: 2025-12-13 04:14:25.750765383 +0000 UTC m=+0.032320478 container died b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:14:25 compute-0 nova_compute[243704]: 2025-12-13 04:14:25.773 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4439d704e88f0936013136791562a4b0164c05c2655825c792a133b4ae8b515-merged.mount: Deactivated successfully.
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/170308003' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:14:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/961759316' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:25 compute-0 podman[258276]: 2025-12-13 04:14:25.793299517 +0000 UTC m=+0.074854592 container remove b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wonderful_almeida, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:14:25 compute-0 systemd[1]: libpod-conmon-b4114b99765e36f72939b10c50c87753afbf98830cbc7597336fd903db243f46.scope: Deactivated successfully.
Dec 13 04:14:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:26 compute-0 podman[258313]: 2025-12-13 04:14:26.13008626 +0000 UTC m=+0.043525513 container create ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:14:26 compute-0 systemd[1]: Started libpod-conmon-ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13.scope.
Dec 13 04:14:26 compute-0 podman[258313]: 2025-12-13 04:14:26.11056075 +0000 UTC m=+0.024000023 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:26 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e49ffead565f1be6a2eef114413c2086687ede69b427525052ebbcbb4152cb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e49ffead565f1be6a2eef114413c2086687ede69b427525052ebbcbb4152cb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e49ffead565f1be6a2eef114413c2086687ede69b427525052ebbcbb4152cb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e49ffead565f1be6a2eef114413c2086687ede69b427525052ebbcbb4152cb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a1e49ffead565f1be6a2eef114413c2086687ede69b427525052ebbcbb4152cb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:26 compute-0 podman[258313]: 2025-12-13 04:14:26.235154761 +0000 UTC m=+0.148594034 container init ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_robinson, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:14:26 compute-0 podman[258313]: 2025-12-13 04:14:26.247249789 +0000 UTC m=+0.160689032 container start ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:14:26 compute-0 podman[258313]: 2025-12-13 04:14:26.250453577 +0000 UTC m=+0.163892820 container attach ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.294 243708 DEBUG nova.network.neutron [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Updated VIF entry in instance network info cache for port 635385b5-fbe6-4654-9100-d0f725eb1ee8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.296 243708 DEBUG nova.network.neutron [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Updating instance_info_cache with network_info: [{"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.311 243708 DEBUG oslo_concurrency.lockutils [req-81cdb225-7b84-4125-bd45-5913b326d218 req-cf0a8f21-a000-4082-849a-cd81903df5e6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-0ef2f9af-02e7-4df3-860b-d86160b330eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.401 243708 INFO nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Creating config drive at /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/disk.config
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.406 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmw9o9g8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.537 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmw9o9g8n" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:26 compute-0 NetworkManager[48899]: <info>  [1765599266.5653] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Dec 13 04:14:26 compute-0 NetworkManager[48899]: <info>  [1765599266.5678] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.581 243708 DEBUG nova.storage.rbd_utils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.618 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/disk.config 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.642 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.821 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:26 compute-0 ceph-mon[75071]: pgmap v1125: 305 pgs: 305 active+clean; 273 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 140 op/s
Dec 13 04:14:26 compute-0 ovn_controller[145204]: 2025-12-13T04:14:26Z|00091|binding|INFO|Releasing lport 62dadfbb-b230-47ec-bb1f-540557aed3ae from this chassis (sb_readonly=0)
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.832 243708 DEBUG oslo_concurrency.processutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/disk.config 0ef2f9af-02e7-4df3-860b-d86160b330eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.833 243708 INFO nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Deleting local config drive /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb/disk.config because it was imported into RBD.
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.842 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:26 compute-0 distracted_robinson[258330]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:14:26 compute-0 distracted_robinson[258330]: --> All data devices are unavailable
Dec 13 04:14:26 compute-0 kernel: tap635385b5-fb: entered promiscuous mode
Dec 13 04:14:26 compute-0 ovn_controller[145204]: 2025-12-13T04:14:26Z|00092|binding|INFO|Claiming lport 635385b5-fbe6-4654-9100-d0f725eb1ee8 for this chassis.
Dec 13 04:14:26 compute-0 ovn_controller[145204]: 2025-12-13T04:14:26Z|00093|binding|INFO|635385b5-fbe6-4654-9100-d0f725eb1ee8: Claiming fa:16:3e:17:f0:bc 10.100.0.13
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.905 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:26 compute-0 NetworkManager[48899]: <info>  [1765599266.9096] manager: (tap635385b5-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Dec 13 04:14:26 compute-0 systemd[1]: libpod-ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13.scope: Deactivated successfully.
Dec 13 04:14:26 compute-0 podman[258313]: 2025-12-13 04:14:26.916809213 +0000 UTC m=+0.830248476 container died ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_robinson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.914 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:f0:bc 10.100.0.13'], port_security=['fa:16:3e:17:f0:bc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0ef2f9af-02e7-4df3-860b-d86160b330eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b949b294-460a-4397-aeb7-2ff487ba5063', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae283283ca5a4a4281495561d7b0443a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f4a9bce-45f1-49ad-8778-c6874894549e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90c947fe-e086-4707-9d7a-e7ae8b4033d2, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=635385b5-fbe6-4654-9100-d0f725eb1ee8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.917 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 635385b5-fbe6-4654-9100-d0f725eb1ee8 in datapath b949b294-460a-4397-aeb7-2ff487ba5063 bound to our chassis
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.921 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b949b294-460a-4397-aeb7-2ff487ba5063
Dec 13 04:14:26 compute-0 ovn_controller[145204]: 2025-12-13T04:14:26Z|00094|binding|INFO|Setting lport 635385b5-fbe6-4654-9100-d0f725eb1ee8 ovn-installed in OVS
Dec 13 04:14:26 compute-0 ovn_controller[145204]: 2025-12-13T04:14:26Z|00095|binding|INFO|Setting lport 635385b5-fbe6-4654-9100-d0f725eb1ee8 up in Southbound
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.939 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:26 compute-0 nova_compute[243704]: 2025-12-13 04:14:26.940 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:26 compute-0 systemd-machined[206767]: New machine qemu-9-instance-00000009.
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.946 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fd4a9db9-5be3-47ad-8a05-6677f284f83f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.953 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb949b294-41 in ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.961 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb949b294-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.962 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2433efd3-307d-46e9-bf39-86aee017c83d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.963 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4c6d8c2d-715c-46c2-9443-480d82204a6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:26 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Dec 13 04:14:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-a1e49ffead565f1be6a2eef114413c2086687ede69b427525052ebbcbb4152cb-merged.mount: Deactivated successfully.
Dec 13 04:14:26 compute-0 podman[258313]: 2025-12-13 04:14:26.972835534 +0000 UTC m=+0.886274777 container remove ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:14:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:26.983 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[fa794a50-fee0-4f4e-a850-da7a38a8ec3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 systemd[1]: libpod-conmon-ea85b7a0c355d46ac37775f6e49e886a78490167812038aa28ea6ba3a7829e13.scope: Deactivated successfully.
Dec 13 04:14:27 compute-0 systemd-udevd[258418]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.006 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[98a8e84c-6de4-4b43-bded-0c995c030a3f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 NetworkManager[48899]: <info>  [1765599267.0192] device (tap635385b5-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:14:27 compute-0 NetworkManager[48899]: <info>  [1765599267.0203] device (tap635385b5-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:14:27 compute-0 sudo[258196]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.049 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[82cb6086-3b73-4832-9900-d70baa4fca03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 NetworkManager[48899]: <info>  [1765599267.0596] manager: (tapb949b294-40): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.058 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[138f06fc-1374-4fab-a380-fbdc3a3ac09b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.099 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[38dcc830-77ff-45a8-815c-85d1a915c041]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.103 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[33ba5e56-519f-41e4-a5f0-3d53a58c1e99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 sudo[258428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:14:27 compute-0 sudo[258428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:27 compute-0 sudo[258428]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:27 compute-0 NetworkManager[48899]: <info>  [1765599267.1344] device (tapb949b294-40): carrier: link connected
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.140 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[f539e562-1a88-4057-863b-9609e3501939]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.163 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[242015f5-e648-43d2-be31-a6dc02194070]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb949b294-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:86:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400581, 'reachable_time': 15420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258485, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 sudo[258473]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:14:27 compute-0 sudo[258473]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.184 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d2fccc55-fdf6-457e-be85-db200d345a56]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:86b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 400581, 'tstamp': 400581}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258497, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.209 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ee92fd2d-f118-4c40-807b-d10a86ebac84]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb949b294-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:86:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 35], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400581, 'reachable_time': 15420, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 258500, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.247 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[510859ce-324a-4101-a58d-49f08843f289]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.327 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fa06061f-b24c-400d-bf60-dc08e4033f87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.330 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb949b294-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.330 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.331 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb949b294-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:27 compute-0 NetworkManager[48899]: <info>  [1765599267.3335] manager: (tapb949b294-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Dec 13 04:14:27 compute-0 kernel: tapb949b294-40: entered promiscuous mode
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.332 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.336 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb949b294-40, col_values=(('external_ids', {'iface-id': 'ea765a2c-5afd-40d3-8875-8e9014d19426'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.337 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:27 compute-0 ovn_controller[145204]: 2025-12-13T04:14:27Z|00096|binding|INFO|Releasing lport ea765a2c-5afd-40d3-8875-8e9014d19426 from this chassis (sb_readonly=0)
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.352 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.354 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b949b294-460a-4397-aeb7-2ff487ba5063.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b949b294-460a-4397-aeb7-2ff487ba5063.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.355 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7bcc44a9-e213-476a-a7fd-5ca86c87ef34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.356 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-b949b294-460a-4397-aeb7-2ff487ba5063
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/b949b294-460a-4397-aeb7-2ff487ba5063.pid.haproxy
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID b949b294-460a-4397-aeb7-2ff487ba5063
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:14:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:27.358 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'env', 'PROCESS_TAG=haproxy-b949b294-460a-4397-aeb7-2ff487ba5063', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b949b294-460a-4397-aeb7-2ff487ba5063.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.366 243708 DEBUG nova.compute.manager [req-e49cdfce-8aef-4c67-a24a-0ca7494d9715 req-5c89b2b9-5975-409b-9c7a-67c5258800d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received event network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.366 243708 DEBUG oslo_concurrency.lockutils [req-e49cdfce-8aef-4c67-a24a-0ca7494d9715 req-5c89b2b9-5975-409b-9c7a-67c5258800d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.367 243708 DEBUG oslo_concurrency.lockutils [req-e49cdfce-8aef-4c67-a24a-0ca7494d9715 req-5c89b2b9-5975-409b-9c7a-67c5258800d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.367 243708 DEBUG oslo_concurrency.lockutils [req-e49cdfce-8aef-4c67-a24a-0ca7494d9715 req-5c89b2b9-5975-409b-9c7a-67c5258800d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.367 243708 DEBUG nova.compute.manager [req-e49cdfce-8aef-4c67-a24a-0ca7494d9715 req-5c89b2b9-5975-409b-9c7a-67c5258800d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Processing event network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.426 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.427 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599267.426914, 0ef2f9af-02e7-4df3-860b-d86160b330eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.428 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] VM Started (Lifecycle Event)
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.433 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.437 243708 INFO nova.virt.libvirt.driver [-] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Instance spawned successfully.
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.437 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:14:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 273 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.455 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.457 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.457 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.458 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.458 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.459 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.459 243708 DEBUG nova.virt.libvirt.driver [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.470 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:27 compute-0 podman[258564]: 2025-12-13 04:14:27.473999188 +0000 UTC m=+0.041407004 container create 98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.477 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.494 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.494 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599267.4270105, 0ef2f9af-02e7-4df3-860b-d86160b330eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.494 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] VM Paused (Lifecycle Event)
Dec 13 04:14:27 compute-0 systemd[1]: Started libpod-conmon-98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069.scope.
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.523 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.527 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599267.4325325, 0ef2f9af-02e7-4df3-860b-d86160b330eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.527 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] VM Resumed (Lifecycle Event)
Dec 13 04:14:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.533 243708 INFO nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Took 5.85 seconds to spawn the instance on the hypervisor.
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.534 243708 DEBUG nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.544 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:27 compute-0 podman[258564]: 2025-12-13 04:14:27.547428322 +0000 UTC m=+0.114836158 container init 98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_faraday, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.549 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:27 compute-0 podman[258564]: 2025-12-13 04:14:27.459285169 +0000 UTC m=+0.026693015 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:27 compute-0 podman[258564]: 2025-12-13 04:14:27.555636714 +0000 UTC m=+0.123044530 container start 98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_faraday, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:14:27 compute-0 podman[258564]: 2025-12-13 04:14:27.56028367 +0000 UTC m=+0.127691506 container attach 98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:14:27 compute-0 angry_faraday[258580]: 167 167
Dec 13 04:14:27 compute-0 conmon[258580]: conmon 98f7f5e4eb0adbe755f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069.scope/container/memory.events
Dec 13 04:14:27 compute-0 systemd[1]: libpod-98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069.scope: Deactivated successfully.
Dec 13 04:14:27 compute-0 podman[258564]: 2025-12-13 04:14:27.565951204 +0000 UTC m=+0.133359020 container died 98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_faraday, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.577 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-a83b1689c22b8ac464c0d09d788c455c902d4027a3d3ba4b845177c6134738db-merged.mount: Deactivated successfully.
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.600 243708 INFO nova.compute.manager [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Took 6.90 seconds to build instance.
Dec 13 04:14:27 compute-0 podman[258564]: 2025-12-13 04:14:27.603594826 +0000 UTC m=+0.171002642 container remove 98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_faraday, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec 13 04:14:27 compute-0 nova_compute[243704]: 2025-12-13 04:14:27.616 243708 DEBUG oslo_concurrency.lockutils [None req-274357f7-4704-4cf9-8040-d841ca16a67b 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:27 compute-0 systemd[1]: libpod-conmon-98f7f5e4eb0adbe755f6e37c81cbc17f0db0eca74be8e11dc0f8945654238069.scope: Deactivated successfully.
Dec 13 04:14:27 compute-0 podman[258619]: 2025-12-13 04:14:27.786059669 +0000 UTC m=+0.058248142 container create a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS)
Dec 13 04:14:27 compute-0 podman[258635]: 2025-12-13 04:14:27.816158755 +0000 UTC m=+0.053894733 container create 1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ride, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:14:27 compute-0 systemd[1]: Started libpod-conmon-a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e.scope.
Dec 13 04:14:27 compute-0 podman[258619]: 2025-12-13 04:14:27.7529612 +0000 UTC m=+0.025149703 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:14:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:27 compute-0 systemd[1]: Started libpod-conmon-1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885.scope.
Dec 13 04:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0107bad5d6bd3c1136fe005c29a41bb143435206905fdc09dd76eff15bc45541/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:27 compute-0 podman[258619]: 2025-12-13 04:14:27.878344734 +0000 UTC m=+0.150533217 container init a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:14:27 compute-0 podman[258619]: 2025-12-13 04:14:27.886420653 +0000 UTC m=+0.158609126 container start a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:14:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:27 compute-0 podman[258635]: 2025-12-13 04:14:27.799135653 +0000 UTC m=+0.036871651 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ee9527a76739c3d4495074f020297aeb58a5320609998bf7e2dbbc40c895aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ee9527a76739c3d4495074f020297aeb58a5320609998bf7e2dbbc40c895aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ee9527a76739c3d4495074f020297aeb58a5320609998bf7e2dbbc40c895aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04ee9527a76739c3d4495074f020297aeb58a5320609998bf7e2dbbc40c895aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:27 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[258654]: [NOTICE]   (258663) : New worker (258665) forked
Dec 13 04:14:27 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[258654]: [NOTICE]   (258663) : Loading success.
Dec 13 04:14:27 compute-0 podman[258635]: 2025-12-13 04:14:27.91798195 +0000 UTC m=+0.155717928 container init 1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ride, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:14:27 compute-0 podman[258635]: 2025-12-13 04:14:27.926986544 +0000 UTC m=+0.164722522 container start 1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ride, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:14:27 compute-0 podman[258635]: 2025-12-13 04:14:27.930499479 +0000 UTC m=+0.168235457 container attach 1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:28 compute-0 compassionate_ride[258659]: {
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:     "0": [
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:         {
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "devices": [
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "/dev/loop3"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             ],
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_name": "ceph_lv0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_size": "21470642176",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "name": "ceph_lv0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "tags": {
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cluster_name": "ceph",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.crush_device_class": "",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.encrypted": "0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.objectstore": "bluestore",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osd_id": "0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.type": "block",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.vdo": "0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.with_tpm": "0"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             },
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "type": "block",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "vg_name": "ceph_vg0"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:         }
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:     ],
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:     "1": [
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:         {
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "devices": [
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "/dev/loop4"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             ],
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_name": "ceph_lv1",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_size": "21470642176",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "name": "ceph_lv1",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "tags": {
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cluster_name": "ceph",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.crush_device_class": "",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.encrypted": "0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.objectstore": "bluestore",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osd_id": "1",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.type": "block",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.vdo": "0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.with_tpm": "0"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             },
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "type": "block",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "vg_name": "ceph_vg1"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:         }
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:     ],
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:     "2": [
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:         {
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "devices": [
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "/dev/loop5"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             ],
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_name": "ceph_lv2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_size": "21470642176",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "name": "ceph_lv2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "tags": {
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.cluster_name": "ceph",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.crush_device_class": "",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.encrypted": "0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.objectstore": "bluestore",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osd_id": "2",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.type": "block",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.vdo": "0",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:                 "ceph.with_tpm": "0"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             },
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "type": "block",
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:             "vg_name": "ceph_vg2"
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:         }
Dec 13 04:14:28 compute-0 compassionate_ride[258659]:     ]
Dec 13 04:14:28 compute-0 compassionate_ride[258659]: }
Dec 13 04:14:28 compute-0 systemd[1]: libpod-1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885.scope: Deactivated successfully.
Dec 13 04:14:28 compute-0 podman[258635]: 2025-12-13 04:14:28.263469157 +0000 UTC m=+0.501205135 container died 1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:14:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-04ee9527a76739c3d4495074f020297aeb58a5320609998bf7e2dbbc40c895aa-merged.mount: Deactivated successfully.
Dec 13 04:14:28 compute-0 podman[258635]: 2025-12-13 04:14:28.306501136 +0000 UTC m=+0.544237114 container remove 1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:14:28 compute-0 systemd[1]: libpod-conmon-1bf53e37415b193a5516820258ccb3d6af506be2cf97cf91f0081e46bfdd7885.scope: Deactivated successfully.
Dec 13 04:14:28 compute-0 sudo[258473]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:28 compute-0 sudo[258692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:14:28 compute-0 sudo[258692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:28 compute-0 sudo[258692]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:28 compute-0 sudo[258717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:14:28 compute-0 sudo[258717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:28 compute-0 ceph-mon[75071]: pgmap v1126: 305 pgs: 305 active+clean; 273 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 130 op/s
Dec 13 04:14:29 compute-0 podman[258754]: 2025-12-13 04:14:29.025894953 +0000 UTC m=+0.060180114 container create 09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:29 compute-0 systemd[1]: Started libpod-conmon-09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c.scope.
Dec 13 04:14:29 compute-0 podman[258754]: 2025-12-13 04:14:29.001537852 +0000 UTC m=+0.035823043 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:29 compute-0 podman[258754]: 2025-12-13 04:14:29.123368649 +0000 UTC m=+0.157653820 container init 09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:14:29 compute-0 podman[258754]: 2025-12-13 04:14:29.130587865 +0000 UTC m=+0.164873036 container start 09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:14:29 compute-0 nifty_shirley[258770]: 167 167
Dec 13 04:14:29 compute-0 systemd[1]: libpod-09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c.scope: Deactivated successfully.
Dec 13 04:14:29 compute-0 podman[258754]: 2025-12-13 04:14:29.148378778 +0000 UTC m=+0.182663939 container attach 09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shirley, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:14:29 compute-0 podman[258754]: 2025-12-13 04:14:29.149098017 +0000 UTC m=+0.183383168 container died 09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shirley, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:14:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-75346a7b03c242b4bdac215f5df88865d0cdd4cd2ebe375c2fbca4d670c891f0-merged.mount: Deactivated successfully.
Dec 13 04:14:29 compute-0 podman[258754]: 2025-12-13 04:14:29.189948346 +0000 UTC m=+0.224233507 container remove 09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:14:29 compute-0 systemd[1]: libpod-conmon-09169a14d3d343f26926b11dc81ae992d6811eca6fe84c6efefd3dabdbb9ce5c.scope: Deactivated successfully.
Dec 13 04:14:29 compute-0 podman[258794]: 2025-12-13 04:14:29.394963561 +0000 UTC m=+0.053678698 container create bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:14:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 299 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.7 MiB/s wr, 211 op/s
Dec 13 04:14:29 compute-0 systemd[1]: Started libpod-conmon-bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9.scope.
Dec 13 04:14:29 compute-0 podman[258794]: 2025-12-13 04:14:29.370599669 +0000 UTC m=+0.029314856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.463 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.467 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing instance network info cache due to event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.468 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.469 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.469 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982c5d4e249175326cbd0e42a0fd17a40a26da60c812a0847da66d516037c90f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982c5d4e249175326cbd0e42a0fd17a40a26da60c812a0847da66d516037c90f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982c5d4e249175326cbd0e42a0fd17a40a26da60c812a0847da66d516037c90f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/982c5d4e249175326cbd0e42a0fd17a40a26da60c812a0847da66d516037c90f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:29 compute-0 podman[258794]: 2025-12-13 04:14:29.514241639 +0000 UTC m=+0.172956796 container init bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:14:29 compute-0 podman[258794]: 2025-12-13 04:14:29.525882125 +0000 UTC m=+0.184597252 container start bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:14:29 compute-0 podman[258794]: 2025-12-13 04:14:29.530925612 +0000 UTC m=+0.189640739 container attach bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.944 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.945 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.945 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.946 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.946 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.947 243708 INFO nova.compute.manager [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Terminating instance
Dec 13 04:14:29 compute-0 nova_compute[243704]: 2025-12-13 04:14:29.948 243708 DEBUG nova.compute.manager [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:14:29 compute-0 kernel: tap635385b5-fb (unregistering): left promiscuous mode
Dec 13 04:14:29 compute-0 NetworkManager[48899]: <info>  [1765599269.9922] device (tap635385b5-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:30 compute-0 ovn_controller[145204]: 2025-12-13T04:14:30Z|00097|binding|INFO|Releasing lport 635385b5-fbe6-4654-9100-d0f725eb1ee8 from this chassis (sb_readonly=0)
Dec 13 04:14:30 compute-0 ovn_controller[145204]: 2025-12-13T04:14:30Z|00098|binding|INFO|Setting lport 635385b5-fbe6-4654-9100-d0f725eb1ee8 down in Southbound
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.003 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 ovn_controller[145204]: 2025-12-13T04:14:30Z|00099|binding|INFO|Removing iface tap635385b5-fb ovn-installed in OVS
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.006 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.015 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:f0:bc 10.100.0.13'], port_security=['fa:16:3e:17:f0:bc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '0ef2f9af-02e7-4df3-860b-d86160b330eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b949b294-460a-4397-aeb7-2ff487ba5063', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae283283ca5a4a4281495561d7b0443a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f4a9bce-45f1-49ad-8778-c6874894549e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90c947fe-e086-4707-9d7a-e7ae8b4033d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=635385b5-fbe6-4654-9100-d0f725eb1ee8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.016 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 635385b5-fbe6-4654-9100-d0f725eb1ee8 in datapath b949b294-460a-4397-aeb7-2ff487ba5063 unbound from our chassis
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.018 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b949b294-460a-4397-aeb7-2ff487ba5063, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.020 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[20867466-1f8e-4f27-9d31-7d33b02a7245]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.023 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 namespace which is not needed anymore
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.028 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Dec 13 04:14:30 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 2.952s CPU time.
Dec 13 04:14:30 compute-0 systemd-machined[206767]: Machine qemu-9-instance-00000009 terminated.
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.171 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.179 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[258654]: [NOTICE]   (258663) : haproxy version is 2.8.14-c23fe91
Dec 13 04:14:30 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[258654]: [NOTICE]   (258663) : path to executable is /usr/sbin/haproxy
Dec 13 04:14:30 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[258654]: [WARNING]  (258663) : Exiting Master process...
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.189 243708 INFO nova.virt.libvirt.driver [-] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Instance destroyed successfully.
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.190 243708 DEBUG nova.objects.instance [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lazy-loading 'resources' on Instance uuid 0ef2f9af-02e7-4df3-860b-d86160b330eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:30 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[258654]: [ALERT]    (258663) : Current worker (258665) exited with code 143 (Terminated)
Dec 13 04:14:30 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[258654]: [WARNING]  (258663) : All workers exited. Exiting... (0)
Dec 13 04:14:30 compute-0 systemd[1]: libpod-a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e.scope: Deactivated successfully.
Dec 13 04:14:30 compute-0 podman[258881]: 2025-12-13 04:14:30.201282298 +0000 UTC m=+0.058993653 container died a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.206 243708 DEBUG nova.virt.libvirt.vif [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:14:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1838749281',display_name='tempest-VolumesActionsTest-instance-1838749281',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1838749281',id=9,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:14:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae283283ca5a4a4281495561d7b0443a',ramdisk_id='',reservation_id='r-cy3qz3u4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1326765317',owner_user_name='tempest-VolumesActionsTest-1326765317-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:14:27Z,user_data=None,user_id='2848adac59524388ba4931e7afd46b47',uuid=0ef2f9af-02e7-4df3-860b-d86160b330eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.206 243708 DEBUG nova.network.os_vif_util [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converting VIF {"id": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "address": "fa:16:3e:17:f0:bc", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap635385b5-fb", "ovs_interfaceid": "635385b5-fbe6-4654-9100-d0f725eb1ee8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.207 243708 DEBUG nova.network.os_vif_util [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:f0:bc,bridge_name='br-int',has_traffic_filtering=True,id=635385b5-fbe6-4654-9100-d0f725eb1ee8,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap635385b5-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.208 243708 DEBUG os_vif [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:f0:bc,bridge_name='br-int',has_traffic_filtering=True,id=635385b5-fbe6-4654-9100-d0f725eb1ee8,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap635385b5-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.211 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.211 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap635385b5-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.213 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.216 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.220 243708 INFO os_vif [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:f0:bc,bridge_name='br-int',has_traffic_filtering=True,id=635385b5-fbe6-4654-9100-d0f725eb1ee8,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap635385b5-fb')
Dec 13 04:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e-userdata-shm.mount: Deactivated successfully.
Dec 13 04:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0107bad5d6bd3c1136fe005c29a41bb143435206905fdc09dd76eff15bc45541-merged.mount: Deactivated successfully.
Dec 13 04:14:30 compute-0 podman[258881]: 2025-12-13 04:14:30.251180582 +0000 UTC m=+0.108891937 container cleanup a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:14:30 compute-0 systemd[1]: libpod-conmon-a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e.scope: Deactivated successfully.
Dec 13 04:14:30 compute-0 podman[258950]: 2025-12-13 04:14:30.338586044 +0000 UTC m=+0.052977449 container remove a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.349 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[238e3034-88aa-4add-ab2a-3e833aaf0021]: (4, ('Sat Dec 13 04:14:30 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 (a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e)\na06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e\nSat Dec 13 04:14:30 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 (a06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e)\na06d47d4e04d63e1c5d98d250fb82d0433983a41ffbd684a978f6e9c341ca16e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.351 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5a75f0ab-1cd2-4acc-a020-8b5cac7da2b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.354 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb949b294-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:30 compute-0 kernel: tapb949b294-40: left promiscuous mode
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.356 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 lvm[258973]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:14:30 compute-0 lvm[258973]: VG ceph_vg0 finished
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.411 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.415 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1b7502dc-177b-4abe-ab46-e9e4d30c4e6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.430 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[abc0083c-826d-4587-9ce4-36dc6ab8bca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.434 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb53d5a-c84e-4479-bdac-447a588e06f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 lvm[258976]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:14:30 compute-0 lvm[258976]: VG ceph_vg1 finished
Dec 13 04:14:30 compute-0 lvm[258981]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:14:30 compute-0 lvm[258981]: VG ceph_vg2 finished
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.456 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4bef0cc7-3b46-4632-a2db-5ea1c08ef0af]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 400571, 'reachable_time': 28526, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258980, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 systemd[1]: run-netns-ovnmeta\x2db949b294\x2d460a\x2d4397\x2daeb7\x2d2ff487ba5063.mount: Deactivated successfully.
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.462 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:14:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:30.463 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[9c589021-20df-4616-a999-399cafc4d83b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:30 compute-0 serene_panini[258810]: {}
Dec 13 04:14:30 compute-0 systemd[1]: libpod-bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9.scope: Deactivated successfully.
Dec 13 04:14:30 compute-0 podman[258794]: 2025-12-13 04:14:30.548516073 +0000 UTC m=+1.207231200 container died bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:14:30 compute-0 systemd[1]: libpod-bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9.scope: Consumed 1.532s CPU time.
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.567 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updated VIF entry in instance network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.567 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating instance_info_cache with network_info: [{"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-982c5d4e249175326cbd0e42a0fd17a40a26da60c812a0847da66d516037c90f-merged.mount: Deactivated successfully.
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.587 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.589 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received event network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.590 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.590 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.590 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.590 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] No waiting events found dispatching network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.590 243708 WARNING nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received unexpected event network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 for instance with vm_state active and task_state None.
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.591 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.591 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing instance network info cache due to event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.591 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.591 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.591 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:30 compute-0 podman[258794]: 2025-12-13 04:14:30.60182112 +0000 UTC m=+1.260536247 container remove bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_panini, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:14:30 compute-0 systemd[1]: libpod-conmon-bbb3e8abc56f98273be28aa7906664cdf51a2958915a83204a18e191c38a89b9.scope: Deactivated successfully.
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.631 243708 INFO nova.virt.libvirt.driver [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Deleting instance files /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb_del
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.632 243708 INFO nova.virt.libvirt.driver [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Deletion of /var/lib/nova/instances/0ef2f9af-02e7-4df3-860b-d86160b330eb_del complete
Dec 13 04:14:30 compute-0 sudo[258717]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:14:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:14:30 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.685 243708 INFO nova.compute.manager [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.686 243708 DEBUG oslo.service.loopingcall [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.686 243708 DEBUG nova.compute.manager [-] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:14:30 compute-0 nova_compute[243704]: 2025-12-13 04:14:30.686 243708 DEBUG nova.network.neutron [-] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:14:30 compute-0 sudo[258995]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:14:30 compute-0 sudo[258995]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:14:30 compute-0 sudo[258995]: pam_unix(sudo:session): session closed for user root
Dec 13 04:14:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e213 do_prune osdmap full prune enabled
Dec 13 04:14:30 compute-0 ceph-mon[75071]: pgmap v1127: 305 pgs: 305 active+clean; 299 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.7 MiB/s wr, 211 op/s
Dec 13 04:14:30 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:30 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:14:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e214 e214: 3 total, 3 up, 3 in
Dec 13 04:14:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e214: 3 total, 3 up, 3 in
Dec 13 04:14:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.451 243708 DEBUG nova.network.neutron [-] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 299 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 236 op/s
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.469 243708 INFO nova.compute.manager [-] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Took 0.78 seconds to deallocate network for instance.
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.515 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.516 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.519 243708 DEBUG nova.compute.manager [req-9e7819f3-7cd5-488d-b7c1-6f1b741e5a1e req-8824a9e6-954d-470f-a4ae-e55a0f5761c8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received event network-vif-deleted-635385b5-fbe6-4654-9100-d0f725eb1ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.600 243708 DEBUG nova.compute.manager [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received event network-vif-unplugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.601 243708 DEBUG oslo_concurrency.lockutils [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.601 243708 DEBUG oslo_concurrency.lockutils [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.601 243708 DEBUG oslo_concurrency.lockutils [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.602 243708 DEBUG nova.compute.manager [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] No waiting events found dispatching network-vif-unplugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.602 243708 WARNING nova.compute.manager [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received unexpected event network-vif-unplugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 for instance with vm_state deleted and task_state None.
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.602 243708 DEBUG nova.compute.manager [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received event network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.602 243708 DEBUG oslo_concurrency.lockutils [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.603 243708 DEBUG oslo_concurrency.lockutils [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.603 243708 DEBUG oslo_concurrency.lockutils [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.603 243708 DEBUG nova.compute.manager [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] No waiting events found dispatching network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.603 243708 WARNING nova.compute.manager [req-d2160f29-ac11-4d03-9105-0669af402795 req-d1e097bc-5621-4acf-846d-4188cb2516af 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Received unexpected event network-vif-plugged-635385b5-fbe6-4654-9100-d0f725eb1ee8 for instance with vm_state deleted and task_state None.
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.616 243708 DEBUG oslo_concurrency.processutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:31 compute-0 ceph-mon[75071]: osdmap e214: 3 total, 3 up, 3 in
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.975 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updated VIF entry in instance network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.976 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating instance_info_cache with network_info: [{"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.991 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.991 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.992 243708 DEBUG nova.compute.manager [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing instance network info cache due to event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.992 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.992 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:31 compute-0 nova_compute[243704]: 2025-12-13 04:14:31.992 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3839852141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.271 243708 DEBUG oslo_concurrency.processutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.655s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.277 243708 DEBUG nova.compute.provider_tree [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.301 243708 DEBUG nova.scheduler.client.report [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.325 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.352 243708 INFO nova.scheduler.client.report [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Deleted allocations for instance 0ef2f9af-02e7-4df3-860b-d86160b330eb
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.416 243708 DEBUG oslo_concurrency.lockutils [None req-7f3a23df-3d39-49c4-98bb-aa13f45e095a 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "0ef2f9af-02e7-4df3-860b-d86160b330eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.479 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:32 compute-0 ceph-mon[75071]: pgmap v1129: 305 pgs: 305 active+clean; 299 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 236 op/s
Dec 13 04:14:32 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3839852141' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.952 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updated VIF entry in instance network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.953 243708 DEBUG nova.network.neutron [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating instance_info_cache with network_info: [{"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:32 compute-0 nova_compute[243704]: 2025-12-13 04:14:32.977 243708 DEBUG oslo_concurrency.lockutils [req-a09d3f9e-fc80-4f63-a155-c3d1c9f196a9 req-21028a27-99af-428b-a328-e47c8a2194e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:33 compute-0 nova_compute[243704]: 2025-12-13 04:14:33.357 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599258.3567784, 29c70ba3-89c9-4615-a1f0-22a3ad7145f8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:33 compute-0 nova_compute[243704]: 2025-12-13 04:14:33.358 243708 INFO nova.compute.manager [-] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] VM Stopped (Lifecycle Event)
Dec 13 04:14:33 compute-0 nova_compute[243704]: 2025-12-13 04:14:33.373 243708 DEBUG nova.compute.manager [None req-4ea2c6e9-35af-4dcc-86e6-e1a450ba2de1 - - - - - -] [instance: 29c70ba3-89c9-4615-a1f0-22a3ad7145f8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 299 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 236 op/s
Dec 13 04:14:34 compute-0 ceph-mon[75071]: pgmap v1130: 305 pgs: 305 active+clean; 299 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 6.0 MiB/s rd, 3.2 MiB/s wr, 236 op/s
Dec 13 04:14:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:35.089 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:35.090 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:35.091 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:35 compute-0 nova_compute[243704]: 2025-12-13 04:14:35.213 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 293 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.8 MiB/s wr, 209 op/s
Dec 13 04:14:35 compute-0 ovn_controller[145204]: 2025-12-13T04:14:35Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3f:26:b7 10.100.0.4
Dec 13 04:14:35 compute-0 ovn_controller[145204]: 2025-12-13T04:14:35Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3f:26:b7 10.100.0.4
Dec 13 04:14:35 compute-0 podman[259042]: 2025-12-13 04:14:35.936169536 +0000 UTC m=+0.069275445 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:14:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.577 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "34e6d510-9511-4913-b094-522edcf66b05" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.578 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.606 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.712 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.713 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.720 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.721 243708 INFO nova.compute.claims [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:14:36 compute-0 nova_compute[243704]: 2025-12-13 04:14:36.828 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:36 compute-0 ceph-mon[75071]: pgmap v1131: 305 pgs: 305 active+clean; 293 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.8 MiB/s wr, 209 op/s
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.272 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.272 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.283 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.335 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1326333257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.407 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.416 243708 DEBUG nova.compute.provider_tree [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.430 243708 DEBUG nova.scheduler.client.report [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.448 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.449 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.452 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 293 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.8 MiB/s wr, 209 op/s
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.460 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.461 243708 INFO nova.compute.claims [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.482 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.559 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.560 243708 DEBUG nova.network.neutron [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.674 243708 INFO nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.742 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.782 243708 INFO nova.virt.block_device [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Booting with volume snapshot 2d15c13c-501f-417e-a54a-136d70cfd5a9 at /dev/vda
Dec 13 04:14:37 compute-0 nova_compute[243704]: 2025-12-13 04:14:37.800 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1326333257' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:14:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 4008 syncs, 3.17 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6775 writes, 19K keys, 6775 commit groups, 1.0 writes per commit group, ingest: 14.65 MB, 0.02 MB/s
                                           Interval WAL: 6775 writes, 2979 syncs, 2.27 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.141 243708 DEBUG nova.policy [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b8c4a2342e4420d8140b403edbcba5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27927978f9684df1a72cecb32505e93b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:14:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1612052057' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.369 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.374 243708 DEBUG nova.compute.provider_tree [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.391 243708 DEBUG nova.scheduler.client.report [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.408 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.409 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.453 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.453 243708 DEBUG nova.network.neutron [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.470 243708 INFO nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.489 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.582 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.583 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.584 243708 INFO nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Creating image(s)
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.608 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.635 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.657 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.661 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.691 243708 DEBUG nova.network.neutron [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Successfully created port: 5a4345e6-0422-4f4a-affc-2c1023f05fe6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.697 243708 DEBUG nova.policy [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2848adac59524388ba4931e7afd46b47', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ae283283ca5a4a4281495561d7b0443a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.735 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.736 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.736 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.737 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.756 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:38 compute-0 nova_compute[243704]: 2025-12-13 04:14:38.761 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:38 compute-0 ceph-mon[75071]: pgmap v1132: 305 pgs: 305 active+clean; 293 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 3.8 MiB/s wr, 209 op/s
Dec 13 04:14:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1612052057' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.022 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.261s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.086 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] resizing rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.157 243708 DEBUG nova.objects.instance [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lazy-loading 'migration_context' on Instance uuid 4b2c5a9d-6552-48bf-92c4-1032bd4d509b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.170 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.171 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Ensure instance console log exists: /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.171 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.172 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.172 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 332 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 184 op/s
Dec 13 04:14:39 compute-0 nova_compute[243704]: 2025-12-13 04:14:39.673 243708 DEBUG nova.network.neutron [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Successfully created port: 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.215 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.562 243708 DEBUG nova.network.neutron [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Successfully updated port: 5a4345e6-0422-4f4a-affc-2c1023f05fe6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.577 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "refresh_cache-34e6d510-9511-4913-b094-522edcf66b05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.577 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquired lock "refresh_cache-34e6d510-9511-4913-b094-522edcf66b05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.577 243708 DEBUG nova.network.neutron [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:14:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:14:40
Dec 13 04:14:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:14:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:14:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', '.rgw.root', '.mgr', 'default.rgw.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', 'volumes']
Dec 13 04:14:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.663 243708 DEBUG nova.compute.manager [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Received event network-changed-5a4345e6-0422-4f4a-affc-2c1023f05fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.663 243708 DEBUG nova.compute.manager [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Refreshing instance network info cache due to event network-changed-5a4345e6-0422-4f4a-affc-2c1023f05fe6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.664 243708 DEBUG oslo_concurrency.lockutils [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-34e6d510-9511-4913-b094-522edcf66b05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:40 compute-0 nova_compute[243704]: 2025-12-13 04:14:40.749 243708 DEBUG nova.network.neutron [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:14:40 compute-0 ceph-mon[75071]: pgmap v1133: 305 pgs: 305 active+clean; 332 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.3 MiB/s wr, 184 op/s
Dec 13 04:14:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e214 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.306 243708 DEBUG nova.network.neutron [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Successfully updated port: 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.320 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "refresh_cache-4b2c5a9d-6552-48bf-92c4-1032bd4d509b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.321 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquired lock "refresh_cache-4b2c5a9d-6552-48bf-92c4-1032bd4d509b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.321 243708 DEBUG nova.network.neutron [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:14:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 332 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 174 op/s
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.565 243708 DEBUG nova.network.neutron [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.572 243708 DEBUG nova.compute.manager [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-changed-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.573 243708 DEBUG nova.compute.manager [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Refreshing instance network info cache due to event network-changed-6a19f0b6-14ea-4fee-b454-cf0d6746dc05. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:41 compute-0 nova_compute[243704]: 2025-12-13 04:14:41.573 243708 DEBUG oslo_concurrency.lockutils [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-4b2c5a9d-6552-48bf-92c4-1032bd4d509b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.029 243708 DEBUG nova.network.neutron [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Updating instance_info_cache with network_info: [{"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.055 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Releasing lock "refresh_cache-34e6d510-9511-4913-b094-522edcf66b05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.056 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Instance network_info: |[{"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.056 243708 DEBUG oslo_concurrency.lockutils [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-34e6d510-9511-4913-b094-522edcf66b05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.056 243708 DEBUG nova.network.neutron [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Refreshing network info cache for port 5a4345e6-0422-4f4a-affc-2c1023f05fe6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.100 243708 DEBUG os_brick.utils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.103 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.114 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.115 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[b5aea199-eeae-42d9-bbd2-e5d165023a63]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.116 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.123 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.123 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[52120fbe-bdc1-4044-b0a1-d40df63b9f35]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.124 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.132 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.132 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fef330-5a07-457a-b279-9d3d1e956f6b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.134 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[d087ba57-9a1b-456a-9c59-4b95f0e41e60]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.135 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.158 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.162 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.162 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.162 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.163 243708 DEBUG os_brick.utils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] <== get_connector_properties: return (61ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.163 243708 DEBUG nova.virt.block_device [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Updating existing volume attachment record: c54019be-d4d7-458b-ac6f-d39073f7ba32 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.275 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.276 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.276 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.276 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.277 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.278 243708 INFO nova.compute.manager [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Terminating instance
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.279 243708 DEBUG nova.compute.manager [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:14:42 compute-0 kernel: tapfd18f992-63 (unregistering): left promiscuous mode
Dec 13 04:14:42 compute-0 NetworkManager[48899]: <info>  [1765599282.3351] device (tapfd18f992-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.346 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 ovn_controller[145204]: 2025-12-13T04:14:42Z|00100|binding|INFO|Releasing lport fd18f992-6376-4850-a95a-3f4ad2cbe95c from this chassis (sb_readonly=0)
Dec 13 04:14:42 compute-0 ovn_controller[145204]: 2025-12-13T04:14:42Z|00101|binding|INFO|Setting lport fd18f992-6376-4850-a95a-3f4ad2cbe95c down in Southbound
Dec 13 04:14:42 compute-0 ovn_controller[145204]: 2025-12-13T04:14:42Z|00102|binding|INFO|Removing iface tapfd18f992-63 ovn-installed in OVS
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.356 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3f:26:b7 10.100.0.4'], port_security=['fa:16:3e:3f:26:b7 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '55c4c422-4f9d-419b-90e2-15b632b4b37b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-41eda195-3065-4521-82db-3eddd497e5cd', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cd7324f82be24328bd8a9643cc9032d8', 'neutron:revision_number': '4', 'neutron:security_group_ids': '4191713e-873d-4eb6-b762-8df3f67a8d44', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=64388f9f-7b49-475a-b3b3-2a91942711f1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=fd18f992-6376-4850-a95a-3f4ad2cbe95c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.358 154842 INFO neutron.agent.ovn.metadata.agent [-] Port fd18f992-6376-4850-a95a-3f4ad2cbe95c in datapath 41eda195-3065-4521-82db-3eddd497e5cd unbound from our chassis
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.360 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 41eda195-3065-4521-82db-3eddd497e5cd, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.362 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2cbef58d-84ea-4de3-883a-cfa438dd0cd3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.363 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd namespace which is not needed anymore
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.365 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Dec 13 04:14:42 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 14.336s CPU time.
Dec 13 04:14:42 compute-0 systemd-machined[206767]: Machine qemu-8-instance-00000008 terminated.
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.485 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.511 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [NOTICE]   (257852) : haproxy version is 2.8.14-c23fe91
Dec 13 04:14:42 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [NOTICE]   (257852) : path to executable is /usr/sbin/haproxy
Dec 13 04:14:42 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [WARNING]  (257852) : Exiting Master process...
Dec 13 04:14:42 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [WARNING]  (257852) : Exiting Master process...
Dec 13 04:14:42 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [ALERT]    (257852) : Current worker (257869) exited with code 143 (Terminated)
Dec 13 04:14:42 compute-0 neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd[257847]: [WARNING]  (257852) : All workers exited. Exiting... (0)
Dec 13 04:14:42 compute-0 systemd[1]: libpod-c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4.scope: Deactivated successfully.
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.529 243708 INFO nova.virt.libvirt.driver [-] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Instance destroyed successfully.
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.530 243708 DEBUG nova.objects.instance [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lazy-loading 'resources' on Instance uuid 55c4c422-4f9d-419b-90e2-15b632b4b37b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:42 compute-0 podman[259302]: 2025-12-13 04:14:42.532070928 +0000 UTC m=+0.054368092 container died c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.548 243708 DEBUG nova.virt.libvirt.vif [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:14:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBackupRestore-server-1076716418',display_name='tempest-TestVolumeBackupRestore-server-1076716418',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebackuprestore-server-1076716418',id=8,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKwCfywGeX4KNWbwl/YWcIkB0JHthzxVSuDxRj+tFh1zCyCWmDXmFfuCy9/NO9tpLdJV+XuQs2LSQu3i8HmzFrc5PfbkJfnyVqG69i1o8kyTv1xbZEHG5R5XpGI/cRXEjA==',key_name='tempest-TestVolumeBackupRestore-66644377',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:14:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cd7324f82be24328bd8a9643cc9032d8',ramdisk_id='',reservation_id='r-jt2u8rlp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBackupRestore-978965736',owner_user_name='tempest-TestVolumeBackupRestore-978965736-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:14:21Z,user_data=None,user_id='873a37f2f9d84afe9b5a4fe8861d0832',uuid=55c4c422-4f9d-419b-90e2-15b632b4b37b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.549 243708 DEBUG nova.network.os_vif_util [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Converting VIF {"id": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "address": "fa:16:3e:3f:26:b7", "network": {"id": "41eda195-3065-4521-82db-3eddd497e5cd", "bridge": "br-int", "label": "tempest-TestVolumeBackupRestore-887832243-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cd7324f82be24328bd8a9643cc9032d8", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfd18f992-63", "ovs_interfaceid": "fd18f992-6376-4850-a95a-3f4ad2cbe95c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.550 243708 DEBUG nova.network.os_vif_util [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3f:26:b7,bridge_name='br-int',has_traffic_filtering=True,id=fd18f992-6376-4850-a95a-3f4ad2cbe95c,network=Network(41eda195-3065-4521-82db-3eddd497e5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd18f992-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.550 243708 DEBUG os_vif [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:26:b7,bridge_name='br-int',has_traffic_filtering=True,id=fd18f992-6376-4850-a95a-3f4ad2cbe95c,network=Network(41eda195-3065-4521-82db-3eddd497e5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd18f992-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.553 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.554 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfd18f992-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.555 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.557 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.560 243708 INFO os_vif [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3f:26:b7,bridge_name='br-int',has_traffic_filtering=True,id=fd18f992-6376-4850-a95a-3f4ad2cbe95c,network=Network(41eda195-3065-4521-82db-3eddd497e5cd),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfd18f992-63')
Dec 13 04:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-4204f78b9f688450c330b97c3ebebe208b41c08b3e291a0ed79f10ccbf275027-merged.mount: Deactivated successfully.
Dec 13 04:14:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4-userdata-shm.mount: Deactivated successfully.
Dec 13 04:14:42 compute-0 podman[259302]: 2025-12-13 04:14:42.57580287 +0000 UTC m=+0.098100004 container cleanup c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202)
Dec 13 04:14:42 compute-0 systemd[1]: libpod-conmon-c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4.scope: Deactivated successfully.
Dec 13 04:14:42 compute-0 podman[259354]: 2025-12-13 04:14:42.6593472 +0000 UTC m=+0.053487528 container remove c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.665 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8e4d3e30-b273-4edf-bc36-819b4a248c24]: (4, ('Sat Dec 13 04:14:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd (c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4)\nc0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4\nSat Dec 13 04:14:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd (c0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4)\nc0afd1ae2e02e1ef516a19436454644a9b1c27a6c704070d2c8e5409872b5de4\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.667 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c675aab6-8c1a-45e0-803a-d90ae57cffd5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.668 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap41eda195-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.670 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 kernel: tap41eda195-30: left promiscuous mode
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.686 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.689 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[93042290-9a2a-4715-af33-7d85feab78f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.703 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6171e489-9f42-472c-9d02-6c579fdded0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.704 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8715aa95-5fb0-4b31-8364-0aa8bdf0b086]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.723 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2999bd7b-0622-4eff-889a-84a990939770]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 399983, 'reachable_time': 21350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259374, 'error': None, 'target': 'ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.727 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-41eda195-3065-4521-82db-3eddd497e5cd deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:14:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:42.727 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[56287cb5-02f0-45f3-98c6-f749c3200e50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d41eda195\x2d3065\x2d4521\x2d82db\x2d3eddd497e5cd.mount: Deactivated successfully.
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.735 243708 INFO nova.virt.libvirt.driver [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Deleting instance files /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b_del
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.736 243708 INFO nova.virt.libvirt.driver [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Deletion of /var/lib/nova/instances/55c4c422-4f9d-419b-90e2-15b632b4b37b_del complete
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.774 243708 DEBUG nova.network.neutron [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Updating instance_info_cache with network_info: [{"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.794 243708 INFO nova.compute.manager [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Took 0.51 seconds to destroy the instance on the hypervisor.
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.795 243708 DEBUG oslo.service.loopingcall [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.795 243708 DEBUG nova.compute.manager [-] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.796 243708 DEBUG nova.network.neutron [-] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.807 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Releasing lock "refresh_cache-4b2c5a9d-6552-48bf-92c4-1032bd4d509b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.807 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Instance network_info: |[{"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.808 243708 DEBUG oslo_concurrency.lockutils [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-4b2c5a9d-6552-48bf-92c4-1032bd4d509b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.808 243708 DEBUG nova.network.neutron [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Refreshing network info cache for port 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.810 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Start _get_guest_xml network_info=[{"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:14:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.817 243708 WARNING nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.826 243708 DEBUG nova.virt.libvirt.host [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.827 243708 DEBUG nova.virt.libvirt.host [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.831 243708 DEBUG nova.virt.libvirt.host [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.832 243708 DEBUG nova.virt.libvirt.host [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.832 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.832 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.833 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.833 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.833 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.833 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.834 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.834 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.834 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.834 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.834 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.834 243708 DEBUG nova.virt.hardware [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:14:42 compute-0 nova_compute[243704]: 2025-12-13 04:14:42.837 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:42 compute-0 ceph-mon[75071]: pgmap v1134: 305 pgs: 305 active+clean; 332 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 4.1 MiB/s wr, 174 op/s
Dec 13 04:14:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2531633479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1419539146' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.377 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.399 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.403 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 332 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 943 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.545 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.547 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.547 243708 INFO nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Creating image(s)
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.548 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.548 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Ensure instance console log exists: /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.549 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.549 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.549 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.551 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Start _get_guest_xml network_info=[{"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-03618eba-2360-492c-a5c9-a345a2a9f32c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '03618eba-2360-492c-a5c9-a345a2a9f32c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '34e6d510-9511-4913-b094-522edcf66b05', 'attached_at': '', 'detached_at': '', 'volume_id': '03618eba-2360-492c-a5c9-a345a2a9f32c', 'serial': '03618eba-2360-492c-a5c9-a345a2a9f32c'}, 'disk_bus': 'virtio', 'attachment_id': 'c54019be-d4d7-458b-ac6f-d39073f7ba32', 'device_type': 'disk', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.555 243708 WARNING nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.561 243708 DEBUG nova.virt.libvirt.host [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.561 243708 DEBUG nova.virt.libvirt.host [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.566 243708 DEBUG nova.virt.libvirt.host [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.566 243708 DEBUG nova.virt.libvirt.host [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.566 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.567 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.567 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.568 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.568 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.568 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.568 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.569 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.569 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.569 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.570 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.570 243708 DEBUG nova.virt.hardware [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.592 243708 DEBUG nova.storage.rbd_utils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 34e6d510-9511-4913-b094-522edcf66b05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.596 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.656 243708 DEBUG nova.network.neutron [-] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.696 243708 INFO nova.compute.manager [-] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Took 0.90 seconds to deallocate network for instance.
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.716 243708 DEBUG nova.compute.manager [req-b04743db-3ad4-4035-b0e8-95336dcec67b req-c76f17b1-80fb-42a8-889a-1402a5a9d5a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-vif-deleted-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.801 243708 DEBUG nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.802 243708 DEBUG nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing instance network info cache due to event network-changed-fd18f992-6376-4850-a95a-3f4ad2cbe95c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.802 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.802 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.803 243708 DEBUG nova.network.neutron [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Refreshing network info cache for port fd18f992-6376-4850-a95a-3f4ad2cbe95c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.901 243708 INFO nova.compute.manager [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Took 0.20 seconds to detach 1 volumes for instance.
Dec 13 04:14:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2531633479' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1419539146' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.939 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.939 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/538554483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.976 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.978 243708 DEBUG nova.virt.libvirt.vif [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1328926646',display_name='tempest-VolumesActionsTest-instance-1328926646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1328926646',id=11,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae283283ca5a4a4281495561d7b0443a',ramdisk_id='',reservation_id='r-57s5l2y8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1326765317',owner_user_name='tempest-VolumesActionsTest-1326765317-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:38Z,user_data=None,user_id='2848adac59524388ba4931e7afd46b47',uuid=4b2c5a9d-6552-48bf-92c4-1032bd4d509b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.978 243708 DEBUG nova.network.os_vif_util [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converting VIF {"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.979 243708 DEBUG nova.network.os_vif_util [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:27:f6,bridge_name='br-int',has_traffic_filtering=True,id=6a19f0b6-14ea-4fee-b454-cf0d6746dc05,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a19f0b6-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.980 243708 DEBUG nova.objects.instance [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b2c5a9d-6552-48bf-92c4-1032bd4d509b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.993 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <uuid>4b2c5a9d-6552-48bf-92c4-1032bd4d509b</uuid>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <name>instance-0000000b</name>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesActionsTest-instance-1328926646</nova:name>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:14:42</nova:creationTime>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:user uuid="2848adac59524388ba4931e7afd46b47">tempest-VolumesActionsTest-1326765317-project-member</nova:user>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:project uuid="ae283283ca5a4a4281495561d7b0443a">tempest-VolumesActionsTest-1326765317</nova:project>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <nova:port uuid="6a19f0b6-14ea-4fee-b454-cf0d6746dc05">
Dec 13 04:14:43 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <system>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <entry name="serial">4b2c5a9d-6552-48bf-92c4-1032bd4d509b</entry>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <entry name="uuid">4b2c5a9d-6552-48bf-92c4-1032bd4d509b</entry>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </system>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <os>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   </os>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <features>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   </features>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk">
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk.config">
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:43 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:87:27:f6"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <target dev="tap6a19f0b6-14"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/console.log" append="off"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <video>
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </video>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:14:43 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:14:43 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:14:43 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:14:43 compute-0 nova_compute[243704]: </domain>
Dec 13 04:14:43 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.994 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Preparing to wait for external event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.994 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.995 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.995 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.996 243708 DEBUG nova.virt.libvirt.vif [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1328926646',display_name='tempest-VolumesActionsTest-instance-1328926646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1328926646',id=11,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ae283283ca5a4a4281495561d7b0443a',ramdisk_id='',reservation_id='r-57s5l2y8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesActionsTest-1326765317',owner_user_name='tempest-VolumesActionsTest-1326765317-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:38Z,user_data=None,user_id='2848adac59524388ba4931e7afd46b47',uuid=4b2c5a9d-6552-48bf-92c4-1032bd4d509b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.996 243708 DEBUG nova.network.os_vif_util [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converting VIF {"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.996 243708 DEBUG nova.network.os_vif_util [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:27:f6,bridge_name='br-int',has_traffic_filtering=True,id=6a19f0b6-14ea-4fee-b454-cf0d6746dc05,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a19f0b6-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.997 243708 DEBUG os_vif [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:27:f6,bridge_name='br-int',has_traffic_filtering=True,id=6a19f0b6-14ea-4fee-b454-cf0d6746dc05,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a19f0b6-14') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.997 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.998 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:43 compute-0 nova_compute[243704]: 2025-12-13 04:14:43.998 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.000 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.000 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a19f0b6-14, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.001 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a19f0b6-14, col_values=(('external_ids', {'iface-id': '6a19f0b6-14ea-4fee-b454-cf0d6746dc05', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:27:f6', 'vm-uuid': '4b2c5a9d-6552-48bf-92c4-1032bd4d509b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:44 compute-0 NetworkManager[48899]: <info>  [1765599284.0037] manager: (tap6a19f0b6-14): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.005 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.007 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.010 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.010 243708 INFO os_vif [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:27:f6,bridge_name='br-int',has_traffic_filtering=True,id=6a19f0b6-14ea-4fee-b454-cf0d6746dc05,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a19f0b6-14')
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.023 243708 DEBUG oslo_concurrency.processutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.081 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.082 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.083 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] No VIF found with MAC fa:16:3e:87:27:f6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.083 243708 INFO nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Using config drive
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.108 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:44 compute-0 podman[259479]: 2025-12-13 04:14:44.112818813 +0000 UTC m=+0.063346055 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 04:14:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:14:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/933921531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.155 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.176 243708 DEBUG nova.virt.libvirt.vif [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-727355251',display_name='tempest-TestVolumeBootPattern-server-727355251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-727355251',id=10,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-nqezoix8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:37Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=34e6d510-9511-4913-b094-522edcf66b05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.177 243708 DEBUG nova.network.os_vif_util [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.178 243708 DEBUG nova.network.os_vif_util [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ad:93,bridge_name='br-int',has_traffic_filtering=True,id=5a4345e6-0422-4f4a-affc-2c1023f05fe6,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a4345e6-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.179 243708 DEBUG nova.objects.instance [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'pci_devices' on Instance uuid 34e6d510-9511-4913-b094-522edcf66b05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.193 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <uuid>34e6d510-9511-4913-b094-522edcf66b05</uuid>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <name>instance-0000000a</name>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBootPattern-server-727355251</nova:name>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:14:43</nova:creationTime>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:user uuid="9b8c4a2342e4420d8140b403edbcba5a">tempest-TestVolumeBootPattern-236547311-project-member</nova:user>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:project uuid="27927978f9684df1a72cecb32505e93b">tempest-TestVolumeBootPattern-236547311</nova:project>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <nova:port uuid="5a4345e6-0422-4f4a-affc-2c1023f05fe6">
Dec 13 04:14:44 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <system>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <entry name="serial">34e6d510-9511-4913-b094-522edcf66b05</entry>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <entry name="uuid">34e6d510-9511-4913-b094-522edcf66b05</entry>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </system>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <os>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   </os>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <features>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   </features>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/34e6d510-9511-4913-b094-522edcf66b05_disk.config">
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-03618eba-2360-492c-a5c9-a345a2a9f32c">
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       </source>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:14:44 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <serial>03618eba-2360-492c-a5c9-a345a2a9f32c</serial>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:4d:ad:93"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <target dev="tap5a4345e6-04"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/console.log" append="off"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <video>
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </video>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:14:44 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:14:44 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:14:44 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:14:44 compute-0 nova_compute[243704]: </domain>
Dec 13 04:14:44 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.194 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Preparing to wait for external event network-vif-plugged-5a4345e6-0422-4f4a-affc-2c1023f05fe6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.194 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "34e6d510-9511-4913-b094-522edcf66b05-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.194 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.194 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.195 243708 DEBUG nova.virt.libvirt.vif [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-727355251',display_name='tempest-TestVolumeBootPattern-server-727355251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-727355251',id=10,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-nqezoix8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:14:37Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=34e6d510-9511-4913-b094-522edcf66b05,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.196 243708 DEBUG nova.network.os_vif_util [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.196 243708 DEBUG nova.network.os_vif_util [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ad:93,bridge_name='br-int',has_traffic_filtering=True,id=5a4345e6-0422-4f4a-affc-2c1023f05fe6,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a4345e6-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.197 243708 DEBUG os_vif [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ad:93,bridge_name='br-int',has_traffic_filtering=True,id=5a4345e6-0422-4f4a-affc-2c1023f05fe6,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a4345e6-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.197 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.198 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.198 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.201 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.201 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5a4345e6-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.202 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5a4345e6-04, col_values=(('external_ids', {'iface-id': '5a4345e6-0422-4f4a-affc-2c1023f05fe6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:ad:93', 'vm-uuid': '34e6d510-9511-4913-b094-522edcf66b05'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:44 compute-0 NetworkManager[48899]: <info>  [1765599284.2051] manager: (tap5a4345e6-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.204 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.208 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.213 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.213 243708 INFO os_vif [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ad:93,bridge_name='br-int',has_traffic_filtering=True,id=5a4345e6-0422-4f4a-affc-2c1023f05fe6,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a4345e6-04')
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.218 243708 DEBUG nova.network.neutron [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.261 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.263 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.263 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No VIF found with MAC fa:16:3e:4d:ad:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.264 243708 INFO nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Using config drive
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.288 243708 DEBUG nova.storage.rbd_utils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 34e6d510-9511-4913-b094-522edcf66b05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.298 243708 DEBUG nova.network.neutron [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Updated VIF entry in instance network info cache for port 5a4345e6-0422-4f4a-affc-2c1023f05fe6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.299 243708 DEBUG nova.network.neutron [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Updating instance_info_cache with network_info: [{"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.330 243708 DEBUG oslo_concurrency.lockutils [req-f8de53f1-27c0-4b8e-a218-9eea0cecf65b req-3f3a70c4-5d6c-4890-918f-3042bce2470e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-34e6d510-9511-4913-b094-522edcf66b05" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.519 243708 INFO nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Creating config drive at /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/disk.config
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.524 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvdaxwejt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.587 243708 DEBUG nova.network.neutron [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/405817042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.606 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-55c4c422-4f9d-419b-90e2-15b632b4b37b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.606 243708 DEBUG nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-vif-unplugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.607 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.607 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.607 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.607 243708 DEBUG nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] No waiting events found dispatching network-vif-unplugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.608 243708 DEBUG nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-vif-unplugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.608 243708 DEBUG nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received event network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.608 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.608 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.608 243708 DEBUG oslo_concurrency.lockutils [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.608 243708 DEBUG nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] No waiting events found dispatching network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.609 243708 WARNING nova.compute.manager [req-0a458e1b-2f8f-47a2-a1f5-12d80c7d50b5 req-a0727776-2067-492f-a93b-c38905e9a9fe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Received unexpected event network-vif-plugged-fd18f992-6376-4850-a95a-3f4ad2cbe95c for instance with vm_state active and task_state deleting.
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.626 243708 DEBUG oslo_concurrency.processutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.634 243708 DEBUG nova.compute.provider_tree [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.646 243708 DEBUG nova.scheduler.client.report [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.654 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvdaxwejt" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.683 243708 DEBUG nova.storage.rbd_utils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] rbd image 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.687 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/disk.config 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:14:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.6 total, 600.0 interval
                                           Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 4148 syncs, 3.35 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5426 writes, 15K keys, 5426 commit groups, 1.0 writes per commit group, ingest: 12.42 MB, 0.02 MB/s
                                           Interval WAL: 5426 writes, 2344 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.716 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.764 243708 INFO nova.scheduler.client.report [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Deleted allocations for instance 55c4c422-4f9d-419b-90e2-15b632b4b37b
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.774 243708 INFO nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Creating config drive at /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/disk.config
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.778 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplyu0xmzr execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.824 243708 DEBUG oslo_concurrency.processutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/disk.config 4b2c5a9d-6552-48bf-92c4-1032bd4d509b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.824 243708 INFO nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Deleting local config drive /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b/disk.config because it was imported into RBD.
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.876 243708 DEBUG oslo_concurrency.lockutils [None req-30e7da41-5520-4c25-9c04-95650deb5bc1 873a37f2f9d84afe9b5a4fe8861d0832 cd7324f82be24328bd8a9643cc9032d8 - - default default] Lock "55c4c422-4f9d-419b-90e2-15b632b4b37b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:44 compute-0 systemd-udevd[259281]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:44 compute-0 NetworkManager[48899]: <info>  [1765599284.8847] manager: (tap6a19f0b6-14): new Tun device (/org/freedesktop/NetworkManager/Devices/68)
Dec 13 04:14:44 compute-0 kernel: tap6a19f0b6-14: entered promiscuous mode
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.890 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 ovn_controller[145204]: 2025-12-13T04:14:44Z|00103|binding|INFO|Claiming lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for this chassis.
Dec 13 04:14:44 compute-0 ovn_controller[145204]: 2025-12-13T04:14:44Z|00104|binding|INFO|6a19f0b6-14ea-4fee-b454-cf0d6746dc05: Claiming fa:16:3e:87:27:f6 10.100.0.12
Dec 13 04:14:44 compute-0 NetworkManager[48899]: <info>  [1765599284.9005] device (tap6a19f0b6-14): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:14:44 compute-0 NetworkManager[48899]: <info>  [1765599284.9014] device (tap6a19f0b6-14): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.901 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:27:f6 10.100.0.12'], port_security=['fa:16:3e:87:27:f6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4b2c5a9d-6552-48bf-92c4-1032bd4d509b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b949b294-460a-4397-aeb7-2ff487ba5063', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae283283ca5a4a4281495561d7b0443a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8f4a9bce-45f1-49ad-8778-c6874894549e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90c947fe-e086-4707-9d7a-e7ae8b4033d2, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=6a19f0b6-14ea-4fee-b454-cf0d6746dc05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.902 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 in datapath b949b294-460a-4397-aeb7-2ff487ba5063 bound to our chassis
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.904 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b949b294-460a-4397-aeb7-2ff487ba5063
Dec 13 04:14:44 compute-0 ovn_controller[145204]: 2025-12-13T04:14:44Z|00105|binding|INFO|Setting lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 ovn-installed in OVS
Dec 13 04:14:44 compute-0 ovn_controller[145204]: 2025-12-13T04:14:44Z|00106|binding|INFO|Setting lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 up in Southbound
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.914 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.916 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplyu0xmzr" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.920 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9390ca-d9ed-4dc0-8a85-5d8d08247c41]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.921 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb949b294-41 in ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.924 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb949b294-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.924 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ad52a152-1506-42f1-aaf3-4aed2073076b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:44 compute-0 systemd-machined[206767]: New machine qemu-10-instance-0000000b.
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.926 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[936b3985-e0fe-4537-a5a1-0e549dfdddfa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:44 compute-0 ceph-mon[75071]: pgmap v1135: 305 pgs: 305 active+clean; 332 MiB data, 365 MiB used, 60 GiB / 60 GiB avail; 943 KiB/s rd, 3.6 MiB/s wr, 153 op/s
Dec 13 04:14:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/538554483' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/933921531' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:14:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/405817042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:44 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000b.
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.942 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[782684da-15e8-4431-95d7-c699020e98d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.953 243708 DEBUG nova.storage.rbd_utils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 34e6d510-9511-4913-b094-522edcf66b05_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.958 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/disk.config 34e6d510-9511-4913-b094-522edcf66b05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:44 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:44.968 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d56202bd-c353-42f3-9e69-883890a26553]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:44 compute-0 nova_compute[243704]: 2025-12-13 04:14:44.980 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.002 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[f46e0e30-c0dc-428f-9e70-f863813f10e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.0116] manager: (tapb949b294-40): new Veth device (/org/freedesktop/NetworkManager/Devices/69)
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.010 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[02bbe2f4-0535-4973-b177-7dd7c28a0aaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.045 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[e42704b6-7440-4950-b4f1-5ff89864fd25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.048 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[7c294784-7fa6-46b9-a361-2264366685e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.0745] device (tapb949b294-40): carrier: link connected
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.080 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[5f9be4ee-d21f-4cd4-ae30-9b6902c0b777]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.091 243708 DEBUG oslo_concurrency.processutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/disk.config 34e6d510-9511-4913-b094-522edcf66b05_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.092 243708 INFO nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Deleting local config drive /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05/disk.config because it was imported into RBD.
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.098 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b852dc25-c7bd-48e3-887c-896ed3053d42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb949b294-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:86:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402375, 'reachable_time': 18000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259692, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.118 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9da6516c-8015-4cd0-a36d-0e55e30d470d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe3e:86b9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402375, 'tstamp': 402375}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259694, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.135 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d058ac56-f901-4cfa-9f7b-8bf63d24294e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb949b294-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:3e:86:b9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 39], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402375, 'reachable_time': 18000, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259699, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.1379] manager: (tap5a4345e6-04): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Dec 13 04:14:45 compute-0 kernel: tap5a4345e6-04: entered promiscuous mode
Dec 13 04:14:45 compute-0 systemd-udevd[259676]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:45 compute-0 ovn_controller[145204]: 2025-12-13T04:14:45Z|00107|binding|INFO|Claiming lport 5a4345e6-0422-4f4a-affc-2c1023f05fe6 for this chassis.
Dec 13 04:14:45 compute-0 ovn_controller[145204]: 2025-12-13T04:14:45Z|00108|binding|INFO|5a4345e6-0422-4f4a-affc-2c1023f05fe6: Claiming fa:16:3e:4d:ad:93 10.100.0.7
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.142 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.147 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ad:93 10.100.0.7'], port_security=['fa:16:3e:4d:ad:93 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34e6d510-9511-4913-b094-522edcf66b05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a51e76ad-e401-4d68-b2f5-a9d28269b3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=5a4345e6-0422-4f4a-affc-2c1023f05fe6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.1496] device (tap5a4345e6-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.1506] device (tap5a4345e6-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:14:45 compute-0 ovn_controller[145204]: 2025-12-13T04:14:45Z|00109|binding|INFO|Setting lport 5a4345e6-0422-4f4a-affc-2c1023f05fe6 ovn-installed in OVS
Dec 13 04:14:45 compute-0 ovn_controller[145204]: 2025-12-13T04:14:45Z|00110|binding|INFO|Setting lport 5a4345e6-0422-4f4a-affc-2c1023f05fe6 up in Southbound
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.162 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:45 compute-0 systemd-machined[206767]: New machine qemu-11-instance-0000000a.
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.167 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:45 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000a.
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.184 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599270.18352, 0ef2f9af-02e7-4df3-860b-d86160b330eb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.185 243708 INFO nova.compute.manager [-] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] VM Stopped (Lifecycle Event)
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.184 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad8fe11-2fb5-4884-9258-a9936558447b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.205 243708 DEBUG nova.compute.manager [None req-133944b3-e23e-4b41-a214-b803572b944f - - - - - -] [instance: 0ef2f9af-02e7-4df3-860b-d86160b330eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.258 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[14914abb-62a2-42fa-86e3-a61268be43ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.260 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb949b294-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.260 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.260 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb949b294-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:45 compute-0 kernel: tapb949b294-40: entered promiscuous mode
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.2631] manager: (tapb949b294-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.265 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb949b294-40, col_values=(('external_ids', {'iface-id': 'ea765a2c-5afd-40d3-8875-8e9014d19426'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.262 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:45 compute-0 ovn_controller[145204]: 2025-12-13T04:14:45Z|00111|binding|INFO|Releasing lport ea765a2c-5afd-40d3-8875-8e9014d19426 from this chassis (sb_readonly=0)
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.270 243708 DEBUG nova.network.neutron [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Updated VIF entry in instance network info cache for port 6a19f0b6-14ea-4fee-b454-cf0d6746dc05. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.270 243708 DEBUG nova.network.neutron [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Updating instance_info_cache with network_info: [{"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.283 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.283 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b949b294-460a-4397-aeb7-2ff487ba5063.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b949b294-460a-4397-aeb7-2ff487ba5063.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.284 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9a19aa54-7c34-4797-9bf6-17fcd687bdf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.285 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-b949b294-460a-4397-aeb7-2ff487ba5063
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/b949b294-460a-4397-aeb7-2ff487ba5063.pid.haproxy
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID b949b294-460a-4397-aeb7-2ff487ba5063
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.286 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'env', 'PROCESS_TAG=haproxy-b949b294-460a-4397-aeb7-2ff487ba5063', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b949b294-460a-4397-aeb7-2ff487ba5063.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.287 243708 DEBUG oslo_concurrency.lockutils [req-d3804771-ec36-4575-ad7b-3acdcf95172e req-559974c6-95fd-4a07-b9a3-80b545b91c8b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-4b2c5a9d-6552-48bf-92c4-1032bd4d509b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.390 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599285.3903246, 4b2c5a9d-6552-48bf-92c4-1032bd4d509b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.391 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] VM Started (Lifecycle Event)
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.405 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.410 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599285.390445, 4b2c5a9d-6552-48bf-92c4-1032bd4d509b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.410 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] VM Paused (Lifecycle Event)
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.427 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.432 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.452 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 362 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 969 KiB/s rd, 4.8 MiB/s wr, 191 op/s
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.580 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599285.580164, 34e6d510-9511-4913-b094-522edcf66b05 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.581 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] VM Started (Lifecycle Event)
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.595 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.599 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599285.5802772, 34e6d510-9511-4913-b094-522edcf66b05 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.600 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] VM Paused (Lifecycle Event)
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.617 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.622 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.641 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:45 compute-0 podman[259830]: 2025-12-13 04:14:45.670167945 +0000 UTC m=+0.047860136 container create 426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:14:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:14:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1669919899' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:14:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1669919899' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:45 compute-0 systemd[1]: Started libpod-conmon-426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65.scope.
Dec 13 04:14:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/043f2e985487f2e72737c3526a32e98cb7e94e8a695d0e914bfc17f456900993/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:45 compute-0 podman[259830]: 2025-12-13 04:14:45.646177295 +0000 UTC m=+0.023869496 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:14:45 compute-0 podman[259830]: 2025-12-13 04:14:45.742251454 +0000 UTC m=+0.119943645 container init 426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec 13 04:14:45 compute-0 podman[259830]: 2025-12-13 04:14:45.747533257 +0000 UTC m=+0.125225438 container start 426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 04:14:45 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[259846]: [NOTICE]   (259850) : New worker (259852) forked
Dec 13 04:14:45 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[259846]: [NOTICE]   (259850) : Loading success.
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.801 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 5a4345e6-0422-4f4a-affc-2c1023f05fe6 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.804 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.815 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[eafc541d-c62b-493c-84aa-579b56cd1fcb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.815 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc553cd2-51 in ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.817 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc553cd2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.817 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdf84ff-98b1-46a0-81e1-6eb52792ce03]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.818 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[13fa917b-ef47-4d34-9fa6-bfa2195a784c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.829 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[73a9e8fa-bdf4-44ee-8d2f-006e037d1f2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.852 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[72526e43-22d3-43b1-ba3a-eb73ef105ea4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.882 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[08386838-2bd1-4cf7-9f6f-28430adf2515]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.889 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2cb67218-0056-41a4-9ffc-f9b17b0be26a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 systemd-udevd[259806]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.8915] manager: (tapfc553cd2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/72)
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.906 243708 DEBUG nova.compute.manager [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.906 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.907 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.912 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.913 243708 DEBUG nova.compute.manager [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Processing event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.913 243708 DEBUG nova.compute.manager [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.913 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.913 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.913 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.914 243708 DEBUG nova.compute.manager [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] No waiting events found dispatching network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.914 243708 WARNING nova.compute.manager [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received unexpected event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for instance with vm_state building and task_state spawning.
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.914 243708 DEBUG nova.compute.manager [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Received event network-vif-plugged-5a4345e6-0422-4f4a-affc-2c1023f05fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.914 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "34e6d510-9511-4913-b094-522edcf66b05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.915 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.915 243708 DEBUG oslo_concurrency.lockutils [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.915 243708 DEBUG nova.compute.manager [req-d01a0cc1-56c5-47b6-a629-ce1fc1213ad2 req-5aac0467-9337-48d7-acad-39b9b3905554 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Processing event network-vif-plugged-5a4345e6-0422-4f4a-affc-2c1023f05fe6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.916 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.916 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.922 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[ceb77b6d-2c41-40be-814a-5813248982df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.921 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599285.9207714, 4b2c5a9d-6552-48bf-92c4-1032bd4d509b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.921 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] VM Resumed (Lifecycle Event)
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.925 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f508fc-3a5f-4b9f-a77a-c36072c9a779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.927 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.928 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:14:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e214 do_prune osdmap full prune enabled
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.942 243708 INFO nova.virt.libvirt.driver [-] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Instance spawned successfully.
Dec 13 04:14:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1669919899' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1669919899' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.943 243708 INFO nova.virt.libvirt.driver [-] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Instance spawned successfully.
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.946 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.948 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.951 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e215 e215: 3 total, 3 up, 3 in
Dec 13 04:14:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e215: 3 total, 3 up, 3 in
Dec 13 04:14:45 compute-0 NetworkManager[48899]: <info>  [1765599285.9594] device (tapfc553cd2-50): carrier: link connected
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.963 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.966 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[ad18a903-029c-47e3-83fa-4ca301554716]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.978 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.979 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.980 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.980 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.981 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.981 243708 DEBUG nova.virt.libvirt.driver [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.986 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.987 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599285.9208696, 34e6d510-9511-4913-b094-522edcf66b05 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:45.987 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8d3c5111-d856-4096-a505-2f9cb170ebb5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402463, 'reachable_time': 23149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 259886, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.987 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] VM Resumed (Lifecycle Event)
Dec 13 04:14:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e215 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.991 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.991 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.992 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.992 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.993 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:45 compute-0 nova_compute[243704]: 2025-12-13 04:14:45.993 243708 DEBUG nova.virt.libvirt.driver [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.008 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a46bee96-688c-41e1-9e7e-af79e25b25fb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:ae9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 402463, 'tstamp': 402463}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 259887, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.024 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.029 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.030 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ad5e7ae7-2dda-4891-a75b-281460f1d42f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402463, 'reachable_time': 23149, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 259888, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.040 243708 INFO nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Took 2.49 seconds to spawn the instance on the hypervisor.
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.041 243708 DEBUG nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.069 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c8088298-45fb-408b-8c17-b0419bc81af4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.069 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.111 243708 INFO nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Took 7.53 seconds to spawn the instance on the hypervisor.
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.111 243708 DEBUG nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.114 243708 INFO nova.compute.manager [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Took 9.44 seconds to build instance.
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.141 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2d376a83-b8ee-4582-8a0f-ff07d66e76b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.144 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.144 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.145 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:46 compute-0 kernel: tapfc553cd2-50: entered promiscuous mode
Dec 13 04:14:46 compute-0 NetworkManager[48899]: <info>  [1765599286.1479] manager: (tapfc553cd2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.155 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:46 compute-0 ovn_controller[145204]: 2025-12-13T04:14:46Z|00112|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.162 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.162 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5d624d83-20fc-416c-ade3-0783b7df85f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.163 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:14:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:46.164 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'env', 'PROCESS_TAG=haproxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.166 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.169 243708 DEBUG oslo_concurrency.lockutils [None req-c9a3326b-df8a-4ccf-a582-f850d8210341 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.178 243708 INFO nova.compute.manager [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Took 8.86 seconds to build instance.
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.180 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:46 compute-0 nova_compute[243704]: 2025-12-13 04:14:46.195 243708 DEBUG oslo_concurrency.lockutils [None req-2a3fb31b-bb63-41dd-8f21-d26bce30a737 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.922s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:14:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/476872738' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:14:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/476872738' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:46 compute-0 podman[259920]: 2025-12-13 04:14:46.568374068 +0000 UTC m=+0.054944726 container create 83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:14:46 compute-0 systemd[1]: Started libpod-conmon-83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582.scope.
Dec 13 04:14:46 compute-0 podman[259920]: 2025-12-13 04:14:46.543007222 +0000 UTC m=+0.029577900 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:14:46 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:14:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d769c3471fcc515a11556b271d47eaee48499f0b76336212b3258b081f1c044/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:14:46 compute-0 podman[259920]: 2025-12-13 04:14:46.701992702 +0000 UTC m=+0.188563390 container init 83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:14:46 compute-0 podman[259920]: 2025-12-13 04:14:46.711316374 +0000 UTC m=+0.197887032 container start 83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:14:46 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [NOTICE]   (259939) : New worker (259941) forked
Dec 13 04:14:46 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [NOTICE]   (259939) : Loading success.
Dec 13 04:14:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e215 do_prune osdmap full prune enabled
Dec 13 04:14:46 compute-0 ceph-mon[75071]: pgmap v1136: 305 pgs: 305 active+clean; 362 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 969 KiB/s rd, 4.8 MiB/s wr, 191 op/s
Dec 13 04:14:46 compute-0 ceph-mon[75071]: osdmap e215: 3 total, 3 up, 3 in
Dec 13 04:14:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/476872738' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/476872738' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e216 e216: 3 total, 3 up, 3 in
Dec 13 04:14:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e216: 3 total, 3 up, 3 in
Dec 13 04:14:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:14:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4108115516' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:14:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4108115516' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 362 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.486 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.504 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.504 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.505 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.505 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.505 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.506 243708 INFO nova.compute.manager [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Terminating instance
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.507 243708 DEBUG nova.compute.manager [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:14:47 compute-0 kernel: tap6a19f0b6-14 (unregistering): left promiscuous mode
Dec 13 04:14:47 compute-0 NetworkManager[48899]: <info>  [1765599287.5448] device (tap6a19f0b6-14): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00113|binding|INFO|Releasing lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 from this chassis (sb_readonly=0)
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00114|binding|INFO|Setting lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 down in Southbound
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.552 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00115|binding|INFO|Removing iface tap6a19f0b6-14 ovn-installed in OVS
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.555 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.560 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:27:f6 10.100.0.12'], port_security=['fa:16:3e:87:27:f6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4b2c5a9d-6552-48bf-92c4-1032bd4d509b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b949b294-460a-4397-aeb7-2ff487ba5063', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae283283ca5a4a4281495561d7b0443a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f4a9bce-45f1-49ad-8778-c6874894549e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90c947fe-e086-4707-9d7a-e7ae8b4033d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=6a19f0b6-14ea-4fee-b454-cf0d6746dc05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.563 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 in datapath b949b294-460a-4397-aeb7-2ff487ba5063 unbound from our chassis
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.566 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b949b294-460a-4397-aeb7-2ff487ba5063, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.567 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4f090f45-12bf-4aee-96e7-d89d8f7e7e4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.567 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 namespace which is not needed anymore
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.586 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Dec 13 04:14:47 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000b.scope: Consumed 2.050s CPU time.
Dec 13 04:14:47 compute-0 systemd-machined[206767]: Machine qemu-10-instance-0000000b terminated.
Dec 13 04:14:47 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[259846]: [NOTICE]   (259850) : haproxy version is 2.8.14-c23fe91
Dec 13 04:14:47 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[259846]: [NOTICE]   (259850) : path to executable is /usr/sbin/haproxy
Dec 13 04:14:47 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[259846]: [WARNING]  (259850) : Exiting Master process...
Dec 13 04:14:47 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[259846]: [ALERT]    (259850) : Current worker (259852) exited with code 143 (Terminated)
Dec 13 04:14:47 compute-0 neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063[259846]: [WARNING]  (259850) : All workers exited. Exiting... (0)
Dec 13 04:14:47 compute-0 systemd[1]: libpod-426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65.scope: Deactivated successfully.
Dec 13 04:14:47 compute-0 podman[259972]: 2025-12-13 04:14:47.710973862 +0000 UTC m=+0.050378683 container died 426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:14:47 compute-0 kernel: tap6a19f0b6-14: entered promiscuous mode
Dec 13 04:14:47 compute-0 NetworkManager[48899]: <info>  [1765599287.7244] manager: (tap6a19f0b6-14): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Dec 13 04:14:47 compute-0 systemd-udevd[259876]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00116|binding|INFO|Claiming lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for this chassis.
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00117|binding|INFO|6a19f0b6-14ea-4fee-b454-cf0d6746dc05: Claiming fa:16:3e:87:27:f6 10.100.0.12
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.725 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 kernel: tap6a19f0b6-14 (unregistering): left promiscuous mode
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.740 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:27:f6 10.100.0.12'], port_security=['fa:16:3e:87:27:f6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4b2c5a9d-6552-48bf-92c4-1032bd4d509b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b949b294-460a-4397-aeb7-2ff487ba5063', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae283283ca5a4a4281495561d7b0443a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f4a9bce-45f1-49ad-8778-c6874894549e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90c947fe-e086-4707-9d7a-e7ae8b4033d2, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=6a19f0b6-14ea-4fee-b454-cf0d6746dc05) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65-userdata-shm.mount: Deactivated successfully.
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.748 243708 INFO nova.virt.libvirt.driver [-] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Instance destroyed successfully.
Dec 13 04:14:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-043f2e985487f2e72737c3526a32e98cb7e94e8a695d0e914bfc17f456900993-merged.mount: Deactivated successfully.
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.749 243708 DEBUG nova.objects.instance [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lazy-loading 'resources' on Instance uuid 4b2c5a9d-6552-48bf-92c4-1032bd4d509b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00118|binding|INFO|Setting lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 ovn-installed in OVS
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00119|binding|INFO|Setting lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 up in Southbound
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.754 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00120|binding|INFO|Releasing lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 from this chassis (sb_readonly=1)
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00121|if_status|INFO|Not setting lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 down as sb is readonly
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00122|binding|INFO|Removing iface tap6a19f0b6-14 ovn-installed in OVS
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.757 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.760 243708 DEBUG nova.virt.libvirt.vif [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:14:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesActionsTest-instance-1328926646',display_name='tempest-VolumesActionsTest-instance-1328926646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesactionstest-instance-1328926646',id=11,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:14:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ae283283ca5a4a4281495561d7b0443a',ramdisk_id='',reservation_id='r-57s5l2y8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesActionsTest-1326765317',owner_user_name='tempest-VolumesActionsTest-1326765317-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:14:46Z,user_data=None,user_id='2848adac59524388ba4931e7afd46b47',uuid=4b2c5a9d-6552-48bf-92c4-1032bd4d509b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.761 243708 DEBUG nova.network.os_vif_util [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converting VIF {"id": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "address": "fa:16:3e:87:27:f6", "network": {"id": "b949b294-460a-4397-aeb7-2ff487ba5063", "bridge": "br-int", "label": "tempest-VolumesActionsTest-68092595-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ae283283ca5a4a4281495561d7b0443a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a19f0b6-14", "ovs_interfaceid": "6a19f0b6-14ea-4fee-b454-cf0d6746dc05", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.762 243708 DEBUG nova.network.os_vif_util [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:27:f6,bridge_name='br-int',has_traffic_filtering=True,id=6a19f0b6-14ea-4fee-b454-cf0d6746dc05,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a19f0b6-14') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00123|binding|INFO|Releasing lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 from this chassis (sb_readonly=0)
Dec 13 04:14:47 compute-0 ovn_controller[145204]: 2025-12-13T04:14:47Z|00124|binding|INFO|Setting lport 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 down in Southbound
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.763 243708 DEBUG os_vif [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:27:f6,bridge_name='br-int',has_traffic_filtering=True,id=6a19f0b6-14ea-4fee-b454-cf0d6746dc05,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a19f0b6-14') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:14:47 compute-0 podman[259972]: 2025-12-13 04:14:47.764527711 +0000 UTC m=+0.103932532 container cleanup 426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.765 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.765 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a19f0b6-14, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.767 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.769 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.774 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.776 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:27:f6 10.100.0.12'], port_security=['fa:16:3e:87:27:f6 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4b2c5a9d-6552-48bf-92c4-1032bd4d509b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b949b294-460a-4397-aeb7-2ff487ba5063', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ae283283ca5a4a4281495561d7b0443a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8f4a9bce-45f1-49ad-8778-c6874894549e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=90c947fe-e086-4707-9d7a-e7ae8b4033d2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=6a19f0b6-14ea-4fee-b454-cf0d6746dc05) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:47 compute-0 systemd[1]: libpod-conmon-426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65.scope: Deactivated successfully.
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.776 243708 INFO os_vif [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:27:f6,bridge_name='br-int',has_traffic_filtering=True,id=6a19f0b6-14ea-4fee-b454-cf0d6746dc05,network=Network(b949b294-460a-4397-aeb7-2ff487ba5063),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a19f0b6-14')
Dec 13 04:14:47 compute-0 podman[260004]: 2025-12-13 04:14:47.835611904 +0000 UTC m=+0.049874530 container remove 426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.843 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[903c4aa3-dd05-443d-bec5-4abf1515fe09]: (4, ('Sat Dec 13 04:14:47 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 (426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65)\n426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65\nSat Dec 13 04:14:47 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 (426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65)\n426b1520b94341dbdbf56b0d0d826ebe4d5fa0455ebaa3c0a69e1de88b392c65\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.845 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5e30b6-690b-47c4-a264-9c435c15cdbd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.846 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb949b294-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:47 compute-0 kernel: tapb949b294-40: left promiscuous mode
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.849 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.853 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[97df8145-8a26-4206-9bd6-104c5c8988d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 nova_compute[243704]: 2025-12-13 04:14:47.865 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.871 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1e024c18-ad88-4b85-86b7-9689cdf4e690]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.872 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[186fd299-ee87-4261-8e99-46d867cfe2ca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.893 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[37e70a81-29ba-4a44-93bb-6a3b239cb850]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402367, 'reachable_time': 44580, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260037, 'error': None, 'target': 'ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 systemd[1]: run-netns-ovnmeta\x2db949b294\x2d460a\x2d4397\x2daeb7\x2d2ff487ba5063.mount: Deactivated successfully.
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.897 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b949b294-460a-4397-aeb7-2ff487ba5063 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.897 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[29e34bcf-cfda-42de-b8a7-1ac172e2e267]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.898 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 in datapath b949b294-460a-4397-aeb7-2ff487ba5063 unbound from our chassis
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.900 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b949b294-460a-4397-aeb7-2ff487ba5063, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.901 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[04b15121-08b1-428c-8684-a2c2889f059d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.902 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 6a19f0b6-14ea-4fee-b454-cf0d6746dc05 in datapath b949b294-460a-4397-aeb7-2ff487ba5063 unbound from our chassis
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.903 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b949b294-460a-4397-aeb7-2ff487ba5063, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:14:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:47.904 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e0016f16-24ac-4ce4-8acd-a883c82e353d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:47 compute-0 ceph-mon[75071]: osdmap e216: 3 total, 3 up, 3 in
Dec 13 04:14:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4108115516' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4108115516' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:47 compute-0 ceph-mon[75071]: pgmap v1139: 305 pgs: 305 active+clean; 362 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 1.9 MiB/s wr, 55 op/s
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.004 243708 DEBUG nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Received event network-vif-plugged-5a4345e6-0422-4f4a-affc-2c1023f05fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.005 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "34e6d510-9511-4913-b094-522edcf66b05-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.005 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.005 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.006 243708 DEBUG nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] No waiting events found dispatching network-vif-plugged-5a4345e6-0422-4f4a-affc-2c1023f05fe6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.006 243708 WARNING nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Received unexpected event network-vif-plugged-5a4345e6-0422-4f4a-affc-2c1023f05fe6 for instance with vm_state active and task_state None.
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.007 243708 DEBUG nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-unplugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.007 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.008 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.008 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.008 243708 DEBUG nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] No waiting events found dispatching network-vif-unplugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.008 243708 DEBUG nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-unplugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.009 243708 DEBUG nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.009 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.009 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.010 243708 DEBUG oslo_concurrency.lockutils [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.010 243708 DEBUG nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] No waiting events found dispatching network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.010 243708 WARNING nova.compute.manager [req-ae9ba275-93fc-4288-8304-03dc6e269130 req-6ba7d517-e54b-4a1a-aa81-184259e2a9a2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received unexpected event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for instance with vm_state active and task_state deleting.
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.014 243708 INFO nova.virt.libvirt.driver [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Deleting instance files /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b_del
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.015 243708 INFO nova.virt.libvirt.driver [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Deletion of /var/lib/nova/instances/4b2c5a9d-6552-48bf-92c4-1032bd4d509b_del complete
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.059 243708 INFO nova.compute.manager [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Took 0.55 seconds to destroy the instance on the hypervisor.
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.060 243708 DEBUG oslo.service.loopingcall [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.060 243708 DEBUG nova.compute.manager [-] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.060 243708 DEBUG nova.network.neutron [-] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.131 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "34e6d510-9511-4913-b094-522edcf66b05" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.131 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.132 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "34e6d510-9511-4913-b094-522edcf66b05-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.132 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.132 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.134 243708 INFO nova.compute.manager [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Terminating instance
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.135 243708 DEBUG nova.compute.manager [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:14:48 compute-0 kernel: tap5a4345e6-04 (unregistering): left promiscuous mode
Dec 13 04:14:48 compute-0 NetworkManager[48899]: <info>  [1765599288.1782] device (tap5a4345e6-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:14:48 compute-0 ovn_controller[145204]: 2025-12-13T04:14:48Z|00125|binding|INFO|Releasing lport 5a4345e6-0422-4f4a-affc-2c1023f05fe6 from this chassis (sb_readonly=0)
Dec 13 04:14:48 compute-0 ovn_controller[145204]: 2025-12-13T04:14:48Z|00126|binding|INFO|Setting lport 5a4345e6-0422-4f4a-affc-2c1023f05fe6 down in Southbound
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.183 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:48 compute-0 ovn_controller[145204]: 2025-12-13T04:14:48Z|00127|binding|INFO|Removing iface tap5a4345e6-04 ovn-installed in OVS
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.192 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:ad:93 10.100.0.7'], port_security=['fa:16:3e:4d:ad:93 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '34e6d510-9511-4913-b094-522edcf66b05', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a51e76ad-e401-4d68-b2f5-a9d28269b3e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=5a4345e6-0422-4f4a-affc-2c1023f05fe6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.195 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 5a4345e6-0422-4f4a-affc-2c1023f05fe6 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.197 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.198 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2a19297e-ef68-4bdc-80c4-91fd82eaf91e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.199 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace which is not needed anymore
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.203 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:48 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Dec 13 04:14:48 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000a.scope: Consumed 2.763s CPU time.
Dec 13 04:14:48 compute-0 systemd-machined[206767]: Machine qemu-11-instance-0000000a terminated.
Dec 13 04:14:48 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [NOTICE]   (259939) : haproxy version is 2.8.14-c23fe91
Dec 13 04:14:48 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [NOTICE]   (259939) : path to executable is /usr/sbin/haproxy
Dec 13 04:14:48 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [WARNING]  (259939) : Exiting Master process...
Dec 13 04:14:48 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [WARNING]  (259939) : Exiting Master process...
Dec 13 04:14:48 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [ALERT]    (259939) : Current worker (259941) exited with code 143 (Terminated)
Dec 13 04:14:48 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[259935]: [WARNING]  (259939) : All workers exited. Exiting... (0)
Dec 13 04:14:48 compute-0 systemd[1]: libpod-83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582.scope: Deactivated successfully.
Dec 13 04:14:48 compute-0 podman[260060]: 2025-12-13 04:14:48.343295635 +0000 UTC m=+0.043651662 container died 83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:14:48 compute-0 NetworkManager[48899]: <info>  [1765599288.3538] manager: (tap5a4345e6-04): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.355 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.360 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.371 243708 INFO nova.virt.libvirt.driver [-] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Instance destroyed successfully.
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.372 243708 DEBUG nova.objects.instance [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'resources' on Instance uuid 34e6d510-9511-4913-b094-522edcf66b05 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582-userdata-shm.mount: Deactivated successfully.
Dec 13 04:14:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2d769c3471fcc515a11556b271d47eaee48499f0b76336212b3258b081f1c044-merged.mount: Deactivated successfully.
Dec 13 04:14:48 compute-0 podman[260060]: 2025-12-13 04:14:48.395781295 +0000 UTC m=+0.096137322 container cleanup 83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:14:48 compute-0 systemd[1]: libpod-conmon-83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582.scope: Deactivated successfully.
Dec 13 04:14:48 compute-0 podman[260097]: 2025-12-13 04:14:48.457206427 +0000 UTC m=+0.040971470 container remove 83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.463 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7e2f3a80-5054-4057-85a2-b7812c97b3da]: (4, ('Sat Dec 13 04:14:48 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582)\n83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582\nSat Dec 13 04:14:48 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582)\n83cb9b7be09daff55a098ff9e28949ba2a17b21db45a21950de42be7a30a1582\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.465 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[16b964ce-8c90-473c-8e96-b539ab61ed23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.466 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.467 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:48 compute-0 kernel: tapfc553cd2-50: left promiscuous mode
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.481 243708 DEBUG nova.virt.libvirt.vif [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:14:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-727355251',display_name='tempest-TestVolumeBootPattern-server-727355251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-727355251',id=10,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:14:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-nqezoix8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_use
r_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:14:46Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=34e6d510-9511-4913-b094-522edcf66b05,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.482 243708 DEBUG nova.network.os_vif_util [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "address": "fa:16:3e:4d:ad:93", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5a4345e6-04", "ovs_interfaceid": "5a4345e6-0422-4f4a-affc-2c1023f05fe6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.483 243708 DEBUG nova.network.os_vif_util [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ad:93,bridge_name='br-int',has_traffic_filtering=True,id=5a4345e6-0422-4f4a-affc-2c1023f05fe6,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a4345e6-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.484 243708 DEBUG os_vif [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ad:93,bridge_name='br-int',has_traffic_filtering=True,id=5a4345e6-0422-4f4a-affc-2c1023f05fe6,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a4345e6-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.485 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.485 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5a4345e6-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.488 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.489 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f65314ce-24d7-4e2a-8193-38a6e5e4ffbd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.494 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.500 243708 INFO os_vif [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:ad:93,bridge_name='br-int',has_traffic_filtering=True,id=5a4345e6-0422-4f4a-affc-2c1023f05fe6,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5a4345e6-04')
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.502 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b39cca68-c007-48b9-bf93-d9aae01ebcec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.503 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[87533984-b836-4162-a5d1-8ae32b01a768]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.525 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ed2d78fe-0a68-4c4f-b4f0-88525bb09154]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 402455, 'reachable_time': 17330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260126, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.528 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:14:48 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:48.528 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[f8533e9c-c2ea-44f4-bb18-0fa6f4dff739]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.628 243708 INFO nova.virt.libvirt.driver [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Deleting instance files /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05_del
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.628 243708 INFO nova.virt.libvirt.driver [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Deletion of /var/lib/nova/instances/34e6d510-9511-4913-b094-522edcf66b05_del complete
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.676 243708 INFO nova.compute.manager [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Took 0.54 seconds to destroy the instance on the hypervisor.
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.676 243708 DEBUG oslo.service.loopingcall [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.676 243708 DEBUG nova.compute.manager [-] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:14:48 compute-0 nova_compute[243704]: 2025-12-13 04:14:48.677 243708 DEBUG nova.network.neutron [-] [instance: 34e6d510-9511-4913-b094-522edcf66b05] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:14:48 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc553cd2\x2d5dd5\x2d4d87\x2d97af\x2d4b4eeb4ca0b0.mount: Deactivated successfully.
Dec 13 04:14:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.9 MiB/s wr, 435 op/s
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.085 243708 DEBUG nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.086 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.086 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.086 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.086 243708 DEBUG nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] No waiting events found dispatching network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.087 243708 WARNING nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received unexpected event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for instance with vm_state active and task_state deleting.
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.087 243708 DEBUG nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.087 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.087 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.087 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.087 243708 DEBUG nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] No waiting events found dispatching network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.087 243708 WARNING nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received unexpected event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for instance with vm_state active and task_state deleting.
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.088 243708 DEBUG nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-unplugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.088 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.088 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.088 243708 DEBUG oslo_concurrency.lockutils [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.088 243708 DEBUG nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] No waiting events found dispatching network-vif-unplugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.088 243708 DEBUG nova.compute.manager [req-1a601ca1-eb06-4d81-b251-25936e5bb0c7 req-e5b6fb23-fda5-4298-94c3-e96b6b7737e0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-unplugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:14:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:14:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.6 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2887 syncs, 3.55 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4565 writes, 14K keys, 4565 commit groups, 1.0 writes per commit group, ingest: 10.80 MB, 0.02 MB/s
                                           Interval WAL: 4565 writes, 1980 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:14:50 compute-0 ceph-mon[75071]: pgmap v1140: 305 pgs: 305 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 1.9 MiB/s wr, 435 op/s
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.761 243708 DEBUG nova.network.neutron [-] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.778 243708 INFO nova.compute.manager [-] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Took 2.10 seconds to deallocate network for instance.
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.808 243708 DEBUG nova.network.neutron [-] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.823 243708 INFO nova.compute.manager [-] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Took 2.76 seconds to deallocate network for instance.
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.869 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.870 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.939 243708 DEBUG oslo_concurrency.processutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.958 243708 INFO nova.compute.manager [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Took 0.18 seconds to detach 1 volumes for instance.
Dec 13 04:14:50 compute-0 nova_compute[243704]: 2025-12-13 04:14:50.960 243708 DEBUG nova.compute.manager [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Deleting volume: 03618eba-2360-492c-a5c9-a345a2a9f32c _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec 13 04:14:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e216 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e216 do_prune osdmap full prune enabled
Dec 13 04:14:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e217 e217: 3 total, 3 up, 3 in
Dec 13 04:14:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e217: 3 total, 3 up, 3 in
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.139 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 59 KiB/s wr, 506 op/s
Dec 13 04:14:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219443981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.541 243708 DEBUG oslo_concurrency.processutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.602s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.549 243708 DEBUG nova.compute.provider_tree [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.563 243708 DEBUG nova.scheduler.client.report [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.605 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.608 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:14:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/815005453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:14:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/815005453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.667 243708 INFO nova.scheduler.client.report [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Deleted allocations for instance 4b2c5a9d-6552-48bf-92c4-1032bd4d509b
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.684 243708 DEBUG oslo_concurrency.processutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:14:51 compute-0 nova_compute[243704]: 2025-12-13 04:14:51.720 243708 DEBUG oslo_concurrency.lockutils [None req-836f6410-5d99-48b3-acd2-e1120eff0aca 2848adac59524388ba4931e7afd46b47 ae283283ca5a4a4281495561d7b0443a - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.216s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:52 compute-0 ceph-mon[75071]: osdmap e217: 3 total, 3 up, 3 in
Dec 13 04:14:52 compute-0 ceph-mon[75071]: pgmap v1142: 305 pgs: 305 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 59 KiB/s wr, 506 op/s
Dec 13 04:14:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1219443981' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/815005453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/815005453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:14:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3739586094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.197 243708 DEBUG nova.compute.manager [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.197 243708 DEBUG oslo_concurrency.lockutils [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.198 243708 DEBUG oslo_concurrency.lockutils [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.199 243708 DEBUG oslo_concurrency.lockutils [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "4b2c5a9d-6552-48bf-92c4-1032bd4d509b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.199 243708 DEBUG nova.compute.manager [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] No waiting events found dispatching network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.199 243708 WARNING nova.compute.manager [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received unexpected event network-vif-plugged-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 for instance with vm_state deleted and task_state None.
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.200 243708 DEBUG nova.compute.manager [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Received event network-vif-deleted-5a4345e6-0422-4f4a-affc-2c1023f05fe6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.200 243708 DEBUG nova.compute.manager [req-a1c24db7-8f06-49df-975d-be406c043b16 req-865fa212-38f3-4310-9c80-6c07f9ad193f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Received event network-vif-deleted-6a19f0b6-14ea-4fee-b454-cf0d6746dc05 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.217 243708 DEBUG oslo_concurrency.processutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.223 243708 DEBUG nova.compute.provider_tree [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.237 243708 DEBUG nova.scheduler.client.report [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.253 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.286 243708 INFO nova.scheduler.client.report [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Deleted allocations for instance 34e6d510-9511-4913-b094-522edcf66b05
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.346 243708 DEBUG oslo_concurrency.lockutils [None req-236b85f3-9df1-4e8a-bd68-1ac14a74e7e5 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "34e6d510-9511-4913-b094-522edcf66b05" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8073050485362436e-06 of space, bias 1.0, pg target 0.0005421915145608731 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0006982306914923519 of space, bias 1.0, pg target 0.20946920744770556 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 4.958243251134077e-07 of space, bias 1.0, pg target 0.0001487472975340223 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006660586061084566 of space, bias 1.0, pg target 0.19981758183253698 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.4980766016686965e-06 of space, bias 4.0, pg target 0.001797691922002436 quantized to 16 (current 16)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:14:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.489 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:52 compute-0 nova_compute[243704]: 2025-12-13 04:14:52.918 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3739586094' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:14:53 compute-0 nova_compute[243704]: 2025-12-13 04:14:53.286 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:53.361 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:14:53 compute-0 nova_compute[243704]: 2025-12-13 04:14:53.362 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:14:53.363 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:14:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 48 KiB/s wr, 404 op/s
Dec 13 04:14:53 compute-0 nova_compute[243704]: 2025-12-13 04:14:53.486 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e217 do_prune osdmap full prune enabled
Dec 13 04:14:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e218 e218: 3 total, 3 up, 3 in
Dec 13 04:14:54 compute-0 ceph-mon[75071]: pgmap v1143: 305 pgs: 305 active+clean; 134 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 48 KiB/s wr, 404 op/s
Dec 13 04:14:54 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e218: 3 total, 3 up, 3 in
Dec 13 04:14:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:14:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/386053473' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:14:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/386053473' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:54 compute-0 podman[260181]: 2025-12-13 04:14:54.964945703 +0000 UTC m=+0.103121951 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 04:14:55 compute-0 ceph-mon[75071]: osdmap e218: 3 total, 3 up, 3 in
Dec 13 04:14:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/386053473' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:14:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/386053473' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:14:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 46 KiB/s wr, 421 op/s
Dec 13 04:14:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.056116) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599296056236, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2223, "num_deletes": 257, "total_data_size": 3357321, "memory_usage": 3415968, "flush_reason": "Manual Compaction"}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Dec 13 04:14:56 compute-0 ceph-mon[75071]: pgmap v1145: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 46 KiB/s wr, 421 op/s
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599296074479, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3286038, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 21487, "largest_seqno": 23709, "table_properties": {"data_size": 3275981, "index_size": 6295, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 21962, "raw_average_key_size": 20, "raw_value_size": 3255396, "raw_average_value_size": 3094, "num_data_blocks": 279, "num_entries": 1052, "num_filter_entries": 1052, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599105, "oldest_key_time": 1765599105, "file_creation_time": 1765599296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 18432 microseconds, and 9332 cpu microseconds.
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.074546) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3286038 bytes OK
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.074594) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.076185) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.076205) EVENT_LOG_v1 {"time_micros": 1765599296076200, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.076229) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3347779, prev total WAL file size 3347779, number of live WAL files 2.
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.077286) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3209KB)], [50(7694KB)]
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599296077472, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 11165130, "oldest_snapshot_seqno": -1}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5147 keys, 9351122 bytes, temperature: kUnknown
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599296161000, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9351122, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9313513, "index_size": 23611, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12933, "raw_key_size": 126708, "raw_average_key_size": 24, "raw_value_size": 9217724, "raw_average_value_size": 1790, "num_data_blocks": 977, "num_entries": 5147, "num_filter_entries": 5147, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599296, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.161584) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9351122 bytes
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.164414) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 133.1 rd, 111.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 7.5 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5673, records dropped: 526 output_compression: NoCompression
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.164474) EVENT_LOG_v1 {"time_micros": 1765599296164451, "job": 26, "event": "compaction_finished", "compaction_time_micros": 83865, "compaction_time_cpu_micros": 44193, "output_level": 6, "num_output_files": 1, "total_output_size": 9351122, "num_input_records": 5673, "num_output_records": 5147, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599296165542, "job": 26, "event": "table_file_deletion", "file_number": 52}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599296167628, "job": 26, "event": "table_file_deletion", "file_number": 50}
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.077079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.167764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.167773) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.167775) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.167777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:14:56.167779) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:14:57 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Check health
Dec 13 04:14:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Dec 13 04:14:57 compute-0 nova_compute[243704]: 2025-12-13 04:14:57.492 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:57 compute-0 nova_compute[243704]: 2025-12-13 04:14:57.526 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599282.5247657, 55c4c422-4f9d-419b-90e2-15b632b4b37b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:14:57 compute-0 nova_compute[243704]: 2025-12-13 04:14:57.526 243708 INFO nova.compute.manager [-] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] VM Stopped (Lifecycle Event)
Dec 13 04:14:57 compute-0 nova_compute[243704]: 2025-12-13 04:14:57.561 243708 DEBUG nova.compute.manager [None req-c70745aa-22d7-4649-8f3f-b77af6d4bd52 - - - - - -] [instance: 55c4c422-4f9d-419b-90e2-15b632b4b37b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:14:58 compute-0 nova_compute[243704]: 2025-12-13 04:14:58.488 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:14:58 compute-0 ceph-mon[75071]: pgmap v1146: 305 pgs: 305 active+clean; 134 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 1.4 KiB/s wr, 41 op/s
Dec 13 04:14:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 352 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 31 MiB/s wr, 128 op/s
Dec 13 04:15:00 compute-0 ceph-mon[75071]: pgmap v1147: 305 pgs: 305 active+clean; 352 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 31 MiB/s wr, 128 op/s
Dec 13 04:15:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e218 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e218 do_prune osdmap full prune enabled
Dec 13 04:15:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e219 e219: 3 total, 3 up, 3 in
Dec 13 04:15:01 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e219: 3 total, 3 up, 3 in
Dec 13 04:15:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:01.366 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1149: 305 pgs: 305 active+clean; 352 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 33 MiB/s wr, 136 op/s
Dec 13 04:15:02 compute-0 ceph-mon[75071]: osdmap e219: 3 total, 3 up, 3 in
Dec 13 04:15:02 compute-0 ceph-mon[75071]: pgmap v1149: 305 pgs: 305 active+clean; 352 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 33 MiB/s wr, 136 op/s
Dec 13 04:15:02 compute-0 nova_compute[243704]: 2025-12-13 04:15:02.495 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:02 compute-0 nova_compute[243704]: 2025-12-13 04:15:02.747 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599287.7462857, 4b2c5a9d-6552-48bf-92c4-1032bd4d509b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:02 compute-0 nova_compute[243704]: 2025-12-13 04:15:02.747 243708 INFO nova.compute.manager [-] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] VM Stopped (Lifecycle Event)
Dec 13 04:15:02 compute-0 nova_compute[243704]: 2025-12-13 04:15:02.816 243708 DEBUG nova.compute.manager [None req-b717db37-9e18-47ea-8805-6b2d9e653b72 - - - - - -] [instance: 4b2c5a9d-6552-48bf-92c4-1032bd4d509b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:03 compute-0 nova_compute[243704]: 2025-12-13 04:15:03.370 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599288.3691854, 34e6d510-9511-4913-b094-522edcf66b05 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:03 compute-0 nova_compute[243704]: 2025-12-13 04:15:03.370 243708 INFO nova.compute.manager [-] [instance: 34e6d510-9511-4913-b094-522edcf66b05] VM Stopped (Lifecycle Event)
Dec 13 04:15:03 compute-0 nova_compute[243704]: 2025-12-13 04:15:03.393 243708 DEBUG nova.compute.manager [None req-b2483447-012f-4853-8f31-18f1f03ba088 - - - - - -] [instance: 34e6d510-9511-4913-b094-522edcf66b05] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 352 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 28 MiB/s wr, 81 op/s
Dec 13 04:15:03 compute-0 nova_compute[243704]: 2025-12-13 04:15:03.490 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:04 compute-0 ceph-mon[75071]: pgmap v1150: 305 pgs: 305 active+clean; 352 MiB data, 503 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 28 MiB/s wr, 81 op/s
Dec 13 04:15:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1151: 305 pgs: 305 active+clean; 662 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 55 MiB/s wr, 149 op/s
Dec 13 04:15:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e219 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:06 compute-0 ceph-mon[75071]: pgmap v1151: 305 pgs: 305 active+clean; 662 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 55 MiB/s wr, 149 op/s
Dec 13 04:15:06 compute-0 podman[260207]: 2025-12-13 04:15:06.921132794 +0000 UTC m=+0.062534863 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:15:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4142409906' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 305 active+clean; 662 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 55 MiB/s wr, 149 op/s
Dec 13 04:15:07 compute-0 nova_compute[243704]: 2025-12-13 04:15:07.497 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4142409906' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:08 compute-0 nova_compute[243704]: 2025-12-13 04:15:08.491 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e219 do_prune osdmap full prune enabled
Dec 13 04:15:08 compute-0 ceph-mon[75071]: pgmap v1152: 305 pgs: 305 active+clean; 662 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 55 MiB/s wr, 149 op/s
Dec 13 04:15:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e220 e220: 3 total, 3 up, 3 in
Dec 13 04:15:09 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e220: 3 total, 3 up, 3 in
Dec 13 04:15:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 305 active+clean; 910 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 64 MiB/s wr, 128 op/s
Dec 13 04:15:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e220 do_prune osdmap full prune enabled
Dec 13 04:15:10 compute-0 ceph-mon[75071]: osdmap e220: 3 total, 3 up, 3 in
Dec 13 04:15:10 compute-0 ceph-mon[75071]: pgmap v1154: 305 pgs: 305 active+clean; 910 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 64 MiB/s wr, 128 op/s
Dec 13 04:15:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e221 e221: 3 total, 3 up, 3 in
Dec 13 04:15:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e221: 3 total, 3 up, 3 in
Dec 13 04:15:10 compute-0 nova_compute[243704]: 2025-12-13 04:15:10.899 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:10 compute-0 nova_compute[243704]: 2025-12-13 04:15:10.899 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:10 compute-0 nova_compute[243704]: 2025-12-13 04:15:10.915 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:15:10 compute-0 nova_compute[243704]: 2025-12-13 04:15:10.987 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:10 compute-0 nova_compute[243704]: 2025-12-13 04:15:10.988 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:10 compute-0 nova_compute[243704]: 2025-12-13 04:15:10.998 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:15:10 compute-0 nova_compute[243704]: 2025-12-13 04:15:10.999 243708 INFO nova.compute.claims [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:15:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e221 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.119 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 910 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 67 MiB/s wr, 134 op/s
Dec 13 04:15:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:15:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/977039334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.680 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.688 243708 DEBUG nova.compute.provider_tree [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.712 243708 DEBUG nova.scheduler.client.report [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.764 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.765 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:15:11 compute-0 ceph-mon[75071]: osdmap e221: 3 total, 3 up, 3 in
Dec 13 04:15:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/977039334' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.811 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.811 243708 DEBUG nova.network.neutron [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.827 243708 INFO nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.849 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:15:11 compute-0 nova_compute[243704]: 2025-12-13 04:15:11.938 243708 INFO nova.virt.block_device [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Booting with volume cfe26227-c363-4b90-a064-865a294ec0f3 at /dev/vda
Dec 13 04:15:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.409 243708 DEBUG nova.policy [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b8c4a2342e4420d8140b403edbcba5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27927978f9684df1a72cecb32505e93b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.484 243708 DEBUG os_brick.utils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.485 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.499 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.498 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.499 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[f2987cfd-8219-4d74-8a13-655f27398593]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.501 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.551 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.050s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.551 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[7191d62d-dba5-49ed-86ef-9f679fb8916c]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.553 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.561 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.562 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[e97f6251-2057-458f-a090-81d60c3bbe39]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.563 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[3056b7bb-a075-4cf5-b482-ea895f58c639]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.564 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.592 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.595 243708 DEBUG os_brick.initiator.connectors.lightos [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.595 243708 DEBUG os_brick.initiator.connectors.lightos [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.595 243708 DEBUG os_brick.initiator.connectors.lightos [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.596 243708 DEBUG os_brick.utils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] <== get_connector_properties: return (111ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.596 243708 DEBUG nova.virt.block_device [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updating existing volume attachment record: 659781e3-d4bb-4ce2-99f6-683f49d51c94 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:15:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e221 do_prune osdmap full prune enabled
Dec 13 04:15:12 compute-0 ceph-mon[75071]: pgmap v1156: 305 pgs: 305 active+clean; 910 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 67 MiB/s wr, 134 op/s
Dec 13 04:15:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e222 e222: 3 total, 3 up, 3 in
Dec 13 04:15:12 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e222: 3 total, 3 up, 3 in
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.896 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.896 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.896 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.896 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:15:12 compute-0 nova_compute[243704]: 2025-12-13 04:15:12.897 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:15:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/40122582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.429 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1158: 305 pgs: 305 active+clean; 910 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 41 MiB/s wr, 56 op/s
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.492 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2085571427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.619 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.621 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4519MB free_disk=59.988172308541834GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.621 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.622 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:13 compute-0 ceph-mon[75071]: osdmap e222: 3 total, 3 up, 3 in
Dec 13 04:15:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/40122582' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2085571427' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.898 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.898 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.899 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:15:13 compute-0 nova_compute[243704]: 2025-12-13 04:15:13.964 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.253 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.254 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.255 243708 INFO nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Creating image(s)
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.255 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.255 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Ensure instance console log exists: /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.256 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.256 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.257 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.482 243708 DEBUG nova.network.neutron [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Successfully created port: fda256aa-ac14-4ec9-a507-3553417887b8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:15:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:15:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1050233522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.513 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.518 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.532 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.568 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:15:14 compute-0 nova_compute[243704]: 2025-12-13 04:15:14.568 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:14 compute-0 ceph-mon[75071]: pgmap v1158: 305 pgs: 305 active+clean; 910 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 41 MiB/s wr, 56 op/s
Dec 13 04:15:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1050233522' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:14 compute-0 podman[260300]: 2025-12-13 04:15:14.925951754 +0000 UTC m=+0.067516227 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=multipathd)
Dec 13 04:15:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 39 MiB/s wr, 88 op/s
Dec 13 04:15:15 compute-0 nova_compute[243704]: 2025-12-13 04:15:15.569 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:15 compute-0 nova_compute[243704]: 2025-12-13 04:15:15.570 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:15 compute-0 nova_compute[243704]: 2025-12-13 04:15:15.570 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:15 compute-0 nova_compute[243704]: 2025-12-13 04:15:15.995 243708 DEBUG nova.network.neutron [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Successfully updated port: fda256aa-ac14-4ec9-a507-3553417887b8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.007 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.007 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquired lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.007 243708 DEBUG nova.network.neutron [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:15:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e222 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e222 do_prune osdmap full prune enabled
Dec 13 04:15:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e223 e223: 3 total, 3 up, 3 in
Dec 13 04:15:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e223: 3 total, 3 up, 3 in
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.159 243708 DEBUG nova.compute.manager [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-changed-fda256aa-ac14-4ec9-a507-3553417887b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.159 243708 DEBUG nova.compute.manager [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Refreshing instance network info cache due to event network-changed-fda256aa-ac14-4ec9-a507-3553417887b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.160 243708 DEBUG oslo_concurrency.lockutils [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.319 243708 DEBUG nova.network.neutron [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:15:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:16 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3606721968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:16 compute-0 nova_compute[243704]: 2025-12-13 04:15:16.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:16 compute-0 ceph-mon[75071]: pgmap v1159: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 39 MiB/s wr, 88 op/s
Dec 13 04:15:16 compute-0 ceph-mon[75071]: osdmap e223: 3 total, 3 up, 3 in
Dec 13 04:15:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3606721968' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 25 MiB/s wr, 80 op/s
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.476 243708 DEBUG nova.network.neutron [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updating instance_info_cache with network_info: [{"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.489 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Releasing lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.489 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Instance network_info: |[{"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.490 243708 DEBUG oslo_concurrency.lockutils [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.490 243708 DEBUG nova.network.neutron [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Refreshing network info cache for port fda256aa-ac14-4ec9-a507-3553417887b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.492 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Start _get_guest_xml network_info=[{"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-cfe26227-c363-4b90-a064-865a294ec0f3', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'cfe26227-c363-4b90-a064-865a294ec0f3', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '49ec6453-af58-4bf0-89f5-4faf5d3a92c5', 'attached_at': '', 'detached_at': '', 'volume_id': 'cfe26227-c363-4b90-a064-865a294ec0f3', 'serial': 'cfe26227-c363-4b90-a064-865a294ec0f3'}, 'disk_bus': 'virtio', 'attachment_id': '659781e3-d4bb-4ce2-99f6-683f49d51c94', 'device_type': 'disk', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.498 243708 WARNING nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.503 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.514 243708 DEBUG nova.virt.libvirt.host [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.515 243708 DEBUG nova.virt.libvirt.host [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.519 243708 DEBUG nova.virt.libvirt.host [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.520 243708 DEBUG nova.virt.libvirt.host [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.521 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.521 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.522 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.523 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.523 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.524 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.524 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.525 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.525 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.525 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.526 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.526 243708 DEBUG nova.virt.hardware [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.559 243708 DEBUG nova.storage.rbd_utils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 49ec6453-af58-4bf0-89f5-4faf5d3a92c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:17 compute-0 nova_compute[243704]: 2025-12-13 04:15:17.563 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e223 do_prune osdmap full prune enabled
Dec 13 04:15:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e224 e224: 3 total, 3 up, 3 in
Dec 13 04:15:17 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e224: 3 total, 3 up, 3 in
Dec 13 04:15:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/603173673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.085 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.120 243708 DEBUG nova.virt.libvirt.vif [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1783865714',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1783865714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1783865714',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKg7NPorcUaCTyjQnIW37TGZvEszn6Z90F3FdW4GpknN1Dc3o5yalSwdp3VZdGKi0dyr27qdTMqXOX1N2njMKGjTxmHz8tCExce0u2AeVtyuttyfXlfvJnKOocQeay/Ncw==',key_name='tempest-keypair-492315221',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-hli50d0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:15:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=49ec6453-af58-4bf0-89f5-4faf5d3a92c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": 
[], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.121 243708 DEBUG nova.network.os_vif_util [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.123 243708 DEBUG nova.network.os_vif_util [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=fda256aa-ac14-4ec9-a507-3553417887b8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfda256aa-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.126 243708 DEBUG nova.objects.instance [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'pci_devices' on Instance uuid 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.146 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <uuid>49ec6453-af58-4bf0-89f5-4faf5d3a92c5</uuid>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <name>instance-0000000c</name>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBootPattern-volume-backed-server-1783865714</nova:name>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:15:17</nova:creationTime>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:user uuid="9b8c4a2342e4420d8140b403edbcba5a">tempest-TestVolumeBootPattern-236547311-project-member</nova:user>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:project uuid="27927978f9684df1a72cecb32505e93b">tempest-TestVolumeBootPattern-236547311</nova:project>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <nova:port uuid="fda256aa-ac14-4ec9-a507-3553417887b8">
Dec 13 04:15:18 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <system>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <entry name="serial">49ec6453-af58-4bf0-89f5-4faf5d3a92c5</entry>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <entry name="uuid">49ec6453-af58-4bf0-89f5-4faf5d3a92c5</entry>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </system>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <os>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   </os>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <features>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   </features>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/49ec6453-af58-4bf0-89f5-4faf5d3a92c5_disk.config">
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       </source>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-cfe26227-c363-4b90-a064-865a294ec0f3">
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       </source>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:15:18 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <serial>cfe26227-c363-4b90-a064-865a294ec0f3</serial>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:9f:1a:6e"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <target dev="tapfda256aa-ac"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/console.log" append="off"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <video>
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </video>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:15:18 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:15:18 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:15:18 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:15:18 compute-0 nova_compute[243704]: </domain>
Dec 13 04:15:18 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.149 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Preparing to wait for external event network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.149 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.149 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.150 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.151 243708 DEBUG nova.virt.libvirt.vif [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1783865714',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1783865714',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1783865714',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKg7NPorcUaCTyjQnIW37TGZvEszn6Z90F3FdW4GpknN1Dc3o5yalSwdp3VZdGKi0dyr27qdTMqXOX1N2njMKGjTxmHz8tCExce0u2AeVtyuttyfXlfvJnKOocQeay/Ncw==',key_name='tempest-keypair-492315221',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-hli50d0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:15:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=49ec6453-af58-4bf0-89f5-4faf5d3a92c5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.152 243708 DEBUG nova.network.os_vif_util [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.153 243708 DEBUG nova.network.os_vif_util [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=fda256aa-ac14-4ec9-a507-3553417887b8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfda256aa-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.154 243708 DEBUG os_vif [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=fda256aa-ac14-4ec9-a507-3553417887b8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfda256aa-ac') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.155 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.156 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.157 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.161 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.161 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfda256aa-ac, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.162 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfda256aa-ac, col_values=(('external_ids', {'iface-id': 'fda256aa-ac14-4ec9-a507-3553417887b8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:1a:6e', 'vm-uuid': '49ec6453-af58-4bf0-89f5-4faf5d3a92c5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.199 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 NetworkManager[48899]: <info>  [1765599318.2013] manager: (tapfda256aa-ac): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.203 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.209 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.210 243708 INFO os_vif [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=fda256aa-ac14-4ec9-a507-3553417887b8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfda256aa-ac')
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.254 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.255 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.255 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No VIF found with MAC fa:16:3e:9f:1a:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.256 243708 INFO nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Using config drive
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.280 243708 DEBUG nova.storage.rbd_utils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 49ec6453-af58-4bf0-89f5-4faf5d3a92c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.564 243708 INFO nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Creating config drive at /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/disk.config
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.571 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6f1ixbu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.698 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd6f1ixbu" returned: 0 in 0.127s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.721 243708 DEBUG nova.storage.rbd_utils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 49ec6453-af58-4bf0-89f5-4faf5d3a92c5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.725 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/disk.config 49ec6453-af58-4bf0-89f5-4faf5d3a92c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.831 243708 DEBUG oslo_concurrency.processutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/disk.config 49ec6453-af58-4bf0-89f5-4faf5d3a92c5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.832 243708 INFO nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Deleting local config drive /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5/disk.config because it was imported into RBD.
Dec 13 04:15:18 compute-0 kernel: tapfda256aa-ac: entered promiscuous mode
Dec 13 04:15:18 compute-0 NetworkManager[48899]: <info>  [1765599318.8774] manager: (tapfda256aa-ac): new Tun device (/org/freedesktop/NetworkManager/Devices/77)
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.879 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 ovn_controller[145204]: 2025-12-13T04:15:18Z|00128|binding|INFO|Claiming lport fda256aa-ac14-4ec9-a507-3553417887b8 for this chassis.
Dec 13 04:15:18 compute-0 ovn_controller[145204]: 2025-12-13T04:15:18Z|00129|binding|INFO|fda256aa-ac14-4ec9-a507-3553417887b8: Claiming fa:16:3e:9f:1a:6e 10.100.0.7
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.883 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.887 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.895 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:1a:6e 10.100.0.7'], port_security=['fa:16:3e:9f:1a:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '49ec6453-af58-4bf0-89f5-4faf5d3a92c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2b4bcbae-c530-4398-b94f-1e1a32150108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=fda256aa-ac14-4ec9-a507-3553417887b8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.896 154842 INFO neutron.agent.ovn.metadata.agent [-] Port fda256aa-ac14-4ec9-a507-3553417887b8 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 bound to our chassis
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.898 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:15:18 compute-0 systemd-machined[206767]: New machine qemu-12-instance-0000000c.
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.911 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[35f55672-b45e-45ea-bf6e-d89d263a83e6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.913 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc553cd2-51 in ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.915 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc553cd2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.915 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6a9227-5967-438e-952f-2d62710cb89b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.916 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ef9cf916-996d-4b1a-8266-14c0d0153905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.934 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[13237b11-7db2-4381-8c5d-d58795ee9f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:18 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000c.
Dec 13 04:15:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e224 do_prune osdmap full prune enabled
Dec 13 04:15:18 compute-0 ceph-mon[75071]: pgmap v1161: 305 pgs: 305 active+clean; 1.1 GiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 25 MiB/s wr, 80 op/s
Dec 13 04:15:18 compute-0 ceph-mon[75071]: osdmap e224: 3 total, 3 up, 3 in
Dec 13 04:15:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/603173673' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:18 compute-0 systemd-udevd[260436]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:15:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e225 e225: 3 total, 3 up, 3 in
Dec 13 04:15:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e225: 3 total, 3 up, 3 in
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.959 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 ovn_controller[145204]: 2025-12-13T04:15:18Z|00130|binding|INFO|Setting lport fda256aa-ac14-4ec9-a507-3553417887b8 ovn-installed in OVS
Dec 13 04:15:18 compute-0 NetworkManager[48899]: <info>  [1765599318.9654] device (tapfda256aa-ac): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:15:18 compute-0 ovn_controller[145204]: 2025-12-13T04:15:18Z|00131|binding|INFO|Setting lport fda256aa-ac14-4ec9-a507-3553417887b8 up in Southbound
Dec 13 04:15:18 compute-0 nova_compute[243704]: 2025-12-13 04:15:18.966 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.965 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bdf90d0a-3a30-42e4-bcb5-b07c555b7c45]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:18 compute-0 NetworkManager[48899]: <info>  [1765599318.9675] device (tapfda256aa-ac): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:15:18 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:18.997 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[60fa823f-454d-465e-b0fc-79bd4b45985c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 systemd-udevd[260438]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:15:19 compute-0 NetworkManager[48899]: <info>  [1765599319.0066] manager: (tapfc553cd2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/78)
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.005 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a6265ff5-97b4-41a1-8978-61caa94d86b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.041 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[283569d7-4ed0-4e9d-ac74-542e2300b9af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.045 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[bedb0f0e-adfb-445c-a24d-226aafbcbe25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 NetworkManager[48899]: <info>  [1765599319.0673] device (tapfc553cd2-50): carrier: link connected
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.071 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[dce9428b-064f-485e-801f-e7e764f18393]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.092 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b1900122-1b56-4e6c-b9e9-cc63075e9420]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405774, 'reachable_time': 33450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 260466, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.109 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[970a7f2e-a71d-4e6a-b1d5-d95e563a6725]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:ae9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405774, 'tstamp': 405774}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 260467, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.140 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[dcc9277d-3f35-41c7-b85f-a9547f640455]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405774, 'reachable_time': 33450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 260468, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.193 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7460d193-2339-4a93-bd3a-513970cdc5d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.194 243708 DEBUG nova.network.neutron [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updated VIF entry in instance network info cache for port fda256aa-ac14-4ec9-a507-3553417887b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.195 243708 DEBUG nova.network.neutron [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updating instance_info_cache with network_info: [{"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.207 243708 DEBUG oslo_concurrency.lockutils [req-45fa81d5-dff6-4fbf-b608-b5bf1d77e841 req-79b93305-167b-4a08-8ddd-0fb9b1fc60d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.274 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[766308ca-d1c4-4741-8172-4d7190cb0e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.277 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.277 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.278 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:19 compute-0 kernel: tapfc553cd2-50: entered promiscuous mode
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.332 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:19 compute-0 NetworkManager[48899]: <info>  [1765599319.3340] manager: (tapfc553cd2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/79)
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.335 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:19 compute-0 ovn_controller[145204]: 2025-12-13T04:15:19Z|00132|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.337 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.352 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.354 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.355 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ecb75c74-b6f6-46d3-95c3-e024ac1eba6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.357 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:15:19 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:19.358 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'env', 'PROCESS_TAG=haproxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.358 243708 DEBUG nova.compute.manager [req-b0cf4dd0-c921-4086-b9c4-ff6520dbb154 req-4ed7ef9e-8510-48dc-82d0-dbc322cb1f84 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.359 243708 DEBUG oslo_concurrency.lockutils [req-b0cf4dd0-c921-4086-b9c4-ff6520dbb154 req-4ed7ef9e-8510-48dc-82d0-dbc322cb1f84 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.359 243708 DEBUG oslo_concurrency.lockutils [req-b0cf4dd0-c921-4086-b9c4-ff6520dbb154 req-4ed7ef9e-8510-48dc-82d0-dbc322cb1f84 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.360 243708 DEBUG oslo_concurrency.lockutils [req-b0cf4dd0-c921-4086-b9c4-ff6520dbb154 req-4ed7ef9e-8510-48dc-82d0-dbc322cb1f84 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.360 243708 DEBUG nova.compute.manager [req-b0cf4dd0-c921-4086-b9c4-ff6520dbb154 req-4ed7ef9e-8510-48dc-82d0-dbc322cb1f84 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Processing event network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:15:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 878 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 41 MiB/s wr, 171 op/s
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.582 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.584 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599319.5834641, 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.586 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] VM Started (Lifecycle Event)
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.589 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.595 243708 INFO nova.virt.libvirt.driver [-] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Instance spawned successfully.
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.596 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.607 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.613 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.616 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.617 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.617 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.618 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.618 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.619 243708 DEBUG nova.virt.libvirt.driver [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.639 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.640 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599319.584166, 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.640 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] VM Paused (Lifecycle Event)
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.672 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.677 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599319.5866232, 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.677 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] VM Resumed (Lifecycle Event)
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.690 243708 INFO nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Took 5.44 seconds to spawn the instance on the hypervisor.
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.691 243708 DEBUG nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.700 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.703 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.720 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.754 243708 INFO nova.compute.manager [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Took 8.79 seconds to build instance.
Dec 13 04:15:19 compute-0 nova_compute[243704]: 2025-12-13 04:15:19.769 243708 DEBUG oslo_concurrency.lockutils [None req-a20063d8-b690-4ab7-b7fa-5d982e7e324f 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.870s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:19 compute-0 podman[260540]: 2025-12-13 04:15:19.815553444 +0000 UTC m=+0.090862949 container create a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:15:19 compute-0 podman[260540]: 2025-12-13 04:15:19.765528941 +0000 UTC m=+0.040838526 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:15:19 compute-0 systemd[1]: Started libpod-conmon-a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c.scope.
Dec 13 04:15:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:15:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3427579109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:15:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3427579109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45bf05a9c686e4d995e36e589a6db98885179e1ab8c41be0ce3796963ee118ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:19 compute-0 podman[260540]: 2025-12-13 04:15:19.915553199 +0000 UTC m=+0.190862714 container init a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:15:19 compute-0 podman[260540]: 2025-12-13 04:15:19.922061915 +0000 UTC m=+0.197371410 container start a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 04:15:19 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[260556]: [NOTICE]   (260560) : New worker (260562) forked
Dec 13 04:15:19 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[260556]: [NOTICE]   (260560) : Loading success.
Dec 13 04:15:19 compute-0 ceph-mon[75071]: osdmap e225: 3 total, 3 up, 3 in
Dec 13 04:15:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3427579109' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3427579109' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e225 do_prune osdmap full prune enabled
Dec 13 04:15:20 compute-0 ceph-mon[75071]: pgmap v1164: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 878 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 112 KiB/s rd, 41 MiB/s wr, 171 op/s
Dec 13 04:15:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e226 e226: 3 total, 3 up, 3 in
Dec 13 04:15:20 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e226: 3 total, 3 up, 3 in
Dec 13 04:15:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e226 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:21 compute-0 nova_compute[243704]: 2025-12-13 04:15:21.446 243708 DEBUG nova.compute.manager [req-81960a37-b2eb-44bc-824f-f057d4eef1fc req-8ed2cd40-dfef-4e4f-b29c-4ec79be5e1b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:21 compute-0 nova_compute[243704]: 2025-12-13 04:15:21.447 243708 DEBUG oslo_concurrency.lockutils [req-81960a37-b2eb-44bc-824f-f057d4eef1fc req-8ed2cd40-dfef-4e4f-b29c-4ec79be5e1b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:21 compute-0 nova_compute[243704]: 2025-12-13 04:15:21.447 243708 DEBUG oslo_concurrency.lockutils [req-81960a37-b2eb-44bc-824f-f057d4eef1fc req-8ed2cd40-dfef-4e4f-b29c-4ec79be5e1b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:21 compute-0 nova_compute[243704]: 2025-12-13 04:15:21.448 243708 DEBUG oslo_concurrency.lockutils [req-81960a37-b2eb-44bc-824f-f057d4eef1fc req-8ed2cd40-dfef-4e4f-b29c-4ec79be5e1b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:21 compute-0 nova_compute[243704]: 2025-12-13 04:15:21.448 243708 DEBUG nova.compute.manager [req-81960a37-b2eb-44bc-824f-f057d4eef1fc req-8ed2cd40-dfef-4e4f-b29c-4ec79be5e1b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] No waiting events found dispatching network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:15:21 compute-0 nova_compute[243704]: 2025-12-13 04:15:21.448 243708 WARNING nova.compute.manager [req-81960a37-b2eb-44bc-824f-f057d4eef1fc req-8ed2cd40-dfef-4e4f-b29c-4ec79be5e1b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received unexpected event network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 for instance with vm_state active and task_state None.
Dec 13 04:15:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 878 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 15 MiB/s wr, 91 op/s
Dec 13 04:15:21 compute-0 ceph-mon[75071]: osdmap e226: 3 total, 3 up, 3 in
Dec 13 04:15:21 compute-0 ceph-mon[75071]: pgmap v1166: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 878 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 15 MiB/s wr, 91 op/s
Dec 13 04:15:22 compute-0 nova_compute[243704]: 2025-12-13 04:15:22.503 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:22 compute-0 nova_compute[243704]: 2025-12-13 04:15:22.642 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:22 compute-0 NetworkManager[48899]: <info>  [1765599322.6525] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Dec 13 04:15:22 compute-0 NetworkManager[48899]: <info>  [1765599322.6537] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Dec 13 04:15:22 compute-0 nova_compute[243704]: 2025-12-13 04:15:22.798 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:22 compute-0 ovn_controller[145204]: 2025-12-13T04:15:22Z|00133|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:15:22 compute-0 nova_compute[243704]: 2025-12-13 04:15:22.811 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:23 compute-0 nova_compute[243704]: 2025-12-13 04:15:23.201 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 878 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 13 MiB/s wr, 81 op/s
Dec 13 04:15:23 compute-0 nova_compute[243704]: 2025-12-13 04:15:23.557 243708 DEBUG nova.compute.manager [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-changed-fda256aa-ac14-4ec9-a507-3553417887b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:23 compute-0 nova_compute[243704]: 2025-12-13 04:15:23.558 243708 DEBUG nova.compute.manager [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Refreshing instance network info cache due to event network-changed-fda256aa-ac14-4ec9-a507-3553417887b8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:15:23 compute-0 nova_compute[243704]: 2025-12-13 04:15:23.558 243708 DEBUG oslo_concurrency.lockutils [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:15:23 compute-0 nova_compute[243704]: 2025-12-13 04:15:23.558 243708 DEBUG oslo_concurrency.lockutils [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:15:23 compute-0 nova_compute[243704]: 2025-12-13 04:15:23.559 243708 DEBUG nova.network.neutron [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Refreshing network info cache for port fda256aa-ac14-4ec9-a507-3553417887b8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:15:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/835724773' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:24 compute-0 ovn_controller[145204]: 2025-12-13T04:15:24Z|00134|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:15:24 compute-0 nova_compute[243704]: 2025-12-13 04:15:24.284 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e226 do_prune osdmap full prune enabled
Dec 13 04:15:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e227 e227: 3 total, 3 up, 3 in
Dec 13 04:15:24 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e227: 3 total, 3 up, 3 in
Dec 13 04:15:24 compute-0 ceph-mon[75071]: pgmap v1167: 305 pgs: 4 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 299 active+clean; 878 MiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 13 MiB/s wr, 81 op/s
Dec 13 04:15:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/835724773' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:25 compute-0 nova_compute[243704]: 2025-12-13 04:15:25.051 243708 DEBUG nova.network.neutron [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updated VIF entry in instance network info cache for port fda256aa-ac14-4ec9-a507-3553417887b8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:15:25 compute-0 nova_compute[243704]: 2025-12-13 04:15:25.052 243708 DEBUG nova.network.neutron [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updating instance_info_cache with network_info: [{"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:15:25 compute-0 nova_compute[243704]: 2025-12-13 04:15:25.071 243708 DEBUG oslo_concurrency.lockutils [req-50493a62-c479-4696-8223-4740e6155ba3 req-98d94ec1-2300-4fad-bb62-b1dc26f54fbe 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:15:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 134 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 255 op/s
Dec 13 04:15:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e227 do_prune osdmap full prune enabled
Dec 13 04:15:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e228 e228: 3 total, 3 up, 3 in
Dec 13 04:15:25 compute-0 ceph-mon[75071]: osdmap e227: 3 total, 3 up, 3 in
Dec 13 04:15:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e228: 3 total, 3 up, 3 in
Dec 13 04:15:25 compute-0 podman[260573]: 2025-12-13 04:15:25.996357808 +0000 UTC m=+0.129017670 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 04:15:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e228 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e228 do_prune osdmap full prune enabled
Dec 13 04:15:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e229 e229: 3 total, 3 up, 3 in
Dec 13 04:15:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e229: 3 total, 3 up, 3 in
Dec 13 04:15:26 compute-0 ceph-mon[75071]: pgmap v1169: 305 pgs: 305 active+clean; 134 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 255 op/s
Dec 13 04:15:26 compute-0 ceph-mon[75071]: osdmap e228: 3 total, 3 up, 3 in
Dec 13 04:15:26 compute-0 ceph-mon[75071]: osdmap e229: 3 total, 3 up, 3 in
Dec 13 04:15:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 134 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.0 KiB/s wr, 264 op/s
Dec 13 04:15:27 compute-0 nova_compute[243704]: 2025-12-13 04:15:27.505 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3456616641' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3456616641' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:28 compute-0 nova_compute[243704]: 2025-12-13 04:15:28.247 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e229 do_prune osdmap full prune enabled
Dec 13 04:15:28 compute-0 ceph-mon[75071]: pgmap v1172: 305 pgs: 305 active+clean; 134 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 5.0 KiB/s wr, 264 op/s
Dec 13 04:15:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e230 e230: 3 total, 3 up, 3 in
Dec 13 04:15:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e230: 3 total, 3 up, 3 in
Dec 13 04:15:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.6 KiB/s wr, 56 op/s
Dec 13 04:15:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e230 do_prune osdmap full prune enabled
Dec 13 04:15:29 compute-0 ceph-mon[75071]: osdmap e230: 3 total, 3 up, 3 in
Dec 13 04:15:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e231 e231: 3 total, 3 up, 3 in
Dec 13 04:15:29 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e231: 3 total, 3 up, 3 in
Dec 13 04:15:30 compute-0 ceph-mon[75071]: pgmap v1174: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 3.6 KiB/s wr, 56 op/s
Dec 13 04:15:30 compute-0 ceph-mon[75071]: osdmap e231: 3 total, 3 up, 3 in
Dec 13 04:15:30 compute-0 nova_compute[243704]: 2025-12-13 04:15:30.850 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:30 compute-0 sudo[260600]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:15:30 compute-0 sudo[260600]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:30 compute-0 sudo[260600]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:30 compute-0 sudo[260625]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 04:15:30 compute-0 sudo[260625]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e231 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:31 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1689388866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:31 compute-0 sudo[260625]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:15:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:15:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:31 compute-0 sudo[260669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:15:31 compute-0 sudo[260669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:31 compute-0 sudo[260669]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:31 compute-0 sudo[260694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:15:31 compute-0 sudo[260694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 47 op/s
Dec 13 04:15:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1689388866' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:31 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:32 compute-0 sudo[260694]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:32 compute-0 sudo[260749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:15:32 compute-0 sudo[260749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:32 compute-0 sudo[260749]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:32 compute-0 sudo[260774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- inventory --format=json-pretty --filter-for-batch
Dec 13 04:15:32 compute-0 sudo[260774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e231 do_prune osdmap full prune enabled
Dec 13 04:15:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e232 e232: 3 total, 3 up, 3 in
Dec 13 04:15:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e232: 3 total, 3 up, 3 in
Dec 13 04:15:32 compute-0 podman[260812]: 2025-12-13 04:15:32.450374922 +0000 UTC m=+0.044640068 container create 7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:15:32 compute-0 systemd[1]: Started libpod-conmon-7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc.scope.
Dec 13 04:15:32 compute-0 nova_compute[243704]: 2025-12-13 04:15:32.506 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:32 compute-0 podman[260812]: 2025-12-13 04:15:32.432897748 +0000 UTC m=+0.027162914 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:32 compute-0 podman[260812]: 2025-12-13 04:15:32.543073819 +0000 UTC m=+0.137338985 container init 7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:32 compute-0 podman[260812]: 2025-12-13 04:15:32.550567001 +0000 UTC m=+0.144832147 container start 7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hoover, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Dec 13 04:15:32 compute-0 podman[260812]: 2025-12-13 04:15:32.553676866 +0000 UTC m=+0.147942012 container attach 7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hoover, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:15:32 compute-0 boring_hoover[260828]: 167 167
Dec 13 04:15:32 compute-0 systemd[1]: libpod-7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc.scope: Deactivated successfully.
Dec 13 04:15:32 compute-0 podman[260812]: 2025-12-13 04:15:32.557259653 +0000 UTC m=+0.151524799 container died 7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:15:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ec4859e641a4af4edd2fe9e607d096cc793242caea53c62a26748a9028d96dc-merged.mount: Deactivated successfully.
Dec 13 04:15:32 compute-0 podman[260812]: 2025-12-13 04:15:32.606950137 +0000 UTC m=+0.201215283 container remove 7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hoover, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:32 compute-0 systemd[1]: libpod-conmon-7582aee2518217396fde40e1ffdb203ef12dbedcddc7bc326a61f20a8b764ccc.scope: Deactivated successfully.
Dec 13 04:15:32 compute-0 ceph-mon[75071]: pgmap v1176: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 47 op/s
Dec 13 04:15:32 compute-0 ceph-mon[75071]: osdmap e232: 3 total, 3 up, 3 in
Dec 13 04:15:32 compute-0 podman[260849]: 2025-12-13 04:15:32.818291713 +0000 UTC m=+0.047461325 container create de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:15:32 compute-0 systemd[1]: Started libpod-conmon-de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708.scope.
Dec 13 04:15:32 compute-0 podman[260849]: 2025-12-13 04:15:32.794791948 +0000 UTC m=+0.023961580 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006533207b129fc05186a83a7d949b5c403397dfa0e7d8c34a4590c851f666a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006533207b129fc05186a83a7d949b5c403397dfa0e7d8c34a4590c851f666a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006533207b129fc05186a83a7d949b5c403397dfa0e7d8c34a4590c851f666a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/006533207b129fc05186a83a7d949b5c403397dfa0e7d8c34a4590c851f666a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:32 compute-0 podman[260849]: 2025-12-13 04:15:32.921224667 +0000 UTC m=+0.150394299 container init de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:32 compute-0 podman[260849]: 2025-12-13 04:15:32.92761818 +0000 UTC m=+0.156787792 container start de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:15:32 compute-0 podman[260849]: 2025-12-13 04:15:32.931188676 +0000 UTC m=+0.160358288 container attach de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:15:33 compute-0 nova_compute[243704]: 2025-12-13 04:15:33.250 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e232 do_prune osdmap full prune enabled
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e233 e233: 3 total, 3 up, 3 in
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e233: 3 total, 3 up, 3 in
Dec 13 04:15:33 compute-0 happy_dhawan[260865]: [
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:     {
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "available": false,
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "being_replaced": false,
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "ceph_device_lvm": false,
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "device_id": "QEMU_DVD-ROM_QM00001",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "lsm_data": {},
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "lvs": [],
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "path": "/dev/sr0",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "rejected_reasons": [
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "Insufficient space (<5GB)",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "Has a FileSystem"
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         ],
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         "sys_api": {
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "actuators": null,
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "device_nodes": [
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:                 "sr0"
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             ],
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "devname": "sr0",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "human_readable_size": "482.00 KB",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "id_bus": "ata",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "model": "QEMU DVD-ROM",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "nr_requests": "2",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "parent": "/dev/sr0",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "partitions": {},
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "path": "/dev/sr0",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "removable": "1",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "rev": "2.5+",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "ro": "0",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "rotational": "1",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "sas_address": "",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "sas_device_handle": "",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "scheduler_mode": "mq-deadline",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "sectors": 0,
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "sectorsize": "2048",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "size": 493568.0,
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "support_discard": "2048",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "type": "disk",
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:             "vendor": "QEMU"
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:         }
Dec 13 04:15:33 compute-0 happy_dhawan[260865]:     }
Dec 13 04:15:33 compute-0 happy_dhawan[260865]: ]
Dec 13 04:15:33 compute-0 systemd[1]: libpod-de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708.scope: Deactivated successfully.
Dec 13 04:15:33 compute-0 podman[260849]: 2025-12-13 04:15:33.48072372 +0000 UTC m=+0.709893332 container died de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 51 op/s
Dec 13 04:15:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-006533207b129fc05186a83a7d949b5c403397dfa0e7d8c34a4590c851f666a1-merged.mount: Deactivated successfully.
Dec 13 04:15:33 compute-0 podman[260849]: 2025-12-13 04:15:33.525621195 +0000 UTC m=+0.754790817 container remove de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_dhawan, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:15:33 compute-0 systemd[1]: libpod-conmon-de7ccd425985e119b7fdd973f10071b9edfc6ad8cbf27eeea739f6806bcf4708.scope: Deactivated successfully.
Dec 13 04:15:33 compute-0 ovn_controller[145204]: 2025-12-13T04:15:33Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:1a:6e 10.100.0.7
Dec 13 04:15:33 compute-0 ovn_controller[145204]: 2025-12-13T04:15:33Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:1a:6e 10.100.0.7
Dec 13 04:15:33 compute-0 sudo[260774]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:15:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:15:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:15:33 compute-0 sudo[261720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:15:33 compute-0 sudo[261720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:33 compute-0 sudo[261720]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:33 compute-0 sudo[261745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:15:33 compute-0 sudo[261745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:33 compute-0 nova_compute[243704]: 2025-12-13 04:15:33.935 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:34 compute-0 podman[261782]: 2025-12-13 04:15:34.104993004 +0000 UTC m=+0.047306910 container create 574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:15:34 compute-0 systemd[1]: Started libpod-conmon-574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9.scope.
Dec 13 04:15:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:34 compute-0 podman[261782]: 2025-12-13 04:15:34.174785482 +0000 UTC m=+0.117099438 container init 574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:15:34 compute-0 podman[261782]: 2025-12-13 04:15:34.085377464 +0000 UTC m=+0.027691390 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:34 compute-0 podman[261782]: 2025-12-13 04:15:34.183708923 +0000 UTC m=+0.126022829 container start 574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:15:34 compute-0 podman[261782]: 2025-12-13 04:15:34.187189288 +0000 UTC m=+0.129503224 container attach 574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:15:34 compute-0 gallant_almeida[261798]: 167 167
Dec 13 04:15:34 compute-0 systemd[1]: libpod-574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9.scope: Deactivated successfully.
Dec 13 04:15:34 compute-0 podman[261782]: 2025-12-13 04:15:34.190926809 +0000 UTC m=+0.133240715 container died 574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1caf5a0c1394509222ef826181e8271110ceae072bd99043549933cf065fa17-merged.mount: Deactivated successfully.
Dec 13 04:15:34 compute-0 podman[261782]: 2025-12-13 04:15:34.225336919 +0000 UTC m=+0.167650825 container remove 574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:15:34 compute-0 systemd[1]: libpod-conmon-574a110b446c5a9ab0f9018009e3f60368dd36b137f91280988dfee477673fc9.scope: Deactivated successfully.
Dec 13 04:15:34 compute-0 ceph-mon[75071]: osdmap e233: 3 total, 3 up, 3 in
Dec 13 04:15:34 compute-0 ceph-mon[75071]: pgmap v1179: 305 pgs: 305 active+clean; 134 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.5 KiB/s wr, 51 op/s
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:15:34 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:15:34 compute-0 podman[261822]: 2025-12-13 04:15:34.42351055 +0000 UTC m=+0.046824268 container create 1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chatterjee, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:34 compute-0 systemd[1]: Started libpod-conmon-1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807.scope.
Dec 13 04:15:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559314f23bc88d4716951e19457d39a92b8332ad95b6d1996d1a6a8a6f4b48ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559314f23bc88d4716951e19457d39a92b8332ad95b6d1996d1a6a8a6f4b48ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559314f23bc88d4716951e19457d39a92b8332ad95b6d1996d1a6a8a6f4b48ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559314f23bc88d4716951e19457d39a92b8332ad95b6d1996d1a6a8a6f4b48ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/559314f23bc88d4716951e19457d39a92b8332ad95b6d1996d1a6a8a6f4b48ee/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:34 compute-0 podman[261822]: 2025-12-13 04:15:34.404008762 +0000 UTC m=+0.027322490 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:34 compute-0 podman[261822]: 2025-12-13 04:15:34.51153478 +0000 UTC m=+0.134848538 container init 1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:15:34 compute-0 podman[261822]: 2025-12-13 04:15:34.519374302 +0000 UTC m=+0.142688010 container start 1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chatterjee, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:15:34 compute-0 podman[261822]: 2025-12-13 04:15:34.523023932 +0000 UTC m=+0.146337680 container attach 1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Dec 13 04:15:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930766610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:34 compute-0 elastic_chatterjee[261839]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:15:34 compute-0 elastic_chatterjee[261839]: --> All data devices are unavailable
Dec 13 04:15:35 compute-0 systemd[1]: libpod-1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807.scope: Deactivated successfully.
Dec 13 04:15:35 compute-0 podman[261822]: 2025-12-13 04:15:35.017592628 +0000 UTC m=+0.640906336 container died 1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-559314f23bc88d4716951e19457d39a92b8332ad95b6d1996d1a6a8a6f4b48ee-merged.mount: Deactivated successfully.
Dec 13 04:15:35 compute-0 podman[261822]: 2025-12-13 04:15:35.069102982 +0000 UTC m=+0.692416700 container remove 1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:15:35 compute-0 systemd[1]: libpod-conmon-1435d8ea0abeeb4633fccfa179d92b5b43fb7a903a2e09e5e7cf225d81c0a807.scope: Deactivated successfully.
Dec 13 04:15:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:35.090 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:35.093 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:35.094 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:35 compute-0 sudo[261745]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:35 compute-0 sudo[261871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:15:35 compute-0 sudo[261871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:35 compute-0 sudo[261871]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:35 compute-0 sudo[261896]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:15:35 compute-0 sudo[261896]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e233 do_prune osdmap full prune enabled
Dec 13 04:15:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/930766610' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e234 e234: 3 total, 3 up, 3 in
Dec 13 04:15:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e234: 3 total, 3 up, 3 in
Dec 13 04:15:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 858 KiB/s rd, 4.5 MiB/s wr, 194 op/s
Dec 13 04:15:35 compute-0 podman[261933]: 2025-12-13 04:15:35.586183457 +0000 UTC m=+0.046357055 container create 4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euler, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:15:35 compute-0 systemd[1]: Started libpod-conmon-4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5.scope.
Dec 13 04:15:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:35 compute-0 podman[261933]: 2025-12-13 04:15:35.568957461 +0000 UTC m=+0.029131079 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:35 compute-0 podman[261933]: 2025-12-13 04:15:35.686599473 +0000 UTC m=+0.146773081 container init 4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euler, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:15:35 compute-0 podman[261933]: 2025-12-13 04:15:35.693594252 +0000 UTC m=+0.153767850 container start 4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:15:35 compute-0 vigilant_euler[261949]: 167 167
Dec 13 04:15:35 compute-0 systemd[1]: libpod-4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5.scope: Deactivated successfully.
Dec 13 04:15:35 compute-0 podman[261933]: 2025-12-13 04:15:35.731751504 +0000 UTC m=+0.191925202 container attach 4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:15:35 compute-0 podman[261933]: 2025-12-13 04:15:35.733013449 +0000 UTC m=+0.193187057 container died 4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euler, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:15:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-467a86a26b222f4dda228cde63e5285fb4227727cec665c3a43528bb63f93cf9-merged.mount: Deactivated successfully.
Dec 13 04:15:35 compute-0 podman[261933]: 2025-12-13 04:15:35.767140972 +0000 UTC m=+0.227314560 container remove 4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_euler, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:15:35 compute-0 systemd[1]: libpod-conmon-4649dfc70da7dd43cac74426d6d9ac17b1db169528cef5a13a3911bad6e75dd5.scope: Deactivated successfully.
Dec 13 04:15:35 compute-0 podman[261975]: 2025-12-13 04:15:35.953917233 +0000 UTC m=+0.039180691 container create f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:15:35 compute-0 systemd[1]: Started libpod-conmon-f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d.scope.
Dec 13 04:15:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbc1bfbc35b4cb50bf1584e9482fc214ccdd8c29a735d3f8c7f957446dc7ffc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbc1bfbc35b4cb50bf1584e9482fc214ccdd8c29a735d3f8c7f957446dc7ffc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbc1bfbc35b4cb50bf1584e9482fc214ccdd8c29a735d3f8c7f957446dc7ffc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1dbc1bfbc35b4cb50bf1584e9482fc214ccdd8c29a735d3f8c7f957446dc7ffc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:36 compute-0 podman[261975]: 2025-12-13 04:15:35.935958697 +0000 UTC m=+0.021222165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:36 compute-0 podman[261975]: 2025-12-13 04:15:36.035429377 +0000 UTC m=+0.120692845 container init f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wilbur, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:15:36 compute-0 podman[261975]: 2025-12-13 04:15:36.040647749 +0000 UTC m=+0.125911187 container start f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wilbur, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:15:36 compute-0 podman[261975]: 2025-12-13 04:15:36.043710341 +0000 UTC m=+0.128973789 container attach f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wilbur, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:15:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e234 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]: {
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:     "0": [
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:         {
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "devices": [
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "/dev/loop3"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             ],
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_name": "ceph_lv0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_size": "21470642176",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "name": "ceph_lv0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "tags": {
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cluster_name": "ceph",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.crush_device_class": "",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.encrypted": "0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.objectstore": "bluestore",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osd_id": "0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.type": "block",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.vdo": "0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.with_tpm": "0"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             },
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "type": "block",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "vg_name": "ceph_vg0"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:         }
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:     ],
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:     "1": [
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:         {
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "devices": [
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "/dev/loop4"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             ],
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_name": "ceph_lv1",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_size": "21470642176",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "name": "ceph_lv1",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "tags": {
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cluster_name": "ceph",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.crush_device_class": "",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.encrypted": "0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.objectstore": "bluestore",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osd_id": "1",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.type": "block",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.vdo": "0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.with_tpm": "0"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             },
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "type": "block",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "vg_name": "ceph_vg1"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:         }
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:     ],
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:     "2": [
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:         {
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "devices": [
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "/dev/loop5"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             ],
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_name": "ceph_lv2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_size": "21470642176",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "name": "ceph_lv2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "tags": {
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.cluster_name": "ceph",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.crush_device_class": "",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.encrypted": "0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.objectstore": "bluestore",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osd_id": "2",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.type": "block",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.vdo": "0",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:                 "ceph.with_tpm": "0"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             },
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "type": "block",
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:             "vg_name": "ceph_vg2"
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:         }
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]:     ]
Dec 13 04:15:36 compute-0 jolly_wilbur[261991]: }
Dec 13 04:15:36 compute-0 systemd[1]: libpod-f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d.scope: Deactivated successfully.
Dec 13 04:15:36 compute-0 podman[261975]: 2025-12-13 04:15:36.323027906 +0000 UTC m=+0.408291364 container died f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-1dbc1bfbc35b4cb50bf1584e9482fc214ccdd8c29a735d3f8c7f957446dc7ffc-merged.mount: Deactivated successfully.
Dec 13 04:15:36 compute-0 podman[261975]: 2025-12-13 04:15:36.364278692 +0000 UTC m=+0.449542150 container remove f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:15:36 compute-0 systemd[1]: libpod-conmon-f7c6d7681db1f3656f2f3ae1a0be4c0a5c083b55dbf586c5a53c8b7b99476a9d.scope: Deactivated successfully.
Dec 13 04:15:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Dec 13 04:15:36 compute-0 ceph-mon[75071]: osdmap e234: 3 total, 3 up, 3 in
Dec 13 04:15:36 compute-0 ceph-mon[75071]: pgmap v1181: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 858 KiB/s rd, 4.5 MiB/s wr, 194 op/s
Dec 13 04:15:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Dec 13 04:15:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Dec 13 04:15:36 compute-0 sudo[261896]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:36 compute-0 sudo[262013]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:15:36 compute-0 sudo[262013]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:36 compute-0 sudo[262013]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:36 compute-0 sudo[262038]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:15:36 compute-0 sudo[262038]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:36 compute-0 podman[262075]: 2025-12-13 04:15:36.818436666 +0000 UTC m=+0.041743830 container create 3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_nash, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:15:36 compute-0 systemd[1]: Started libpod-conmon-3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8.scope.
Dec 13 04:15:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:36 compute-0 podman[262075]: 2025-12-13 04:15:36.800982424 +0000 UTC m=+0.024289608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:36 compute-0 podman[262075]: 2025-12-13 04:15:36.895197842 +0000 UTC m=+0.118505026 container init 3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_nash, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:15:36 compute-0 podman[262075]: 2025-12-13 04:15:36.903149977 +0000 UTC m=+0.126457141 container start 3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_nash, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:15:36 compute-0 podman[262075]: 2025-12-13 04:15:36.906853208 +0000 UTC m=+0.130160422 container attach 3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:15:36 compute-0 vigilant_nash[262092]: 167 167
Dec 13 04:15:36 compute-0 systemd[1]: libpod-3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8.scope: Deactivated successfully.
Dec 13 04:15:36 compute-0 podman[262075]: 2025-12-13 04:15:36.910201428 +0000 UTC m=+0.133508592 container died 3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:15:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-bb0cfc2d5aa2ba188d5d2609954d69143d4e0bdedab80818256c8ee2eb5a26d8-merged.mount: Deactivated successfully.
Dec 13 04:15:36 compute-0 podman[262075]: 2025-12-13 04:15:36.945515613 +0000 UTC m=+0.168822777 container remove 3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:15:36 compute-0 systemd[1]: libpod-conmon-3ff3a7e6a8b3f5741b1df99baa50d47d4b1e997704eef730a97950d1af5b9ef8.scope: Deactivated successfully.
Dec 13 04:15:37 compute-0 podman[262104]: 2025-12-13 04:15:37.0252462 +0000 UTC m=+0.062285276 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent)
Dec 13 04:15:37 compute-0 podman[262135]: 2025-12-13 04:15:37.110792573 +0000 UTC m=+0.037318380 container create bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:15:37 compute-0 systemd[1]: Started libpod-conmon-bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb.scope.
Dec 13 04:15:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4687a8e07c7e5d8ef443f54462c4441c748e7b7821202f77c3c8070bff758f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4687a8e07c7e5d8ef443f54462c4441c748e7b7821202f77c3c8070bff758f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4687a8e07c7e5d8ef443f54462c4441c748e7b7821202f77c3c8070bff758f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4687a8e07c7e5d8ef443f54462c4441c748e7b7821202f77c3c8070bff758f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:37 compute-0 podman[262135]: 2025-12-13 04:15:37.189883623 +0000 UTC m=+0.116409440 container init bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hoover, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:37 compute-0 podman[262135]: 2025-12-13 04:15:37.094306717 +0000 UTC m=+0.020832524 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:15:37 compute-0 podman[262135]: 2025-12-13 04:15:37.201632661 +0000 UTC m=+0.128158458 container start bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:15:37 compute-0 podman[262135]: 2025-12-13 04:15:37.204924269 +0000 UTC m=+0.131450086 container attach bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hoover, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:15:37 compute-0 ceph-mon[75071]: osdmap e235: 3 total, 3 up, 3 in
Dec 13 04:15:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 959 KiB/s rd, 5.0 MiB/s wr, 216 op/s
Dec 13 04:15:37 compute-0 nova_compute[243704]: 2025-12-13 04:15:37.507 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:37 compute-0 lvm[262232]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:15:37 compute-0 lvm[262232]: VG ceph_vg1 finished
Dec 13 04:15:37 compute-0 lvm[262231]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:15:37 compute-0 lvm[262231]: VG ceph_vg0 finished
Dec 13 04:15:37 compute-0 lvm[262234]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:15:37 compute-0 lvm[262234]: VG ceph_vg2 finished
Dec 13 04:15:38 compute-0 lucid_hoover[262152]: {}
Dec 13 04:15:38 compute-0 systemd[1]: libpod-bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb.scope: Deactivated successfully.
Dec 13 04:15:38 compute-0 systemd[1]: libpod-bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb.scope: Consumed 1.402s CPU time.
Dec 13 04:15:38 compute-0 podman[262237]: 2025-12-13 04:15:38.108687544 +0000 UTC m=+0.027790343 container died bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:15:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb4687a8e07c7e5d8ef443f54462c4441c748e7b7821202f77c3c8070bff758f-merged.mount: Deactivated successfully.
Dec 13 04:15:38 compute-0 podman[262237]: 2025-12-13 04:15:38.150595408 +0000 UTC m=+0.069698187 container remove bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_hoover, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:15:38 compute-0 systemd[1]: libpod-conmon-bdc2a84bb14198da971a954209912f09cc4a5c490ded7e7a7079a7d5d6476ddb.scope: Deactivated successfully.
Dec 13 04:15:38 compute-0 sudo[262038]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:15:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:15:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:38 compute-0 nova_compute[243704]: 2025-12-13 04:15:38.252 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:38 compute-0 sudo[262251]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:15:38 compute-0 sudo[262251]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:15:38 compute-0 sudo[262251]: pam_unix(sudo:session): session closed for user root
Dec 13 04:15:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Dec 13 04:15:38 compute-0 ceph-mon[75071]: pgmap v1183: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 959 KiB/s rd, 5.0 MiB/s wr, 216 op/s
Dec 13 04:15:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:38 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:15:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Dec 13 04:15:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Dec 13 04:15:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Dec 13 04:15:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Dec 13 04:15:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Dec 13 04:15:39 compute-0 ceph-mon[75071]: osdmap e236: 3 total, 3 up, 3 in
Dec 13 04:15:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 39 KiB/s wr, 109 op/s
Dec 13 04:15:39 compute-0 nova_compute[243704]: 2025-12-13 04:15:39.728 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:39 compute-0 nova_compute[243704]: 2025-12-13 04:15:39.729 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:39 compute-0 nova_compute[243704]: 2025-12-13 04:15:39.762 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:15:39 compute-0 nova_compute[243704]: 2025-12-13 04:15:39.848 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:39 compute-0 nova_compute[243704]: 2025-12-13 04:15:39.850 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:39 compute-0 nova_compute[243704]: 2025-12-13 04:15:39.857 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:15:39 compute-0 nova_compute[243704]: 2025-12-13 04:15:39.858 243708 INFO nova.compute.claims [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.106 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Dec 13 04:15:40 compute-0 ceph-mon[75071]: osdmap e237: 3 total, 3 up, 3 in
Dec 13 04:15:40 compute-0 ceph-mon[75071]: pgmap v1186: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 80 KiB/s rd, 39 KiB/s wr, 109 op/s
Dec 13 04:15:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Dec 13 04:15:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Dec 13 04:15:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:15:40
Dec 13 04:15:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:15:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:15:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'images', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', '.mgr', 'backups', '.rgw.root', 'volumes', 'default.rgw.log']
Dec 13 04:15:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:15:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:15:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1734219022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.676 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.687 243708 DEBUG nova.compute.provider_tree [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.703 243708 DEBUG nova.scheduler.client.report [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.725 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.875s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.726 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.784 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.784 243708 DEBUG nova.network.neutron [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.801 243708 INFO nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.814 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.889 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.891 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.892 243708 INFO nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Creating image(s)
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.914 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.939 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.964 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.970 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:40 compute-0 nova_compute[243704]: 2025-12-13 04:15:40.993 243708 DEBUG nova.policy [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11e9a1a42b4b4d679693155d71445247', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5e5c975dd8b4a088c217b330c95ba7b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.036 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.037 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.038 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.039 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.060 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.064 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Dec 13 04:15:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Dec 13 04:15:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.332 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.268s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.395 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] resizing rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:15:41 compute-0 ceph-mon[75071]: osdmap e238: 3 total, 3 up, 3 in
Dec 13 04:15:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1734219022' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:15:41 compute-0 ceph-mon[75071]: osdmap e239: 3 total, 3 up, 3 in
Dec 13 04:15:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 40 KiB/s wr, 111 op/s
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.490 243708 DEBUG nova.objects.instance [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'migration_context' on Instance uuid 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.503 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.503 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Ensure instance console log exists: /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.504 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.504 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:41 compute-0 nova_compute[243704]: 2025-12-13 04:15:41.505 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:42 compute-0 nova_compute[243704]: 2025-12-13 04:15:42.272 243708 DEBUG nova.network.neutron [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Successfully created port: f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:15:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Dec 13 04:15:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Dec 13 04:15:42 compute-0 ceph-mon[75071]: pgmap v1189: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 40 KiB/s wr, 111 op/s
Dec 13 04:15:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Dec 13 04:15:42 compute-0 nova_compute[243704]: 2025-12-13 04:15:42.510 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:15:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.026 243708 DEBUG nova.network.neutron [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Successfully updated port: f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.041 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.041 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquired lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.042 243708 DEBUG nova.network.neutron [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.165 243708 DEBUG nova.compute.manager [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-changed-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.165 243708 DEBUG nova.compute.manager [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Refreshing instance network info cache due to event network-changed-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.166 243708 DEBUG oslo_concurrency.lockutils [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.215 243708 DEBUG nova.network.neutron [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:15:43 compute-0 nova_compute[243704]: 2025-12-13 04:15:43.254 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:43 compute-0 ceph-mon[75071]: osdmap e240: 3 total, 3 up, 3 in
Dec 13 04:15:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.059 243708 DEBUG nova.network.neutron [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Updating instance_info_cache with network_info: [{"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.085 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Releasing lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.086 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Instance network_info: |[{"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.086 243708 DEBUG oslo_concurrency.lockutils [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.086 243708 DEBUG nova.network.neutron [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Refreshing network info cache for port f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.089 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Start _get_guest_xml network_info=[{"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.095 243708 WARNING nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.104 243708 DEBUG nova.virt.libvirt.host [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.105 243708 DEBUG nova.virt.libvirt.host [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.109 243708 DEBUG nova.virt.libvirt.host [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.109 243708 DEBUG nova.virt.libvirt.host [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.110 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.110 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.111 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.111 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.112 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.112 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.112 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.113 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.113 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.114 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.114 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.114 243708 DEBUG nova.virt.hardware [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.119 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Dec 13 04:15:44 compute-0 ceph-mon[75071]: pgmap v1191: 305 pgs: 305 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail
Dec 13 04:15:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Dec 13 04:15:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Dec 13 04:15:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1581269846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.655 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.683 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:44 compute-0 nova_compute[243704]: 2025-12-13 04:15:44.688 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/317570580' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.207 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.210 243708 DEBUG nova.virt.libvirt.vif [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1686365483',display_name='tempest-VolumesBackupsTest-instance-1686365483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1686365483',id=13,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcxCnDKalj3ZCx/r34Y9K6dnne776++fIGdwWplVqohQ3I/DHmtuoRJVigp3qXQGgRVDeVYbjo/YmXKC5Vi6CRSLI5U6WXwqiLwzp0VRz3IkODIcMIPRXgl6Zvzk6LmZA==',key_name='tempest-keypair-472346263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5e5c975dd8b4a088c217b330c95ba7b',ramdisk_id='',reservation_id='r-9nlko5fq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-951676606',owner_user_name='tempest-VolumesBackupsTest-951676606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:15:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11e9a1a42b4b4d679693155d71445247',uuid=2aaef3c8-05f3-441e-b2ac-969ccd8305e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.210 243708 DEBUG nova.network.os_vif_util [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converting VIF {"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.211 243708 DEBUG nova.network.os_vif_util [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:02:82,bridge_name='br-int',has_traffic_filtering=True,id=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9a6c5d4-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.213 243708 DEBUG nova.objects.instance [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'pci_devices' on Instance uuid 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.226 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <uuid>2aaef3c8-05f3-441e-b2ac-969ccd8305e3</uuid>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <name>instance-0000000d</name>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesBackupsTest-instance-1686365483</nova:name>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:15:44</nova:creationTime>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:user uuid="11e9a1a42b4b4d679693155d71445247">tempest-VolumesBackupsTest-951676606-project-member</nova:user>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:project uuid="f5e5c975dd8b4a088c217b330c95ba7b">tempest-VolumesBackupsTest-951676606</nova:project>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <nova:port uuid="f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5">
Dec 13 04:15:45 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <system>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <entry name="serial">2aaef3c8-05f3-441e-b2ac-969ccd8305e3</entry>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <entry name="uuid">2aaef3c8-05f3-441e-b2ac-969ccd8305e3</entry>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </system>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <os>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   </os>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <features>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   </features>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk">
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       </source>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk.config">
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       </source>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:15:45 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:ab:02:82"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <target dev="tapf9a6c5d4-c4"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/console.log" append="off"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <video>
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </video>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:15:45 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:15:45 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:15:45 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:15:45 compute-0 nova_compute[243704]: </domain>
Dec 13 04:15:45 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.228 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Preparing to wait for external event network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.228 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.229 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.229 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.230 243708 DEBUG nova.virt.libvirt.vif [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1686365483',display_name='tempest-VolumesBackupsTest-instance-1686365483',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1686365483',id=13,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcxCnDKalj3ZCx/r34Y9K6dnne776++fIGdwWplVqohQ3I/DHmtuoRJVigp3qXQGgRVDeVYbjo/YmXKC5Vi6CRSLI5U6WXwqiLwzp0VRz3IkODIcMIPRXgl6Zvzk6LmZA==',key_name='tempest-keypair-472346263',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5e5c975dd8b4a088c217b330c95ba7b',ramdisk_id='',reservation_id='r-9nlko5fq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-951676606',owner_user_name='tempest-VolumesBackupsTest-951676606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:15:40Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11e9a1a42b4b4d679693155d71445247',uuid=2aaef3c8-05f3-441e-b2ac-969ccd8305e3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.231 243708 DEBUG nova.network.os_vif_util [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converting VIF {"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.232 243708 DEBUG nova.network.os_vif_util [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ab:02:82,bridge_name='br-int',has_traffic_filtering=True,id=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9a6c5d4-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.233 243708 DEBUG os_vif [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:02:82,bridge_name='br-int',has_traffic_filtering=True,id=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9a6c5d4-c4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.234 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.236 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.236 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.242 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.242 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf9a6c5d4-c4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.242 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf9a6c5d4-c4, col_values=(('external_ids', {'iface-id': 'f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ab:02:82', 'vm-uuid': '2aaef3c8-05f3-441e-b2ac-969ccd8305e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.244 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:45 compute-0 NetworkManager[48899]: <info>  [1765599345.2452] manager: (tapf9a6c5d4-c4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.247 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.251 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.252 243708 INFO os_vif [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ab:02:82,bridge_name='br-int',has_traffic_filtering=True,id=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9a6c5d4-c4')
Dec 13 04:15:45 compute-0 podman[262530]: 2025-12-13 04:15:45.360130716 +0000 UTC m=+0.065431120 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.371 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.372 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.372 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No VIF found with MAC fa:16:3e:ab:02:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.372 243708 INFO nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Using config drive
Dec 13 04:15:45 compute-0 nova_compute[243704]: 2025-12-13 04:15:45.393 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 4.2 MiB/s wr, 175 op/s
Dec 13 04:15:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Dec 13 04:15:45 compute-0 ceph-mon[75071]: osdmap e241: 3 total, 3 up, 3 in
Dec 13 04:15:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1581269846' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/317570580' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Dec 13 04:15:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Dec 13 04:15:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:15:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2421128761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:15:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2421128761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Dec 13 04:15:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Dec 13 04:15:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.274 243708 INFO nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Creating config drive at /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/disk.config
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.283 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu5xiao2y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.315 243708 DEBUG nova.network.neutron [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Updated VIF entry in instance network info cache for port f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.316 243708 DEBUG nova.network.neutron [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Updating instance_info_cache with network_info: [{"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.329 243708 DEBUG oslo_concurrency.lockutils [req-13ed614d-2645-4dee-89d3-96cbcc8ffbf3 req-9546c005-5481-443a-9d65-0b445f3bcb2f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.419 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpu5xiao2y" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.454 243708 DEBUG nova.storage.rbd_utils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.458 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/disk.config 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:15:46 compute-0 ceph-mon[75071]: pgmap v1193: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 4.2 MiB/s wr, 175 op/s
Dec 13 04:15:46 compute-0 ceph-mon[75071]: osdmap e242: 3 total, 3 up, 3 in
Dec 13 04:15:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2421128761' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2421128761' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:46 compute-0 ceph-mon[75071]: osdmap e243: 3 total, 3 up, 3 in
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.738 243708 DEBUG oslo_concurrency.processutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/disk.config 2aaef3c8-05f3-441e-b2ac-969ccd8305e3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.740 243708 INFO nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Deleting local config drive /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3/disk.config because it was imported into RBD.
Dec 13 04:15:46 compute-0 NetworkManager[48899]: <info>  [1765599346.8035] manager: (tapf9a6c5d4-c4): new Tun device (/org/freedesktop/NetworkManager/Devices/83)
Dec 13 04:15:46 compute-0 kernel: tapf9a6c5d4-c4: entered promiscuous mode
Dec 13 04:15:46 compute-0 ovn_controller[145204]: 2025-12-13T04:15:46Z|00135|binding|INFO|Claiming lport f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 for this chassis.
Dec 13 04:15:46 compute-0 ovn_controller[145204]: 2025-12-13T04:15:46Z|00136|binding|INFO|f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5: Claiming fa:16:3e:ab:02:82 10.100.0.10
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.808 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:46 compute-0 ovn_controller[145204]: 2025-12-13T04:15:46Z|00137|binding|INFO|Setting lport f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 ovn-installed in OVS
Dec 13 04:15:46 compute-0 nova_compute[243704]: 2025-12-13 04:15:46.829 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:46 compute-0 systemd-udevd[262622]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:15:46 compute-0 systemd-machined[206767]: New machine qemu-13-instance-0000000d.
Dec 13 04:15:46 compute-0 NetworkManager[48899]: <info>  [1765599346.8626] device (tapf9a6c5d4-c4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:15:46 compute-0 NetworkManager[48899]: <info>  [1765599346.8634] device (tapf9a6c5d4-c4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:15:46 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000d.
Dec 13 04:15:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Dec 13 04:15:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Dec 13 04:15:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Dec 13 04:15:47 compute-0 ovn_controller[145204]: 2025-12-13T04:15:47Z|00138|binding|INFO|Setting lport f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 up in Southbound
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.242 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:02:82 10.100.0.10'], port_security=['fa:16:3e:ab:02:82 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2aaef3c8-05f3-441e-b2ac-969ccd8305e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5e5c975dd8b4a088c217b330c95ba7b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b33a6e6b-398f-4a16-8a3e-aaf31f2da471', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8076cdc-415f-401f-a0fe-b3be303ae9cf, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.245 154842 INFO neutron.agent.ovn.metadata.agent [-] Port f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 in datapath bfdc82ee-37dc-4f9b-b711-c6c9f87b443a bound to our chassis
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.247 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfdc82ee-37dc-4f9b-b711-c6c9f87b443a
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.260 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b2da1f1d-21ed-4039-8039-db42559ec31d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.261 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbfdc82ee-31 in ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.265 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbfdc82ee-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.265 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[415a31e0-dd91-4788-b3ad-97e2e85b8daf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.267 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1fc21ec9-4b24-49e1-8847-5ebea1a6cc07]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.278 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[247aa7c7-6b65-416f-b29a-b71e068369e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.309 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[40a78c2e-6995-4acb-95b6-8b437071023a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.354 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2ef9f3-f428-406e-9b5f-ff1e9897e4ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 systemd-udevd[262624]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.362 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5e8fbce8-98a5-4195-9b6e-3b4ead927db2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 NetworkManager[48899]: <info>  [1765599347.3637] manager: (tapbfdc82ee-30): new Veth device (/org/freedesktop/NetworkManager/Devices/84)
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.407 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[f6c28442-c324-4e2e-96b7-fb3f4cacc506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.411 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf11fd1-b0f9-4323-bcd3-b91d131bc59c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.428 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599347.4281933, 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.429 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] VM Started (Lifecycle Event)
Dec 13 04:15:47 compute-0 NetworkManager[48899]: <info>  [1765599347.4412] device (tapbfdc82ee-30): carrier: link connected
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.446 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[0612e4af-96fb-44ea-bd5c-27c3e44c955e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.447 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.452 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599347.4319394, 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.452 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] VM Paused (Lifecycle Event)
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.468 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.468 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[64a4bef2-8dc8-4fb0-91e3-4a954b5fa925]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfdc82ee-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:93:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408611, 'reachable_time': 31109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262697, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.472 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.484 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6ca588c2-b3b0-43a8-be80-40c2b304c181]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:936f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 408611, 'tstamp': 408611}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262698, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 5.3 MiB/s wr, 221 op/s
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.489 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.493 243708 DEBUG nova.compute.manager [req-1513b4cd-0cb6-4d8c-a296-2855cc76ce89 req-26f86f07-a2c6-4755-b055-2cf228990aa9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.493 243708 DEBUG oslo_concurrency.lockutils [req-1513b4cd-0cb6-4d8c-a296-2855cc76ce89 req-26f86f07-a2c6-4755-b055-2cf228990aa9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.494 243708 DEBUG oslo_concurrency.lockutils [req-1513b4cd-0cb6-4d8c-a296-2855cc76ce89 req-26f86f07-a2c6-4755-b055-2cf228990aa9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.494 243708 DEBUG oslo_concurrency.lockutils [req-1513b4cd-0cb6-4d8c-a296-2855cc76ce89 req-26f86f07-a2c6-4755-b055-2cf228990aa9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.494 243708 DEBUG nova.compute.manager [req-1513b4cd-0cb6-4d8c-a296-2855cc76ce89 req-26f86f07-a2c6-4755-b055-2cf228990aa9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Processing event network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.496 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.500 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599347.4989188, 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.501 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] VM Resumed (Lifecycle Event)
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.503 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.503 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[14ed3978-33eb-4437-90c9-ef97c05b1153]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfdc82ee-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:93:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408611, 'reachable_time': 31109, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262699, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.507 243708 INFO nova.virt.libvirt.driver [-] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Instance spawned successfully.
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.508 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.512 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.521 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.528 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.535 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.536 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.536 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.536 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[88c044cc-7368-45a2-a369-fb73f2ce4640]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.536 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.537 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.538 243708 DEBUG nova.virt.libvirt.driver [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.561 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:15:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:15:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811419237' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:15:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811419237' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.594 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ca7ba0c3-748a-4f13-93d4-6ce9a1db1b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.596 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfdc82ee-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.596 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.596 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfdc82ee-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:47 compute-0 kernel: tapbfdc82ee-30: entered promiscuous mode
Dec 13 04:15:47 compute-0 NetworkManager[48899]: <info>  [1765599347.5990] manager: (tapbfdc82ee-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/85)
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.599 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.600 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfdc82ee-30, col_values=(('external_ids', {'iface-id': '5b3ad63e-74a7-458d-893c-885bf85ae008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:47 compute-0 ovn_controller[145204]: 2025-12-13T04:15:47Z|00139|binding|INFO|Releasing lport 5b3ad63e-74a7-458d-893c-885bf85ae008 from this chassis (sb_readonly=0)
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.605 243708 INFO nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Took 6.72 seconds to spawn the instance on the hypervisor.
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.605 243708 DEBUG nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.625 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.626 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.627 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3ee865-a4f0-4c41-ade2-71c600dfa432]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.629 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.pid.haproxy
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID bfdc82ee-37dc-4f9b-b711-c6c9f87b443a
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:15:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:47.629 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'env', 'PROCESS_TAG=haproxy-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.665 243708 INFO nova.compute.manager [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Took 7.85 seconds to build instance.
Dec 13 04:15:47 compute-0 nova_compute[243704]: 2025-12-13 04:15:47.680 243708 DEBUG oslo_concurrency.lockutils [None req-87ef5dcc-b4c6-467b-bb0e-3fcae592c4b5 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.951s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:48 compute-0 podman[262728]: 2025-12-13 04:15:48.020827891 +0000 UTC m=+0.057354553 container create 82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:15:48 compute-0 systemd[1]: Started libpod-conmon-82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6.scope.
Dec 13 04:15:48 compute-0 podman[262728]: 2025-12-13 04:15:47.992286789 +0000 UTC m=+0.028813451 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:15:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:15:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c29424d3277f542f36d6eb3a5dc1d325fac043c05ab3df01b09d992238b3085/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:15:48 compute-0 podman[262728]: 2025-12-13 04:15:48.137153737 +0000 UTC m=+0.173680419 container init 82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 04:15:48 compute-0 podman[262728]: 2025-12-13 04:15:48.145112742 +0000 UTC m=+0.181639414 container start 82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0)
Dec 13 04:15:48 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [NOTICE]   (262748) : New worker (262750) forked
Dec 13 04:15:48 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [NOTICE]   (262748) : Loading success.
Dec 13 04:15:48 compute-0 ceph-mon[75071]: osdmap e244: 3 total, 3 up, 3 in
Dec 13 04:15:48 compute-0 ceph-mon[75071]: pgmap v1197: 305 pgs: 305 active+clean; 213 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 155 KiB/s rd, 5.3 MiB/s wr, 221 op/s
Dec 13 04:15:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2811419237' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2811419237' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Dec 13 04:15:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Dec 13 04:15:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 59 KiB/s wr, 542 op/s
Dec 13 04:15:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Dec 13 04:15:49 compute-0 nova_compute[243704]: 2025-12-13 04:15:49.620 243708 DEBUG nova.compute.manager [req-c3c1b144-751d-458b-9038-7270e1cc1009 req-6014485c-6a6f-4202-97d5-6d5c51f119f1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:49 compute-0 nova_compute[243704]: 2025-12-13 04:15:49.621 243708 DEBUG oslo_concurrency.lockutils [req-c3c1b144-751d-458b-9038-7270e1cc1009 req-6014485c-6a6f-4202-97d5-6d5c51f119f1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:15:49 compute-0 nova_compute[243704]: 2025-12-13 04:15:49.621 243708 DEBUG oslo_concurrency.lockutils [req-c3c1b144-751d-458b-9038-7270e1cc1009 req-6014485c-6a6f-4202-97d5-6d5c51f119f1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:15:49 compute-0 nova_compute[243704]: 2025-12-13 04:15:49.621 243708 DEBUG oslo_concurrency.lockutils [req-c3c1b144-751d-458b-9038-7270e1cc1009 req-6014485c-6a6f-4202-97d5-6d5c51f119f1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:15:49 compute-0 nova_compute[243704]: 2025-12-13 04:15:49.622 243708 DEBUG nova.compute.manager [req-c3c1b144-751d-458b-9038-7270e1cc1009 req-6014485c-6a6f-4202-97d5-6d5c51f119f1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] No waiting events found dispatching network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:15:49 compute-0 nova_compute[243704]: 2025-12-13 04:15:49.622 243708 WARNING nova.compute.manager [req-c3c1b144-751d-458b-9038-7270e1cc1009 req-6014485c-6a6f-4202-97d5-6d5c51f119f1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received unexpected event network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 for instance with vm_state active and task_state None.
Dec 13 04:15:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:15:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/731569736' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:15:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/731569736' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:50 compute-0 nova_compute[243704]: 2025-12-13 04:15:50.244 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:50 compute-0 ceph-mon[75071]: pgmap v1199: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 59 KiB/s wr, 542 op/s
Dec 13 04:15:50 compute-0 ceph-mon[75071]: osdmap e245: 3 total, 3 up, 3 in
Dec 13 04:15:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/731569736' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/731569736' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e245 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Dec 13 04:15:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Dec 13 04:15:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Dec 13 04:15:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:15:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134255858' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:15:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3134255858' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 45 KiB/s wr, 408 op/s
Dec 13 04:15:51 compute-0 nova_compute[243704]: 2025-12-13 04:15:51.741 243708 DEBUG nova.compute.manager [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-changed-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:15:51 compute-0 nova_compute[243704]: 2025-12-13 04:15:51.741 243708 DEBUG nova.compute.manager [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Refreshing instance network info cache due to event network-changed-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:15:51 compute-0 nova_compute[243704]: 2025-12-13 04:15:51.741 243708 DEBUG oslo_concurrency.lockutils [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:15:51 compute-0 nova_compute[243704]: 2025-12-13 04:15:51.741 243708 DEBUG oslo_concurrency.lockutils [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:15:51 compute-0 nova_compute[243704]: 2025-12-13 04:15:51.742 243708 DEBUG nova.network.neutron [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Refreshing network info cache for port f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:15:52 compute-0 ceph-mon[75071]: osdmap e246: 3 total, 3 up, 3 in
Dec 13 04:15:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3134255858' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:15:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3134255858' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:15:52 compute-0 ceph-mon[75071]: pgmap v1201: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 45 KiB/s wr, 408 op/s
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00035339692859683193 of space, bias 1.0, pg target 0.10601907857904957 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011095539110902465 of space, bias 1.0, pg target 0.33286617332707397 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 9.112442860172046e-07 of space, bias 1.0, pg target 0.00027337328580516137 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666084781385162 of space, bias 1.0, pg target 0.19982543441554862 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3865910370057856e-06 of space, bias 4.0, pg target 0.0016639092444069427 quantized to 16 (current 16)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:15:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:15:52 compute-0 nova_compute[243704]: 2025-12-13 04:15:52.515 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:53 compute-0 nova_compute[243704]: 2025-12-13 04:15:53.474 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:53.474 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:15:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:53.476 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:15:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 38 KiB/s wr, 344 op/s
Dec 13 04:15:53 compute-0 nova_compute[243704]: 2025-12-13 04:15:53.711 243708 DEBUG nova.network.neutron [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Updated VIF entry in instance network info cache for port f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:15:53 compute-0 nova_compute[243704]: 2025-12-13 04:15:53.711 243708 DEBUG nova.network.neutron [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Updating instance_info_cache with network_info: [{"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:15:53 compute-0 nova_compute[243704]: 2025-12-13 04:15:53.727 243708 DEBUG oslo_concurrency.lockutils [req-7bc62110-d0be-4755-8418-8baa0e90e163 req-33283d97-68cb-4b3b-86ba-4b8c3484ff8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-2aaef3c8-05f3-441e-b2ac-969ccd8305e3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:15:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:15:54.479 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:15:54 compute-0 ceph-mon[75071]: pgmap v1202: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 38 KiB/s wr, 344 op/s
Dec 13 04:15:55 compute-0 nova_compute[243704]: 2025-12-13 04:15:55.246 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 32 KiB/s wr, 319 op/s
Dec 13 04:15:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e246 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:15:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Dec 13 04:15:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Dec 13 04:15:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Dec 13 04:15:56 compute-0 ceph-mon[75071]: pgmap v1203: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 32 KiB/s wr, 319 op/s
Dec 13 04:15:56 compute-0 ceph-mon[75071]: osdmap e247: 3 total, 3 up, 3 in
Dec 13 04:15:56 compute-0 podman[262759]: 2025-12-13 04:15:56.949446505 +0000 UTC m=+0.092340889 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:15:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.9 KiB/s wr, 47 op/s
Dec 13 04:15:57 compute-0 nova_compute[243704]: 2025-12-13 04:15:57.517 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:15:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Dec 13 04:15:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Dec 13 04:15:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Dec 13 04:15:58 compute-0 ceph-mon[75071]: pgmap v1205: 305 pgs: 305 active+clean; 213 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 2.9 KiB/s wr, 47 op/s
Dec 13 04:15:58 compute-0 ceph-mon[75071]: osdmap e248: 3 total, 3 up, 3 in
Dec 13 04:15:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:15:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/331563114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 217 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 242 KiB/s rd, 547 KiB/s wr, 104 op/s
Dec 13 04:15:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Dec 13 04:15:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/331563114' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:15:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Dec 13 04:15:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Dec 13 04:16:00 compute-0 nova_compute[243704]: 2025-12-13 04:16:00.248 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Dec 13 04:16:00 compute-0 ceph-mon[75071]: pgmap v1207: 305 pgs: 305 active+clean; 217 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 242 KiB/s rd, 547 KiB/s wr, 104 op/s
Dec 13 04:16:00 compute-0 ceph-mon[75071]: osdmap e249: 3 total, 3 up, 3 in
Dec 13 04:16:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Dec 13 04:16:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Dec 13 04:16:00 compute-0 ovn_controller[145204]: 2025-12-13T04:16:00Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ab:02:82 10.100.0.10
Dec 13 04:16:00 compute-0 ovn_controller[145204]: 2025-12-13T04:16:00Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ab:02:82 10.100.0.10
Dec 13 04:16:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e250 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1210: 305 pgs: 305 active+clean; 217 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 818 KiB/s wr, 85 op/s
Dec 13 04:16:01 compute-0 ceph-mon[75071]: osdmap e250: 3 total, 3 up, 3 in
Dec 13 04:16:02 compute-0 nova_compute[243704]: 2025-12-13 04:16:02.518 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:02 compute-0 ceph-mon[75071]: pgmap v1210: 305 pgs: 305 active+clean; 217 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 818 KiB/s wr, 85 op/s
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.311 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.311 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.329 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.401 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.402 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.412 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.413 243708 INFO nova.compute.claims [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:16:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 217 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 725 KiB/s wr, 75 op/s
Dec 13 04:16:03 compute-0 nova_compute[243704]: 2025-12-13 04:16:03.551 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Dec 13 04:16:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Dec 13 04:16:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Dec 13 04:16:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1458354557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.084 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.092 243708 DEBUG nova.compute.provider_tree [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.104 243708 DEBUG nova.scheduler.client.report [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.122 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.123 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.164 243708 INFO nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.166 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.167 243708 DEBUG nova.network.neutron [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.193 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.236 243708 INFO nova.virt.block_device [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Booting with volume snapshot 4e95c84a-2941-4603-91f6-b6c84d9f35ad at /dev/vda
Dec 13 04:16:04 compute-0 nova_compute[243704]: 2025-12-13 04:16:04.354 243708 DEBUG nova.policy [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b8c4a2342e4420d8140b403edbcba5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27927978f9684df1a72cecb32505e93b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:16:04 compute-0 ceph-mon[75071]: pgmap v1211: 305 pgs: 305 active+clean; 217 MiB data, 342 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 725 KiB/s wr, 75 op/s
Dec 13 04:16:04 compute-0 ceph-mon[75071]: osdmap e251: 3 total, 3 up, 3 in
Dec 13 04:16:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1458354557' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:05 compute-0 nova_compute[243704]: 2025-12-13 04:16:05.251 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 463 KiB/s rd, 3.6 MiB/s wr, 164 op/s
Dec 13 04:16:05 compute-0 nova_compute[243704]: 2025-12-13 04:16:05.585 243708 DEBUG nova.network.neutron [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Successfully created port: a473f08f-b0da-4f57-b165-cda72ecc69a8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:16:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674921270' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3674921270' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.195 243708 DEBUG nova.network.neutron [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Successfully updated port: a473f08f-b0da-4f57-b165-cda72ecc69a8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.221 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.221 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquired lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.222 243708 DEBUG nova.network.neutron [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.313 243708 DEBUG nova.compute.manager [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-changed-a473f08f-b0da-4f57-b165-cda72ecc69a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.314 243708 DEBUG nova.compute.manager [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Refreshing instance network info cache due to event network-changed-a473f08f-b0da-4f57-b165-cda72ecc69a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.315 243708 DEBUG oslo_concurrency.lockutils [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:16:06 compute-0 nova_compute[243704]: 2025-12-13 04:16:06.409 243708 DEBUG nova.network.neutron [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:16:06 compute-0 ceph-mon[75071]: pgmap v1213: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 463 KiB/s rd, 3.6 MiB/s wr, 164 op/s
Dec 13 04:16:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3674921270' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3674921270' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295860701' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/295860701' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:07 compute-0 nova_compute[243704]: 2025-12-13 04:16:07.318 243708 DEBUG nova.network.neutron [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Updating instance_info_cache with network_info: [{"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:16:07 compute-0 nova_compute[243704]: 2025-12-13 04:16:07.340 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Releasing lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:16:07 compute-0 nova_compute[243704]: 2025-12-13 04:16:07.340 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Instance network_info: |[{"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:16:07 compute-0 nova_compute[243704]: 2025-12-13 04:16:07.340 243708 DEBUG oslo_concurrency.lockutils [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:16:07 compute-0 nova_compute[243704]: 2025-12-13 04:16:07.341 243708 DEBUG nova.network.neutron [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Refreshing network info cache for port a473f08f-b0da-4f57-b165-cda72ecc69a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:16:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 2.7 MiB/s wr, 124 op/s
Dec 13 04:16:07 compute-0 nova_compute[243704]: 2025-12-13 04:16:07.521 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:07 compute-0 podman[262807]: 2025-12-13 04:16:07.916640968 +0000 UTC m=+0.060755345 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:16:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/295860701' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/295860701' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.247 243708 DEBUG os_brick.utils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.248 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.261 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.261 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[e887d2d7-646d-4f0d-9348-1c2d19d8f70f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.262 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.271 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.271 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[cecdd3aa-35b0-43d6-990c-097ef18ac365]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.273 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.284 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.284 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[6af8128f-b64e-44dc-8a3d-25513c99855d]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.286 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c53f6e-a965-4594-9394-f7556829930c]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.286 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.313 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.316 243708 DEBUG os_brick.initiator.connectors.lightos [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.317 243708 DEBUG os_brick.initiator.connectors.lightos [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.317 243708 DEBUG os_brick.initiator.connectors.lightos [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.318 243708 DEBUG os_brick.utils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:16:08 compute-0 nova_compute[243704]: 2025-12-13 04:16:08.318 243708 DEBUG nova.virt.block_device [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Updating existing volume attachment record: 200c958e-d8ff-4222-a0d7-04c72f9f6451 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:16:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/756224882' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/756224882' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:08 compute-0 ceph-mon[75071]: pgmap v1214: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 351 KiB/s rd, 2.7 MiB/s wr, 124 op/s
Dec 13 04:16:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/756224882' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/756224882' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/786180131' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.391 243708 DEBUG nova.network.neutron [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Updated VIF entry in instance network info cache for port a473f08f-b0da-4f57-b165-cda72ecc69a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.392 243708 DEBUG nova.network.neutron [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Updating instance_info_cache with network_info: [{"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.429 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.430 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.431 243708 INFO nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Creating image(s)
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.431 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.432 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Ensure instance console log exists: /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.432 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.432 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.433 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.435 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Start _get_guest_xml network_info=[{"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-12-13T04:15:57Z,direct_url=<?>,disk_format='qcow2',id=651f53de-db0a-4cb7-9fa7-a760e9acfe9e,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-659927317',owner='27927978f9684df1a72cecb32505e93b',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-12-13T04:15:57Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2a588a76-f680-4e66-9293-f7c13af1bbe9', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2a588a76-f680-4e66-9293-f7c13af1bbe9', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '83d8a96f-501e-4c0b-aed8-2099abf55b94', 'attached_at': '', 'detached_at': '', 'volume_id': '2a588a76-f680-4e66-9293-f7c13af1bbe9', 'serial': '2a588a76-f680-4e66-9293-f7c13af1bbe9'}, 'disk_bus': 'virtio', 'attachment_id': '200c958e-d8ff-4222-a0d7-04c72f9f6451', 'device_type': 'disk', 'delete_on_termination': True, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.440 243708 WARNING nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.480 243708 DEBUG oslo_concurrency.lockutils [req-ad4f8aa9-f2aa-4686-832d-9ada0de6503e req-5b836780-b73d-4a72-a373-7ba470c903a5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.482 243708 DEBUG nova.virt.libvirt.host [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.483 243708 DEBUG nova.virt.libvirt.host [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.487 243708 DEBUG nova.virt.libvirt.host [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.487 243708 DEBUG nova.virt.libvirt.host [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.488 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.488 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='d41d8cd98f00b204e9800998ecf8427e',container_format='bare',created_at=2025-12-13T04:15:57Z,direct_url=<?>,disk_format='qcow2',id=651f53de-db0a-4cb7-9fa7-a760e9acfe9e,min_disk=1,min_ram=0,name='tempest-TestVolumeBootPatternsnapshot-659927317',owner='27927978f9684df1a72cecb32505e93b',properties=ImageMetaProps,protected=<?>,size=0,status='active',tags=<?>,updated_at=2025-12-13T04:15:57Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.489 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.489 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.490 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.490 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.490 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.491 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.491 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.492 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.492 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.492 243708 DEBUG nova.virt.hardware [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:16:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 2.4 MiB/s wr, 184 op/s
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.523 243708 DEBUG nova.storage.rbd_utils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 83d8a96f-501e-4c0b-aed8-2099abf55b94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:16:09 compute-0 nova_compute[243704]: 2025-12-13 04:16:09.529 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2185841349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2185841349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/786180131' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2185841349' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2185841349' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2502040292' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.097 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.568s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.124 243708 DEBUG nova.virt.libvirt.vif [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:16:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-782771131',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-782771131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-782771131',id=14,image_ref='651f53de-db0a-4cb7-9fa7-a760e9acfe9e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHAHwcQsG0mq0B6zDm1P2JmV0qZHUx42rz2Ur40ayveosdP8UqDZ1iSQhU1MtqmgtPFIXr/WR/MzP0cUIuReWE77iI+Uo5KnFVsmoHYk0k6bPkxnfBA0F02V5c2ItG0FIQ==',key_name='tempest-keypair-1178564669',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-b0uyddx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-236547311',image_owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:16:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=83d8a96f-501e-4c0b-aed8-2099abf55b94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') 
vif={"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.125 243708 DEBUG nova.network.os_vif_util [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.125 243708 DEBUG nova.network.os_vif_util [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:04:82,bridge_name='br-int',has_traffic_filtering=True,id=a473f08f-b0da-4f57-b165-cda72ecc69a8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa473f08f-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.127 243708 DEBUG nova.objects.instance [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'pci_devices' on Instance uuid 83d8a96f-501e-4c0b-aed8-2099abf55b94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.142 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <uuid>83d8a96f-501e-4c0b-aed8-2099abf55b94</uuid>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <name>instance-0000000e</name>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBootPattern-image-snapshot-server-782771131</nova:name>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:16:09</nova:creationTime>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:user uuid="9b8c4a2342e4420d8140b403edbcba5a">tempest-TestVolumeBootPattern-236547311-project-member</nova:user>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:project uuid="27927978f9684df1a72cecb32505e93b">tempest-TestVolumeBootPattern-236547311</nova:project>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="651f53de-db0a-4cb7-9fa7-a760e9acfe9e"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <nova:port uuid="a473f08f-b0da-4f57-b165-cda72ecc69a8">
Dec 13 04:16:10 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <system>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <entry name="serial">83d8a96f-501e-4c0b-aed8-2099abf55b94</entry>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <entry name="uuid">83d8a96f-501e-4c0b-aed8-2099abf55b94</entry>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </system>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <os>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   </os>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <features>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   </features>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/83d8a96f-501e-4c0b-aed8-2099abf55b94_disk.config">
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       </source>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-2a588a76-f680-4e66-9293-f7c13af1bbe9">
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       </source>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:16:10 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <serial>2a588a76-f680-4e66-9293-f7c13af1bbe9</serial>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:85:04:82"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <target dev="tapa473f08f-b0"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/console.log" append="off"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <video>
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </video>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <input type="keyboard" bus="usb"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:16:10 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:16:10 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:16:10 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:16:10 compute-0 nova_compute[243704]: </domain>
Dec 13 04:16:10 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.144 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Preparing to wait for external event network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.144 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.145 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.145 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.146 243708 DEBUG nova.virt.libvirt.vif [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:16:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-782771131',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-782771131',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-782771131',id=14,image_ref='651f53de-db0a-4cb7-9fa7-a760e9acfe9e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHAHwcQsG0mq0B6zDm1P2JmV0qZHUx42rz2Ur40ayveosdP8UqDZ1iSQhU1MtqmgtPFIXr/WR/MzP0cUIuReWE77iI+Uo5KnFVsmoHYk0k6bPkxnfBA0F02V5c2ItG0FIQ==',key_name='tempest-keypair-1178564669',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-b0uyddx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-236547311',image_owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:16:04Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=83d8a96f-501e-4c0b-aed8-2099abf55b94,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='bui
lding') vif={"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.146 243708 DEBUG nova.network.os_vif_util [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.147 243708 DEBUG nova.network.os_vif_util [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:85:04:82,bridge_name='br-int',has_traffic_filtering=True,id=a473f08f-b0da-4f57-b165-cda72ecc69a8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa473f08f-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.148 243708 DEBUG os_vif [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:04:82,bridge_name='br-int',has_traffic_filtering=True,id=a473f08f-b0da-4f57-b165-cda72ecc69a8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa473f08f-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.149 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.149 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.150 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.153 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.154 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa473f08f-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.154 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa473f08f-b0, col_values=(('external_ids', {'iface-id': 'a473f08f-b0da-4f57-b165-cda72ecc69a8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:85:04:82', 'vm-uuid': '83d8a96f-501e-4c0b-aed8-2099abf55b94'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.156 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:10 compute-0 NetworkManager[48899]: <info>  [1765599370.1577] manager: (tapa473f08f-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.158 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.165 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.166 243708 INFO os_vif [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:85:04:82,bridge_name='br-int',has_traffic_filtering=True,id=a473f08f-b0da-4f57-b165-cda72ecc69a8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa473f08f-b0')
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.218 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.219 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.219 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No VIF found with MAC fa:16:3e:85:04:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.220 243708 INFO nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Using config drive
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.240 243708 DEBUG nova.storage.rbd_utils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 83d8a96f-501e-4c0b-aed8-2099abf55b94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.651 243708 INFO nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Creating config drive at /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/disk.config
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.657 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkomtop7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.793 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzkomtop7" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.830 243708 DEBUG nova.storage.rbd_utils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 83d8a96f-501e-4c0b-aed8-2099abf55b94_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.837 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/disk.config 83d8a96f-501e-4c0b-aed8-2099abf55b94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:10 compute-0 ceph-mon[75071]: pgmap v1215: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 366 KiB/s rd, 2.4 MiB/s wr, 184 op/s
Dec 13 04:16:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2502040292' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.981 243708 DEBUG oslo_concurrency.processutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/disk.config 83d8a96f-501e-4c0b-aed8-2099abf55b94_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:10 compute-0 nova_compute[243704]: 2025-12-13 04:16:10.983 243708 INFO nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Deleting local config drive /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94/disk.config because it was imported into RBD.
Dec 13 04:16:11 compute-0 NetworkManager[48899]: <info>  [1765599371.0428] manager: (tapa473f08f-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/87)
Dec 13 04:16:11 compute-0 kernel: tapa473f08f-b0: entered promiscuous mode
Dec 13 04:16:11 compute-0 ovn_controller[145204]: 2025-12-13T04:16:11Z|00140|binding|INFO|Claiming lport a473f08f-b0da-4f57-b165-cda72ecc69a8 for this chassis.
Dec 13 04:16:11 compute-0 ovn_controller[145204]: 2025-12-13T04:16:11Z|00141|binding|INFO|a473f08f-b0da-4f57-b165-cda72ecc69a8: Claiming fa:16:3e:85:04:82 10.100.0.6
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.046 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.054 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:04:82 10.100.0.6'], port_security=['fa:16:3e:85:04:82 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '83d8a96f-501e-4c0b-aed8-2099abf55b94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8d81fb53-f8e9-4a14-8f6f-c86adc369008', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=a473f08f-b0da-4f57-b165-cda72ecc69a8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.055 154842 INFO neutron.agent.ovn.metadata.agent [-] Port a473f08f-b0da-4f57-b165-cda72ecc69a8 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 bound to our chassis
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.056 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:16:11 compute-0 ovn_controller[145204]: 2025-12-13T04:16:11Z|00142|binding|INFO|Setting lport a473f08f-b0da-4f57-b165-cda72ecc69a8 ovn-installed in OVS
Dec 13 04:16:11 compute-0 ovn_controller[145204]: 2025-12-13T04:16:11Z|00143|binding|INFO|Setting lport a473f08f-b0da-4f57-b165-cda72ecc69a8 up in Southbound
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.070 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.073 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ba3add3e-8908-4317-a00b-c386e33d06e9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:11 compute-0 systemd-udevd[262948]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:16:11 compute-0 systemd-machined[206767]: New machine qemu-14-instance-0000000e.
Dec 13 04:16:11 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000e.
Dec 13 04:16:11 compute-0 NetworkManager[48899]: <info>  [1765599371.1005] device (tapa473f08f-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:16:11 compute-0 NetworkManager[48899]: <info>  [1765599371.1015] device (tapa473f08f-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.108 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[a250cf12-05c6-416d-b080-52aafe84abc5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.111 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[75d1d447-851e-482b-84b4-e2c0f7ca6427]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4052432496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4052432496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.139 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d804e1-ae0c-446f-a765-0a037986821e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.157 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[978aacb3-a6ae-44c9-9d8b-99df4b5aa882]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 616, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405774, 'reachable_time': 33450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262959, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Dec 13 04:16:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.174 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[aa84c79a-dbfa-4c48-971d-e0e9c846886e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405791, 'tstamp': 405791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262961, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405794, 'tstamp': 405794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262961, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.176 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.178 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.180 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.180 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.180 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.181 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:11.181 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.248 243708 DEBUG nova.compute.manager [req-44eb328a-d024-4364-a04e-adb6ad4ce113 req-1854ac8d-33ac-4d3c-98db-e5dcd580cf30 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.248 243708 DEBUG oslo_concurrency.lockutils [req-44eb328a-d024-4364-a04e-adb6ad4ce113 req-1854ac8d-33ac-4d3c-98db-e5dcd580cf30 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.249 243708 DEBUG oslo_concurrency.lockutils [req-44eb328a-d024-4364-a04e-adb6ad4ce113 req-1854ac8d-33ac-4d3c-98db-e5dcd580cf30 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.249 243708 DEBUG oslo_concurrency.lockutils [req-44eb328a-d024-4364-a04e-adb6ad4ce113 req-1854ac8d-33ac-4d3c-98db-e5dcd580cf30 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.249 243708 DEBUG nova.compute.manager [req-44eb328a-d024-4364-a04e-adb6ad4ce113 req-1854ac8d-33ac-4d3c-98db-e5dcd580cf30 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Processing event network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:16:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 2.7 MiB/s wr, 204 op/s
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.632 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.633 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599371.631105, 83d8a96f-501e-4c0b-aed8-2099abf55b94 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.633 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] VM Started (Lifecycle Event)
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.636 243708 DEBUG nova.virt.libvirt.driver [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.639 243708 INFO nova.virt.libvirt.driver [-] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Instance spawned successfully.
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.640 243708 INFO nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Took 2.21 seconds to spawn the instance on the hypervisor.
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.640 243708 DEBUG nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.670 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.673 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.697 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.697 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599371.6314318, 83d8a96f-501e-4c0b-aed8-2099abf55b94 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.698 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] VM Paused (Lifecycle Event)
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.715 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.717 243708 INFO nova.compute.manager [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Took 8.34 seconds to build instance.
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.720 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599371.6352632, 83d8a96f-501e-4c0b-aed8-2099abf55b94 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.721 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] VM Resumed (Lifecycle Event)
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.737 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.739 243708 DEBUG oslo_concurrency.lockutils [None req-196929a7-b5b6-4472-aa73-0278c16b4cfd 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.740 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.871 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.875 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:16:11 compute-0 nova_compute[243704]: 2025-12-13 04:16:11.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:16:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4052432496' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4052432496' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:11 compute-0 ceph-mon[75071]: osdmap e252: 3 total, 3 up, 3 in
Dec 13 04:16:12 compute-0 nova_compute[243704]: 2025-12-13 04:16:12.168 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:16:12 compute-0 nova_compute[243704]: 2025-12-13 04:16:12.169 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:16:12 compute-0 nova_compute[243704]: 2025-12-13 04:16:12.170 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:16:12 compute-0 nova_compute[243704]: 2025-12-13 04:16:12.170 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:12 compute-0 nova_compute[243704]: 2025-12-13 04:16:12.523 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:12 compute-0 ceph-mon[75071]: pgmap v1217: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 407 KiB/s rd, 2.7 MiB/s wr, 204 op/s
Dec 13 04:16:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Dec 13 04:16:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Dec 13 04:16:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Dec 13 04:16:13 compute-0 nova_compute[243704]: 2025-12-13 04:16:13.373 243708 DEBUG nova.compute.manager [req-efe8f538-7556-4b99-850a-7d3129a088f1 req-281c7fa9-f6b0-4e54-a6ac-580a1080ae9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:13 compute-0 nova_compute[243704]: 2025-12-13 04:16:13.374 243708 DEBUG oslo_concurrency.lockutils [req-efe8f538-7556-4b99-850a-7d3129a088f1 req-281c7fa9-f6b0-4e54-a6ac-580a1080ae9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:13 compute-0 nova_compute[243704]: 2025-12-13 04:16:13.374 243708 DEBUG oslo_concurrency.lockutils [req-efe8f538-7556-4b99-850a-7d3129a088f1 req-281c7fa9-f6b0-4e54-a6ac-580a1080ae9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:13 compute-0 nova_compute[243704]: 2025-12-13 04:16:13.375 243708 DEBUG oslo_concurrency.lockutils [req-efe8f538-7556-4b99-850a-7d3129a088f1 req-281c7fa9-f6b0-4e54-a6ac-580a1080ae9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:13 compute-0 nova_compute[243704]: 2025-12-13 04:16:13.375 243708 DEBUG nova.compute.manager [req-efe8f538-7556-4b99-850a-7d3129a088f1 req-281c7fa9-f6b0-4e54-a6ac-580a1080ae9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] No waiting events found dispatching network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:16:13 compute-0 nova_compute[243704]: 2025-12-13 04:16:13.375 243708 WARNING nova.compute.manager [req-efe8f538-7556-4b99-850a-7d3129a088f1 req-281c7fa9-f6b0-4e54-a6ac-580a1080ae9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received unexpected event network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 for instance with vm_state active and task_state None.
Dec 13 04:16:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 20 KiB/s wr, 81 op/s
Dec 13 04:16:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1635723754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1635723754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.124 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updating instance_info_cache with network_info: [{"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.138 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-49ec6453-af58-4bf0-89f5-4faf5d3a92c5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.139 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.140 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.174 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.175 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.175 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.176 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.176 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:14 compute-0 ceph-mon[75071]: osdmap e253: 3 total, 3 up, 3 in
Dec 13 04:16:14 compute-0 ceph-mon[75071]: pgmap v1219: 305 pgs: 305 active+clean; 246 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 20 KiB/s wr, 81 op/s
Dec 13 04:16:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1635723754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1635723754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281001288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.743 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.854 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.854 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.857 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.858 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.861 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:16:14 compute-0 nova_compute[243704]: 2025-12-13 04:16:14.861 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000000c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.037 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.039 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4024MB free_disk=59.942455008625984GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.039 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.040 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.107 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.108 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.108 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 83d8a96f-501e-4c0b-aed8-2099abf55b94 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.108 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.108 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.157 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.159 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4281001288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 49 KiB/s wr, 264 op/s
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.535 243708 DEBUG nova.compute.manager [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-changed-a473f08f-b0da-4f57-b165-cda72ecc69a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.536 243708 DEBUG nova.compute.manager [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Refreshing instance network info cache due to event network-changed-a473f08f-b0da-4f57-b165-cda72ecc69a8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.536 243708 DEBUG oslo_concurrency.lockutils [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.537 243708 DEBUG oslo_concurrency.lockutils [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.537 243708 DEBUG nova.network.neutron [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Refreshing network info cache for port a473f08f-b0da-4f57-b165-cda72ecc69a8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:16:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1139242564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.732 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.574s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.737 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.750 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.779 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:16:15 compute-0 nova_compute[243704]: 2025-12-13 04:16:15.780 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:15 compute-0 podman[263050]: 2025-12-13 04:16:15.922170416 +0000 UTC m=+0.058195005 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:16:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e253 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.180561) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599376180682, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1405, "num_deletes": 264, "total_data_size": 1875086, "memory_usage": 1906248, "flush_reason": "Manual Compaction"}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599376194304, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1828708, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23710, "largest_seqno": 25114, "table_properties": {"data_size": 1821870, "index_size": 3913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14661, "raw_average_key_size": 20, "raw_value_size": 1807910, "raw_average_value_size": 2490, "num_data_blocks": 172, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 264, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599297, "oldest_key_time": 1765599297, "file_creation_time": 1765599376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 13768 microseconds, and 6797 cpu microseconds.
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.194341) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1828708 bytes OK
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.194363) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.195686) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.195700) EVENT_LOG_v1 {"time_micros": 1765599376195695, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.195719) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1868575, prev total WAL file size 1868575, number of live WAL files 2.
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.196616) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373537' seq:0, type:0; will stop at (end)
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1785KB)], [53(9131KB)]
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599376196769, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 11179830, "oldest_snapshot_seqno": -1}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5333 keys, 11079008 bytes, temperature: kUnknown
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599376426327, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 11079008, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11037305, "index_size": 27257, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 132127, "raw_average_key_size": 24, "raw_value_size": 10935349, "raw_average_value_size": 2050, "num_data_blocks": 1129, "num_entries": 5333, "num_filter_entries": 5333, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599376, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:16:16 compute-0 ceph-mon[75071]: pgmap v1220: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 49 KiB/s wr, 264 op/s
Dec 13 04:16:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1139242564' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.426586) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 11079008 bytes
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.428364) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 48.7 rd, 48.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 8.9 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(12.2) write-amplify(6.1) OK, records in: 5873, records dropped: 540 output_compression: NoCompression
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.428386) EVENT_LOG_v1 {"time_micros": 1765599376428375, "job": 28, "event": "compaction_finished", "compaction_time_micros": 229638, "compaction_time_cpu_micros": 47054, "output_level": 6, "num_output_files": 1, "total_output_size": 11079008, "num_input_records": 5873, "num_output_records": 5333, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599376428906, "job": 28, "event": "table_file_deletion", "file_number": 55}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599376430632, "job": 28, "event": "table_file_deletion", "file_number": 53}
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.196449) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.430703) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.430717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.430719) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.430721) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:16:16 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:16:16.430722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:16:16 compute-0 nova_compute[243704]: 2025-12-13 04:16:16.517 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:16 compute-0 nova_compute[243704]: 2025-12-13 04:16:16.518 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:16 compute-0 nova_compute[243704]: 2025-12-13 04:16:16.518 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:16 compute-0 nova_compute[243704]: 2025-12-13 04:16:16.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:16 compute-0 nova_compute[243704]: 2025-12-13 04:16:16.935 243708 DEBUG nova.network.neutron [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Updated VIF entry in instance network info cache for port a473f08f-b0da-4f57-b165-cda72ecc69a8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:16:16 compute-0 nova_compute[243704]: 2025-12-13 04:16:16.935 243708 DEBUG nova.network.neutron [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Updating instance_info_cache with network_info: [{"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:16:16 compute-0 nova_compute[243704]: 2025-12-13 04:16:16.956 243708 DEBUG oslo_concurrency.lockutils [req-7fd977ce-3271-4cc1-8d25-68cc4f1d68b6 req-fc1e29fb-6d4c-4c96-8112-af17c29cca7e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-83d8a96f-501e-4c0b-aed8-2099abf55b94" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:16:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1221: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 183 op/s
Dec 13 04:16:17 compute-0 nova_compute[243704]: 2025-12-13 04:16:17.528 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Dec 13 04:16:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Dec 13 04:16:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Dec 13 04:16:18 compute-0 ceph-mon[75071]: pgmap v1221: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 29 KiB/s wr, 183 op/s
Dec 13 04:16:18 compute-0 nova_compute[243704]: 2025-12-13 04:16:18.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:18 compute-0 nova_compute[243704]: 2025-12-13 04:16:18.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:18 compute-0 nova_compute[243704]: 2025-12-13 04:16:18.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:16:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 196 op/s
Dec 13 04:16:19 compute-0 ceph-mon[75071]: osdmap e254: 3 total, 3 up, 3 in
Dec 13 04:16:19 compute-0 nova_compute[243704]: 2025-12-13 04:16:19.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:16:20 compute-0 nova_compute[243704]: 2025-12-13 04:16:20.158 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:20 compute-0 ceph-mon[75071]: pgmap v1223: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 30 KiB/s wr, 196 op/s
Dec 13 04:16:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e254 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Dec 13 04:16:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Dec 13 04:16:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Dec 13 04:16:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 196 op/s
Dec 13 04:16:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3455088661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3455088661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:22 compute-0 ceph-mon[75071]: osdmap e255: 3 total, 3 up, 3 in
Dec 13 04:16:22 compute-0 ceph-mon[75071]: pgmap v1225: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 30 KiB/s wr, 196 op/s
Dec 13 04:16:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3455088661' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3455088661' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.530 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.617 243708 DEBUG oslo_concurrency.lockutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.617 243708 DEBUG oslo_concurrency.lockutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.635 243708 DEBUG nova.objects.instance [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'flavor' on Instance uuid 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.653 243708 INFO nova.virt.libvirt.driver [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Ignoring supplied device name: /dev/vdb
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.663 243708 DEBUG oslo_concurrency.lockutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.861 243708 DEBUG oslo_concurrency.lockutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.861 243708 DEBUG oslo_concurrency.lockutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.862 243708 INFO nova.compute.manager [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Attaching volume be93d68d-4f48-4350-8298-b86f5b683fe4 to /dev/vdb
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.987 243708 DEBUG os_brick.utils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:16:22 compute-0 nova_compute[243704]: 2025-12-13 04:16:22.988 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.003 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.004 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[513dd4bf-9cf5-40de-997b-3ff49f6eb64c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.005 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.015 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.016 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[918bd8c5-085d-4e03-99ae-1899a52d6a50]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.017 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.027 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.028 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[6eae3c89-84b5-4d7b-a29a-f285490fe625]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.029 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[599613ea-8d67-460e-8732-5de0adcda44b]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.029 243708 DEBUG oslo_concurrency.processutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.053 243708 DEBUG oslo_concurrency.processutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.055 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.055 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.056 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.056 243708 DEBUG os_brick.utils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] <== get_connector_properties: return (68ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.056 243708 DEBUG nova.virt.block_device [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Updating existing volume attachment record: 9f5908b3-1df0-48e0-8b00-151a61dda917 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:16:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Dec 13 04:16:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Dec 13 04:16:23 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Dec 13 04:16:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 KiB/s wr, 17 op/s
Dec 13 04:16:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1691421136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.829 243708 DEBUG nova.objects.instance [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'flavor' on Instance uuid 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.864 243708 DEBUG nova.virt.libvirt.driver [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Attempting to attach volume be93d68d-4f48-4350-8298-b86f5b683fe4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.868 243708 DEBUG nova.virt.libvirt.guest [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:16:23 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:16:23 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-be93d68d-4f48-4350-8298-b86f5b683fe4">
Dec 13 04:16:23 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:16:23 compute-0 nova_compute[243704]:   </source>
Dec 13 04:16:23 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:16:23 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:16:23 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:16:23 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:16:23 compute-0 nova_compute[243704]:   <serial>be93d68d-4f48-4350-8298-b86f5b683fe4</serial>
Dec 13 04:16:23 compute-0 nova_compute[243704]: </disk>
Dec 13 04:16:23 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.965 243708 DEBUG nova.virt.libvirt.driver [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.966 243708 DEBUG nova.virt.libvirt.driver [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.966 243708 DEBUG nova.virt.libvirt.driver [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:16:23 compute-0 nova_compute[243704]: 2025-12-13 04:16:23.966 243708 DEBUG nova.virt.libvirt.driver [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No VIF found with MAC fa:16:3e:ab:02:82, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:16:24 compute-0 nova_compute[243704]: 2025-12-13 04:16:24.134 243708 DEBUG oslo_concurrency.lockutils [None req-2aefbb05-8ae2-4e0d-9ee7-a864041afc62 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Dec 13 04:16:24 compute-0 ceph-mon[75071]: osdmap e256: 3 total, 3 up, 3 in
Dec 13 04:16:24 compute-0 ceph-mon[75071]: pgmap v1227: 305 pgs: 305 active+clean; 247 MiB data, 367 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.0 KiB/s wr, 17 op/s
Dec 13 04:16:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1691421136' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Dec 13 04:16:24 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Dec 13 04:16:25 compute-0 nova_compute[243704]: 2025-12-13 04:16:25.160 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:25 compute-0 ceph-mon[75071]: osdmap e257: 3 total, 3 up, 3 in
Dec 13 04:16:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 249 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 61 KiB/s wr, 152 op/s
Dec 13 04:16:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3795471509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:26 compute-0 ovn_controller[145204]: 2025-12-13T04:16:26Z|00024|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.6
Dec 13 04:16:26 compute-0 ovn_controller[145204]: 2025-12-13T04:16:26Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:85:04:82 10.100.0.6
Dec 13 04:16:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:26 compute-0 ceph-mon[75071]: pgmap v1229: 305 pgs: 305 active+clean; 249 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 61 KiB/s wr, 152 op/s
Dec 13 04:16:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3795471509' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Dec 13 04:16:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Dec 13 04:16:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Dec 13 04:16:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 249 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 61 KiB/s wr, 152 op/s
Dec 13 04:16:27 compute-0 nova_compute[243704]: 2025-12-13 04:16:27.533 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:27 compute-0 podman[263097]: 2025-12-13 04:16:27.985939168 +0000 UTC m=+0.116565144 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 13 04:16:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Dec 13 04:16:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Dec 13 04:16:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Dec 13 04:16:28 compute-0 ceph-mon[75071]: osdmap e258: 3 total, 3 up, 3 in
Dec 13 04:16:28 compute-0 ceph-mon[75071]: pgmap v1231: 305 pgs: 305 active+clean; 249 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 201 KiB/s rd, 61 KiB/s wr, 152 op/s
Dec 13 04:16:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300723697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3300723697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Dec 13 04:16:29 compute-0 ceph-mon[75071]: osdmap e259: 3 total, 3 up, 3 in
Dec 13 04:16:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3300723697' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3300723697' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Dec 13 04:16:29 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Dec 13 04:16:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 261 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 340 op/s
Dec 13 04:16:30 compute-0 ovn_controller[145204]: 2025-12-13T04:16:30Z|00026|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.6
Dec 13 04:16:30 compute-0 ovn_controller[145204]: 2025-12-13T04:16:30Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:85:04:82 10.100.0.6
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.164 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Dec 13 04:16:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Dec 13 04:16:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Dec 13 04:16:30 compute-0 ceph-mon[75071]: osdmap e260: 3 total, 3 up, 3 in
Dec 13 04:16:30 compute-0 ceph-mon[75071]: pgmap v1234: 305 pgs: 305 active+clean; 261 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 340 op/s
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.464 243708 DEBUG oslo_concurrency.lockutils [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.465 243708 DEBUG oslo_concurrency.lockutils [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.476 243708 INFO nova.compute.manager [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Detaching volume be93d68d-4f48-4350-8298-b86f5b683fe4
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.608 243708 INFO nova.virt.block_device [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Attempting to driver detach volume be93d68d-4f48-4350-8298-b86f5b683fe4 from mountpoint /dev/vdb
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.624 243708 DEBUG nova.virt.libvirt.driver [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Attempting to detach device vdb from instance 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.626 243708 DEBUG nova.virt.libvirt.guest [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-be93d68d-4f48-4350-8298-b86f5b683fe4">
Dec 13 04:16:30 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   </source>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <serial>be93d68d-4f48-4350-8298-b86f5b683fe4</serial>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]: </disk>
Dec 13 04:16:30 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.635 243708 INFO nova.virt.libvirt.driver [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully detached device vdb from instance 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 from the persistent domain config.
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.635 243708 DEBUG nova.virt.libvirt.driver [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.636 243708 DEBUG nova.virt.libvirt.guest [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-be93d68d-4f48-4350-8298-b86f5b683fe4">
Dec 13 04:16:30 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   </source>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <serial>be93d68d-4f48-4350-8298-b86f5b683fe4</serial>
Dec 13 04:16:30 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:16:30 compute-0 nova_compute[243704]: </disk>
Dec 13 04:16:30 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.785 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599390.7851384, 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.789 243708 DEBUG nova.virt.libvirt.driver [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.792 243708 INFO nova.virt.libvirt.driver [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully detached device vdb from instance 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 from the live domain config.
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.938 243708 DEBUG nova.objects.instance [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'flavor' on Instance uuid 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:30 compute-0 nova_compute[243704]: 2025-12-13 04:16:30.968 243708 DEBUG oslo_concurrency.lockutils [None req-02cafc9b-08df-444f-9bf9-60af6f455c0d 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.503s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:31 compute-0 ovn_controller[145204]: 2025-12-13T04:16:31Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:85:04:82 10.100.0.6
Dec 13 04:16:31 compute-0 ovn_controller[145204]: 2025-12-13T04:16:31Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:85:04:82 10.100.0.6
Dec 13 04:16:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e261 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Dec 13 04:16:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Dec 13 04:16:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Dec 13 04:16:31 compute-0 ceph-mon[75071]: osdmap e261: 3 total, 3 up, 3 in
Dec 13 04:16:31 compute-0 ceph-mon[75071]: osdmap e262: 3 total, 3 up, 3 in
Dec 13 04:16:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 261 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 420 op/s
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.614 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.615 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.615 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.615 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.616 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.617 243708 INFO nova.compute.manager [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Terminating instance
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.619 243708 DEBUG nova.compute.manager [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:16:31 compute-0 kernel: tapf9a6c5d4-c4 (unregistering): left promiscuous mode
Dec 13 04:16:31 compute-0 NetworkManager[48899]: <info>  [1765599391.6749] device (tapf9a6c5d4-c4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.716 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:31 compute-0 ovn_controller[145204]: 2025-12-13T04:16:31Z|00144|binding|INFO|Releasing lport f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 from this chassis (sb_readonly=0)
Dec 13 04:16:31 compute-0 ovn_controller[145204]: 2025-12-13T04:16:31Z|00145|binding|INFO|Setting lport f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 down in Southbound
Dec 13 04:16:31 compute-0 ovn_controller[145204]: 2025-12-13T04:16:31Z|00146|binding|INFO|Removing iface tapf9a6c5d4-c4 ovn-installed in OVS
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.723 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:31 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:31.729 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ab:02:82 10.100.0.10'], port_security=['fa:16:3e:ab:02:82 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '2aaef3c8-05f3-441e-b2ac-969ccd8305e3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5e5c975dd8b4a088c217b330c95ba7b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b33a6e6b-398f-4a16-8a3e-aaf31f2da471', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.190'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8076cdc-415f-401f-a0fe-b3be303ae9cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:16:31 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:31.731 154842 INFO neutron.agent.ovn.metadata.agent [-] Port f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 in datapath bfdc82ee-37dc-4f9b-b711-c6c9f87b443a unbound from our chassis
Dec 13 04:16:31 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:31.733 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:16:31 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:31.735 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[66c5936b-e2d3-42e4-874e-ee39126a3e2b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:31 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:31.737 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a namespace which is not needed anymore
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.740 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Dec 13 04:16:31 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000d.scope: Consumed 14.409s CPU time.
Dec 13 04:16:31 compute-0 systemd-machined[206767]: Machine qemu-13-instance-0000000d terminated.
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.870 243708 INFO nova.virt.libvirt.driver [-] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Instance destroyed successfully.
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.871 243708 DEBUG nova.objects.instance [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'resources' on Instance uuid 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.880 243708 DEBUG nova.virt.libvirt.vif [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:15:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1686365483',display_name='tempest-VolumesBackupsTest-instance-1686365483',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1686365483',id=13,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPcxCnDKalj3ZCx/r34Y9K6dnne776++fIGdwWplVqohQ3I/DHmtuoRJVigp3qXQGgRVDeVYbjo/YmXKC5Vi6CRSLI5U6WXwqiLwzp0VRz3IkODIcMIPRXgl6Zvzk6LmZA==',key_name='tempest-keypair-472346263',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:15:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5e5c975dd8b4a088c217b330c95ba7b',ramdisk_id='',reservation_id='r-9nlko5fq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-951676606',owner_user_name='tempest-VolumesBackupsTest-951676606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:15:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11e9a1a42b4b4d679693155d71445247',uuid=2aaef3c8-05f3-441e-b2ac-969ccd8305e3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.882 243708 DEBUG nova.network.os_vif_util [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converting VIF {"id": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "address": "fa:16:3e:ab:02:82", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.190", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf9a6c5d4-c4", "ovs_interfaceid": "f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.883 243708 DEBUG nova.network.os_vif_util [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ab:02:82,bridge_name='br-int',has_traffic_filtering=True,id=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9a6c5d4-c4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.883 243708 DEBUG os_vif [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:02:82,bridge_name='br-int',has_traffic_filtering=True,id=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9a6c5d4-c4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:16:31 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [NOTICE]   (262748) : haproxy version is 2.8.14-c23fe91
Dec 13 04:16:31 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [NOTICE]   (262748) : path to executable is /usr/sbin/haproxy
Dec 13 04:16:31 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [WARNING]  (262748) : Exiting Master process...
Dec 13 04:16:31 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [WARNING]  (262748) : Exiting Master process...
Dec 13 04:16:31 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [ALERT]    (262748) : Current worker (262750) exited with code 143 (Terminated)
Dec 13 04:16:31 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[262744]: [WARNING]  (262748) : All workers exited. Exiting... (0)
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.892 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.892 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf9a6c5d4-c4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:31 compute-0 systemd[1]: libpod-82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6.scope: Deactivated successfully.
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.896 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.899 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:16:31 compute-0 nova_compute[243704]: 2025-12-13 04:16:31.900 243708 INFO os_vif [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ab:02:82,bridge_name='br-int',has_traffic_filtering=True,id=f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf9a6c5d4-c4')
Dec 13 04:16:31 compute-0 podman[263151]: 2025-12-13 04:16:31.901891693 +0000 UTC m=+0.048783611 container died 82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true)
Dec 13 04:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6-userdata-shm.mount: Deactivated successfully.
Dec 13 04:16:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-7c29424d3277f542f36d6eb3a5dc1d325fac043c05ab3df01b09d992238b3085-merged.mount: Deactivated successfully.
Dec 13 04:16:31 compute-0 podman[263151]: 2025-12-13 04:16:31.960258632 +0000 UTC m=+0.107150540 container cleanup 82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 04:16:31 compute-0 systemd[1]: libpod-conmon-82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6.scope: Deactivated successfully.
Dec 13 04:16:32 compute-0 podman[263205]: 2025-12-13 04:16:32.206742888 +0000 UTC m=+0.219264792 container remove 82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.217 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbd98af-20e5-4ac1-9b19-dcb1edc4e8ec]: (4, ('Sat Dec 13 04:16:31 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a (82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6)\n82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6\nSat Dec 13 04:16:31 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a (82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6)\n82d3884d39285d97b5d47acf65482e1a73393da71b51ff95f4d1c189ee5ca9d6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.219 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[265d4541-69fd-4bca-bcb8-58e1c9f589c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.220 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfdc82ee-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:32 compute-0 kernel: tapbfdc82ee-30: left promiscuous mode
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.221 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.223 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.227 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[10640fa8-9b08-44b7-8571-e4d059fc7cee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.243 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.242 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[81926b64-065b-4327-b75b-0c48827ee4f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.245 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[79ee07e1-b17e-4009-9c5d-ac3c5650e523]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.267 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[caf4ae2a-bb3a-44bc-b262-52a530b30604]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 408602, 'reachable_time': 38488, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263221, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.273 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:16:32 compute-0 systemd[1]: run-netns-ovnmeta\x2dbfdc82ee\x2d37dc\x2d4f9b\x2db711\x2dc6c9f87b443a.mount: Deactivated successfully.
Dec 13 04:16:32 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:32.274 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[8b6e9e3c-3f4e-4534-96bf-30ca84f9da65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:32 compute-0 ceph-mon[75071]: pgmap v1237: 305 pgs: 305 active+clean; 261 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 1.4 MiB/s wr, 420 op/s
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.353 243708 INFO nova.virt.libvirt.driver [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Deleting instance files /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3_del
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.354 243708 INFO nova.virt.libvirt.driver [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Deletion of /var/lib/nova/instances/2aaef3c8-05f3-441e-b2ac-969ccd8305e3_del complete
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.368 243708 DEBUG nova.compute.manager [req-cbfd75ef-097e-4772-a071-8cdf7142407c req-8ee075c7-7f4e-426b-b4d7-b2995523ec40 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-vif-unplugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.368 243708 DEBUG oslo_concurrency.lockutils [req-cbfd75ef-097e-4772-a071-8cdf7142407c req-8ee075c7-7f4e-426b-b4d7-b2995523ec40 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.369 243708 DEBUG oslo_concurrency.lockutils [req-cbfd75ef-097e-4772-a071-8cdf7142407c req-8ee075c7-7f4e-426b-b4d7-b2995523ec40 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.369 243708 DEBUG oslo_concurrency.lockutils [req-cbfd75ef-097e-4772-a071-8cdf7142407c req-8ee075c7-7f4e-426b-b4d7-b2995523ec40 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.369 243708 DEBUG nova.compute.manager [req-cbfd75ef-097e-4772-a071-8cdf7142407c req-8ee075c7-7f4e-426b-b4d7-b2995523ec40 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] No waiting events found dispatching network-vif-unplugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.369 243708 DEBUG nova.compute.manager [req-cbfd75ef-097e-4772-a071-8cdf7142407c req-8ee075c7-7f4e-426b-b4d7-b2995523ec40 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-vif-unplugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.418 243708 INFO nova.compute.manager [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Took 0.80 seconds to destroy the instance on the hypervisor.
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.419 243708 DEBUG oslo.service.loopingcall [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.419 243708 DEBUG nova.compute.manager [-] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.419 243708 DEBUG nova.network.neutron [-] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:16:32 compute-0 nova_compute[243704]: 2025-12-13 04:16:32.537 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1238: 305 pgs: 305 active+clean; 261 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 320 op/s
Dec 13 04:16:33 compute-0 nova_compute[243704]: 2025-12-13 04:16:33.792 243708 DEBUG nova.network.neutron [-] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:16:33 compute-0 nova_compute[243704]: 2025-12-13 04:16:33.815 243708 INFO nova.compute.manager [-] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Took 1.40 seconds to deallocate network for instance.
Dec 13 04:16:33 compute-0 nova_compute[243704]: 2025-12-13 04:16:33.862 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:33 compute-0 nova_compute[243704]: 2025-12-13 04:16:33.862 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.074 243708 DEBUG oslo_concurrency.processutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.446 243708 DEBUG nova.compute.manager [req-66399569-713d-4608-8ba7-a38a94762c1d req-a94aaf50-2914-4103-b1b6-eed8d2cc3210 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.447 243708 DEBUG oslo_concurrency.lockutils [req-66399569-713d-4608-8ba7-a38a94762c1d req-a94aaf50-2914-4103-b1b6-eed8d2cc3210 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.447 243708 DEBUG oslo_concurrency.lockutils [req-66399569-713d-4608-8ba7-a38a94762c1d req-a94aaf50-2914-4103-b1b6-eed8d2cc3210 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.447 243708 DEBUG oslo_concurrency.lockutils [req-66399569-713d-4608-8ba7-a38a94762c1d req-a94aaf50-2914-4103-b1b6-eed8d2cc3210 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.448 243708 DEBUG nova.compute.manager [req-66399569-713d-4608-8ba7-a38a94762c1d req-a94aaf50-2914-4103-b1b6-eed8d2cc3210 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] No waiting events found dispatching network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.448 243708 WARNING nova.compute.manager [req-66399569-713d-4608-8ba7-a38a94762c1d req-a94aaf50-2914-4103-b1b6-eed8d2cc3210 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received unexpected event network-vif-plugged-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 for instance with vm_state deleted and task_state None.
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.448 243708 DEBUG nova.compute.manager [req-66399569-713d-4608-8ba7-a38a94762c1d req-a94aaf50-2914-4103-b1b6-eed8d2cc3210 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Received event network-vif-deleted-f9a6c5d4-c4f9-4ed2-bfa8-0c3861cf78c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623787519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:34 compute-0 ceph-mon[75071]: pgmap v1238: 305 pgs: 305 active+clean; 261 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.1 MiB/s wr, 320 op/s
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.771 243708 DEBUG oslo_concurrency.processutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.778 243708 DEBUG nova.compute.provider_tree [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:16:34 compute-0 nova_compute[243704]: 2025-12-13 04:16:34.921 243708 DEBUG nova.scheduler.client.report [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:16:35 compute-0 nova_compute[243704]: 2025-12-13 04:16:35.002 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:35 compute-0 nova_compute[243704]: 2025-12-13 04:16:35.047 243708 INFO nova.scheduler.client.report [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Deleted allocations for instance 2aaef3c8-05f3-441e-b2ac-969ccd8305e3
Dec 13 04:16:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:35.091 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:35.091 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:35.092 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:35 compute-0 nova_compute[243704]: 2025-12-13 04:16:35.352 243708 DEBUG oslo_concurrency.lockutils [None req-6e3fac60-fbd1-459c-b87c-0e45434e298e 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "2aaef3c8-05f3-441e-b2ac-969ccd8305e3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1239: 305 pgs: 305 active+clean; 185 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 450 KiB/s rd, 107 KiB/s wr, 282 op/s
Dec 13 04:16:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Dec 13 04:16:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1623787519' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Dec 13 04:16:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Dec 13 04:16:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Dec 13 04:16:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Dec 13 04:16:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Dec 13 04:16:36 compute-0 ceph-mon[75071]: pgmap v1239: 305 pgs: 305 active+clean; 185 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 450 KiB/s rd, 107 KiB/s wr, 282 op/s
Dec 13 04:16:36 compute-0 ceph-mon[75071]: osdmap e263: 3 total, 3 up, 3 in
Dec 13 04:16:36 compute-0 ceph-mon[75071]: osdmap e264: 3 total, 3 up, 3 in
Dec 13 04:16:36 compute-0 nova_compute[243704]: 2025-12-13 04:16:36.894 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 185 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 102 KiB/s wr, 176 op/s
Dec 13 04:16:37 compute-0 nova_compute[243704]: 2025-12-13 04:16:37.539 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Dec 13 04:16:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Dec 13 04:16:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Dec 13 04:16:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1349731432' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1349731432' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:38 compute-0 sudo[263244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:16:38 compute-0 sudo[263244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:38 compute-0 sudo[263244]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:38 compute-0 sudo[263275]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:16:38 compute-0 podman[263268]: 2025-12-13 04:16:38.469386245 +0000 UTC m=+0.060232559 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 04:16:38 compute-0 sudo[263275]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:38 compute-0 ceph-mon[75071]: pgmap v1242: 305 pgs: 305 active+clean; 185 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 186 KiB/s rd, 102 KiB/s wr, 176 op/s
Dec 13 04:16:38 compute-0 ceph-mon[75071]: osdmap e265: 3 total, 3 up, 3 in
Dec 13 04:16:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1349731432' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1349731432' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:39 compute-0 sudo[263275]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:16:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:16:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:16:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:16:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:16:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:16:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:16:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:16:39 compute-0 sudo[263346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:16:39 compute-0 sudo[263346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:39 compute-0 sudo[263346]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:39 compute-0 sudo[263371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:16:39 compute-0 sudo[263371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 135 KiB/s wr, 313 op/s
Dec 13 04:16:39 compute-0 podman[263409]: 2025-12-13 04:16:39.530439474 +0000 UTC m=+0.056446977 container create ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bose, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:16:39 compute-0 systemd[1]: Started libpod-conmon-ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd.scope.
Dec 13 04:16:39 compute-0 podman[263409]: 2025-12-13 04:16:39.502617582 +0000 UTC m=+0.028625125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:39 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:16:39 compute-0 podman[263409]: 2025-12-13 04:16:39.632617358 +0000 UTC m=+0.158624851 container init ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bose, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:16:39 compute-0 podman[263409]: 2025-12-13 04:16:39.642585997 +0000 UTC m=+0.168593470 container start ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bose, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:16:39 compute-0 podman[263409]: 2025-12-13 04:16:39.646553344 +0000 UTC m=+0.172560857 container attach ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bose, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:16:39 compute-0 charming_bose[263425]: 167 167
Dec 13 04:16:39 compute-0 systemd[1]: libpod-ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd.scope: Deactivated successfully.
Dec 13 04:16:39 compute-0 conmon[263425]: conmon ec03e909aca25e9577b6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd.scope/container/memory.events
Dec 13 04:16:39 compute-0 podman[263409]: 2025-12-13 04:16:39.651315883 +0000 UTC m=+0.177323396 container died ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bose, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:16:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-69ee558ef68add2afb93df532ddf85b0e7721b489711c1fa4739fcb34ce2b81e-merged.mount: Deactivated successfully.
Dec 13 04:16:39 compute-0 podman[263409]: 2025-12-13 04:16:39.690647457 +0000 UTC m=+0.216654940 container remove ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_bose, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:16:39 compute-0 systemd[1]: libpod-conmon-ec03e909aca25e9577b65bae2304fb63f28ed40db712a04fe4bee855007014bd.scope: Deactivated successfully.
Dec 13 04:16:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:16:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:16:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:16:39 compute-0 podman[263447]: 2025-12-13 04:16:39.937086373 +0000 UTC m=+0.075541495 container create e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_jennings, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:39 compute-0 systemd[1]: Started libpod-conmon-e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3.scope.
Dec 13 04:16:39 compute-0 podman[263447]: 2025-12-13 04:16:39.904367638 +0000 UTC m=+0.042822840 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f163c46fc42862f1a9da7c02b9b93e3b71c484a0ca3876dd175ac3d6667179/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f163c46fc42862f1a9da7c02b9b93e3b71c484a0ca3876dd175ac3d6667179/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f163c46fc42862f1a9da7c02b9b93e3b71c484a0ca3876dd175ac3d6667179/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f163c46fc42862f1a9da7c02b9b93e3b71c484a0ca3876dd175ac3d6667179/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06f163c46fc42862f1a9da7c02b9b93e3b71c484a0ca3876dd175ac3d6667179/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:40 compute-0 podman[263447]: 2025-12-13 04:16:40.032328519 +0000 UTC m=+0.170783651 container init e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_jennings, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:16:40 compute-0 podman[263447]: 2025-12-13 04:16:40.039429211 +0000 UTC m=+0.177884313 container start e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:16:40 compute-0 podman[263447]: 2025-12-13 04:16:40.043197553 +0000 UTC m=+0.181652665 container attach e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_jennings, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:16:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Dec 13 04:16:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Dec 13 04:16:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Dec 13 04:16:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1868754198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1868754198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:16:40
Dec 13 04:16:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:16:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:16:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['backups', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root']
Dec 13 04:16:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:16:40 compute-0 crazy_jennings[263464]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:16:40 compute-0 crazy_jennings[263464]: --> All data devices are unavailable
Dec 13 04:16:40 compute-0 systemd[1]: libpod-e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3.scope: Deactivated successfully.
Dec 13 04:16:40 compute-0 podman[263447]: 2025-12-13 04:16:40.671578678 +0000 UTC m=+0.810033800 container died e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_jennings, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-06f163c46fc42862f1a9da7c02b9b93e3b71c484a0ca3876dd175ac3d6667179-merged.mount: Deactivated successfully.
Dec 13 04:16:40 compute-0 podman[263447]: 2025-12-13 04:16:40.716569755 +0000 UTC m=+0.855024867 container remove e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_jennings, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:16:40 compute-0 systemd[1]: libpod-conmon-e71423e8823fa87a5738a142f26f1b481f978f6b1c2009710ada4620fbe346a3.scope: Deactivated successfully.
Dec 13 04:16:40 compute-0 sudo[263371]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:40 compute-0 ceph-mon[75071]: pgmap v1244: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 294 KiB/s rd, 135 KiB/s wr, 313 op/s
Dec 13 04:16:40 compute-0 ceph-mon[75071]: osdmap e266: 3 total, 3 up, 3 in
Dec 13 04:16:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1868754198' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1868754198' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:40 compute-0 sudo[263495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:16:40 compute-0 sudo[263495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:40 compute-0 sudo[263495]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:40 compute-0 sudo[263520]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:16:40 compute-0 sudo[263520]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:41 compute-0 podman[263556]: 2025-12-13 04:16:41.149159526 +0000 UTC m=+0.042153651 container create 562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_kalam, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:41 compute-0 systemd[1]: Started libpod-conmon-562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3.scope.
Dec 13 04:16:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e266 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Dec 13 04:16:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Dec 13 04:16:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Dec 13 04:16:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:16:41 compute-0 podman[263556]: 2025-12-13 04:16:41.132318531 +0000 UTC m=+0.025312686 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:41 compute-0 podman[263556]: 2025-12-13 04:16:41.228397879 +0000 UTC m=+0.121392004 container init 562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:16:41 compute-0 podman[263556]: 2025-12-13 04:16:41.234136764 +0000 UTC m=+0.127130889 container start 562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:16:41 compute-0 podman[263556]: 2025-12-13 04:16:41.237606618 +0000 UTC m=+0.130600743 container attach 562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_kalam, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:41 compute-0 gifted_kalam[263572]: 167 167
Dec 13 04:16:41 compute-0 systemd[1]: libpod-562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3.scope: Deactivated successfully.
Dec 13 04:16:41 compute-0 podman[263556]: 2025-12-13 04:16:41.239413497 +0000 UTC m=+0.132407622 container died 562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c906fa11339a8228ea3a76d292e941cdb3da01c933f6383b7d7f2c29f7e689cf-merged.mount: Deactivated successfully.
Dec 13 04:16:41 compute-0 podman[263556]: 2025-12-13 04:16:41.278464233 +0000 UTC m=+0.171458348 container remove 562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:16:41 compute-0 systemd[1]: libpod-conmon-562f013d8963bb0d4d7090e9c28d2c4c6d041e150cca1863d695fe5d9bfa67f3.scope: Deactivated successfully.
Dec 13 04:16:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/409434410' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/409434410' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:41 compute-0 podman[263597]: 2025-12-13 04:16:41.460644241 +0000 UTC m=+0.039702715 container create e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:41 compute-0 systemd[1]: Started libpod-conmon-e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae.scope.
Dec 13 04:16:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 32 KiB/s wr, 143 op/s
Dec 13 04:16:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d475ace8612ba4e980df49cd6831c43bf22acd01ce24f026848c27fd03afaa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d475ace8612ba4e980df49cd6831c43bf22acd01ce24f026848c27fd03afaa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d475ace8612ba4e980df49cd6831c43bf22acd01ce24f026848c27fd03afaa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d475ace8612ba4e980df49cd6831c43bf22acd01ce24f026848c27fd03afaa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:41 compute-0 podman[263597]: 2025-12-13 04:16:41.442384538 +0000 UTC m=+0.021443052 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:41 compute-0 podman[263597]: 2025-12-13 04:16:41.550382958 +0000 UTC m=+0.129441452 container init e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:16:41 compute-0 podman[263597]: 2025-12-13 04:16:41.558184009 +0000 UTC m=+0.137242483 container start e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:16:41 compute-0 podman[263597]: 2025-12-13 04:16:41.561855828 +0000 UTC m=+0.140914442 container attach e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]: {
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:     "0": [
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:         {
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "devices": [
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "/dev/loop3"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             ],
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_name": "ceph_lv0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_size": "21470642176",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "name": "ceph_lv0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "tags": {
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cluster_name": "ceph",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.crush_device_class": "",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.encrypted": "0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.objectstore": "bluestore",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osd_id": "0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.type": "block",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.vdo": "0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.with_tpm": "0"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             },
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "type": "block",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "vg_name": "ceph_vg0"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:         }
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:     ],
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:     "1": [
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:         {
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "devices": [
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "/dev/loop4"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             ],
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_name": "ceph_lv1",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_size": "21470642176",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "name": "ceph_lv1",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "tags": {
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cluster_name": "ceph",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.crush_device_class": "",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.encrypted": "0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.objectstore": "bluestore",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osd_id": "1",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.type": "block",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.vdo": "0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.with_tpm": "0"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             },
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "type": "block",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "vg_name": "ceph_vg1"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:         }
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:     ],
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:     "2": [
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:         {
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "devices": [
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "/dev/loop5"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             ],
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_name": "ceph_lv2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_size": "21470642176",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "name": "ceph_lv2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "tags": {
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.cluster_name": "ceph",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.crush_device_class": "",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.encrypted": "0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.objectstore": "bluestore",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osd_id": "2",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.type": "block",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.vdo": "0",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:                 "ceph.with_tpm": "0"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             },
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "type": "block",
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:             "vg_name": "ceph_vg2"
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:         }
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]:     ]
Dec 13 04:16:41 compute-0 flamboyant_mcclintock[263614]: }
Dec 13 04:16:41 compute-0 systemd[1]: libpod-e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae.scope: Deactivated successfully.
Dec 13 04:16:41 compute-0 nova_compute[243704]: 2025-12-13 04:16:41.898 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:41 compute-0 podman[263623]: 2025-12-13 04:16:41.913176001 +0000 UTC m=+0.027445053 container died e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:16:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d475ace8612ba4e980df49cd6831c43bf22acd01ce24f026848c27fd03afaa3-merged.mount: Deactivated successfully.
Dec 13 04:16:41 compute-0 podman[263623]: 2025-12-13 04:16:41.952028661 +0000 UTC m=+0.066297733 container remove e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_mcclintock, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:41 compute-0 systemd[1]: libpod-conmon-e0c4bb1f0e136fa0984d92b67f91e8e05035c97f4d24ff6f438142c2973c1bae.scope: Deactivated successfully.
Dec 13 04:16:42 compute-0 sudo[263520]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:42 compute-0 sudo[263636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:16:42 compute-0 sudo[263636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:42 compute-0 sudo[263636]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:42 compute-0 sudo[263661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:16:42 compute-0 sudo[263661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:16:42 compute-0 podman[263699]: 2025-12-13 04:16:42.4108088 +0000 UTC m=+0.029238191 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:42 compute-0 nova_compute[243704]: 2025-12-13 04:16:42.605 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:16:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:16:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1248: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 28 KiB/s wr, 127 op/s
Dec 13 04:16:44 compute-0 podman[263699]: 2025-12-13 04:16:44.142294952 +0000 UTC m=+1.760724293 container create fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pike, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:16:44 compute-0 ceph-mon[75071]: osdmap e267: 3 total, 3 up, 3 in
Dec 13 04:16:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/409434410' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/409434410' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:44 compute-0 ceph-mon[75071]: pgmap v1247: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 110 KiB/s rd, 32 KiB/s wr, 143 op/s
Dec 13 04:16:44 compute-0 systemd[1]: Started libpod-conmon-fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af.scope.
Dec 13 04:16:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:16:44 compute-0 podman[263699]: 2025-12-13 04:16:44.240022445 +0000 UTC m=+1.858451816 container init fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pike, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:44 compute-0 podman[263699]: 2025-12-13 04:16:44.247373394 +0000 UTC m=+1.865802775 container start fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pike, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:16:44 compute-0 youthful_pike[263716]: 167 167
Dec 13 04:16:44 compute-0 podman[263699]: 2025-12-13 04:16:44.25239976 +0000 UTC m=+1.870829141 container attach fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:16:44 compute-0 systemd[1]: libpod-fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af.scope: Deactivated successfully.
Dec 13 04:16:44 compute-0 podman[263699]: 2025-12-13 04:16:44.253940681 +0000 UTC m=+1.872370062 container died fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pike, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:16:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f84a5b2174a44b807de361c059828ddbbd51de826665fba8547c587b06217f-merged.mount: Deactivated successfully.
Dec 13 04:16:44 compute-0 podman[263699]: 2025-12-13 04:16:44.293604214 +0000 UTC m=+1.912033555 container remove fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_pike, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:16:44 compute-0 systemd[1]: libpod-conmon-fe9bb83da027c443adfb0a3d99ac2eb8653101ccd572b4a67af645e2b621a1af.scope: Deactivated successfully.
Dec 13 04:16:44 compute-0 podman[263738]: 2025-12-13 04:16:44.489593095 +0000 UTC m=+0.049918921 container create 9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True)
Dec 13 04:16:44 compute-0 systemd[1]: Started libpod-conmon-9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01.scope.
Dec 13 04:16:44 compute-0 podman[263738]: 2025-12-13 04:16:44.46829626 +0000 UTC m=+0.028622006 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:16:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143f734901a0e17f318181e183faf6c6f038e03435c79ce9230d04f415118a53/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143f734901a0e17f318181e183faf6c6f038e03435c79ce9230d04f415118a53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143f734901a0e17f318181e183faf6c6f038e03435c79ce9230d04f415118a53/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/143f734901a0e17f318181e183faf6c6f038e03435c79ce9230d04f415118a53/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:16:44 compute-0 podman[263738]: 2025-12-13 04:16:44.595481179 +0000 UTC m=+0.155806925 container init 9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wu, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:16:44 compute-0 podman[263738]: 2025-12-13 04:16:44.610499005 +0000 UTC m=+0.170824721 container start 9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wu, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:16:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170887923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170887923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:44 compute-0 podman[263738]: 2025-12-13 04:16:44.761305834 +0000 UTC m=+0.321631550 container attach 9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wu, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:16:45 compute-0 ceph-mon[75071]: pgmap v1248: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 98 KiB/s rd, 28 KiB/s wr, 127 op/s
Dec 13 04:16:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3170887923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3170887923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:45 compute-0 lvm[263834]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:16:45 compute-0 lvm[263834]: VG ceph_vg1 finished
Dec 13 04:16:45 compute-0 lvm[263837]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:16:45 compute-0 lvm[263837]: VG ceph_vg2 finished
Dec 13 04:16:45 compute-0 lvm[263833]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:16:45 compute-0 lvm[263833]: VG ceph_vg0 finished
Dec 13 04:16:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1249: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 25 KiB/s wr, 215 op/s
Dec 13 04:16:45 compute-0 keen_wu[263755]: {}
Dec 13 04:16:45 compute-0 systemd[1]: libpod-9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01.scope: Deactivated successfully.
Dec 13 04:16:45 compute-0 systemd[1]: libpod-9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01.scope: Consumed 1.556s CPU time.
Dec 13 04:16:45 compute-0 podman[263738]: 2025-12-13 04:16:45.568319582 +0000 UTC m=+1.128645298 container died 9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wu, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-143f734901a0e17f318181e183faf6c6f038e03435c79ce9230d04f415118a53-merged.mount: Deactivated successfully.
Dec 13 04:16:45 compute-0 podman[263738]: 2025-12-13 04:16:45.612277071 +0000 UTC m=+1.172602807 container remove 9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_wu, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:45 compute-0 systemd[1]: libpod-conmon-9b3f348a8494b6fe5b5b658777e79fb311f2e64f214b001ad2648b5279c4ed01.scope: Deactivated successfully.
Dec 13 04:16:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1552112805' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1552112805' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:45 compute-0 sudo[263661]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:16:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:16:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:16:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:16:45 compute-0 sudo[263851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:16:45 compute-0 sudo[263851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:16:45 compute-0 sudo[263851]: pam_unix(sudo:session): session closed for user root
Dec 13 04:16:46 compute-0 ceph-mon[75071]: pgmap v1249: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 25 KiB/s wr, 215 op/s
Dec 13 04:16:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1552112805' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1552112805' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:16:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:16:46 compute-0 podman[263876]: 2025-12-13 04:16:46.191302933 +0000 UTC m=+0.077040836 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:16:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Dec 13 04:16:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Dec 13 04:16:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.831 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.832 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.832 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.832 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.832 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.834 243708 INFO nova.compute.manager [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Terminating instance
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.835 243708 DEBUG nova.compute.manager [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.867 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599391.8655064, 2aaef3c8-05f3-441e-b2ac-969ccd8305e3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.867 243708 INFO nova.compute.manager [-] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] VM Stopped (Lifecycle Event)
Dec 13 04:16:46 compute-0 kernel: tapa473f08f-b0 (unregistering): left promiscuous mode
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.887 243708 DEBUG nova.compute.manager [None req-bf0720ae-4e57-47ee-a730-2dc89f9b42b5 - - - - - -] [instance: 2aaef3c8-05f3-441e-b2ac-969ccd8305e3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:16:46 compute-0 NetworkManager[48899]: <info>  [1765599406.8943] device (tapa473f08f-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:16:46 compute-0 ovn_controller[145204]: 2025-12-13T04:16:46Z|00147|binding|INFO|Releasing lport a473f08f-b0da-4f57-b165-cda72ecc69a8 from this chassis (sb_readonly=0)
Dec 13 04:16:46 compute-0 ovn_controller[145204]: 2025-12-13T04:16:46Z|00148|binding|INFO|Setting lport a473f08f-b0da-4f57-b165-cda72ecc69a8 down in Southbound
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.898 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:46 compute-0 ovn_controller[145204]: 2025-12-13T04:16:46Z|00149|binding|INFO|Removing iface tapa473f08f-b0 ovn-installed in OVS
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.901 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:46.913 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:85:04:82 10.100.0.6'], port_security=['fa:16:3e:85:04:82 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '83d8a96f-501e-4c0b-aed8-2099abf55b94', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8d81fb53-f8e9-4a14-8f6f-c86adc369008', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=a473f08f-b0da-4f57-b165-cda72ecc69a8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:16:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:46.914 154842 INFO neutron.agent.ovn.metadata.agent [-] Port a473f08f-b0da-4f57-b165-cda72ecc69a8 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:16:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:46.916 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:16:46 compute-0 nova_compute[243704]: 2025-12-13 04:16:46.915 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:46.946 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[048be5d5-e089-4305-8347-ab896cb239f8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:46 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Dec 13 04:16:46 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000e.scope: Consumed 16.313s CPU time.
Dec 13 04:16:46 compute-0 systemd-machined[206767]: Machine qemu-14-instance-0000000e terminated.
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:46.999 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[3e8fdee3-b595-4cf8-ad43-fcab580ee615]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.003 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[138b1bd7-b1e5-4c9b-8b2c-816b88230106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.040 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[bc215ad4-f25b-4c1b-8d4d-38011c398503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.062 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d16dc43e-e8f1-483e-a42e-f5d46779a779]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 700, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405774, 'reachable_time': 33450, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263907, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.065 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.071 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.082 243708 INFO nova.virt.libvirt.driver [-] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Instance destroyed successfully.
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.083 243708 DEBUG nova.objects.instance [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'resources' on Instance uuid 83d8a96f-501e-4c0b-aed8-2099abf55b94 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.089 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[82253afe-c010-459d-ac21-f72e7632f495]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405791, 'tstamp': 405791}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263913, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 405794, 'tstamp': 405794}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 263913, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.090 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.092 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.096 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.096 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.097 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.097 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:47.097 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.102 243708 DEBUG nova.virt.libvirt.vif [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:16:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-image-snapshot-server-782771131',display_name='tempest-TestVolumeBootPattern-image-snapshot-server-782771131',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-image-snapshot-server-782771131',id=14,image_ref='651f53de-db0a-4cb7-9fa7-a760e9acfe9e',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHAHwcQsG0mq0B6zDm1P2JmV0qZHUx42rz2Ur40ayveosdP8UqDZ1iSQhU1MtqmgtPFIXr/WR/MzP0cUIuReWE77iI+Uo5KnFVsmoHYk0k6bPkxnfBA0F02V5c2ItG0FIQ==',key_name='tempest-keypair-1178564669',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:16:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-b0uyddx2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_bdm_v2='True',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_project_name='tempest-TestVolumeBootPattern-236547311',image_owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member',image_root_device_name='/dev/vda',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:16:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=83d8a96f-501e-4c0b-aed8-2099abf55b94,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.102 243708 DEBUG nova.network.os_vif_util [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "address": "fa:16:3e:85:04:82", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa473f08f-b0", "ovs_interfaceid": "a473f08f-b0da-4f57-b165-cda72ecc69a8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.103 243708 DEBUG nova.network.os_vif_util [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:85:04:82,bridge_name='br-int',has_traffic_filtering=True,id=a473f08f-b0da-4f57-b165-cda72ecc69a8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa473f08f-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.104 243708 DEBUG os_vif [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:04:82,bridge_name='br-int',has_traffic_filtering=True,id=a473f08f-b0da-4f57-b165-cda72ecc69a8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa473f08f-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.107 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.107 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa473f08f-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.109 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.111 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.113 243708 INFO os_vif [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:85:04:82,bridge_name='br-int',has_traffic_filtering=True,id=a473f08f-b0da-4f57-b165-cda72ecc69a8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa473f08f-b0')
Dec 13 04:16:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1125115521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.137 243708 DEBUG nova.compute.manager [req-6d866e59-ad90-442d-a3c1-be1c4cfb483b req-b70f382f-7ee3-432c-9c68-2df510e256e1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-vif-unplugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.137 243708 DEBUG oslo_concurrency.lockutils [req-6d866e59-ad90-442d-a3c1-be1c4cfb483b req-b70f382f-7ee3-432c-9c68-2df510e256e1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.138 243708 DEBUG oslo_concurrency.lockutils [req-6d866e59-ad90-442d-a3c1-be1c4cfb483b req-b70f382f-7ee3-432c-9c68-2df510e256e1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.138 243708 DEBUG oslo_concurrency.lockutils [req-6d866e59-ad90-442d-a3c1-be1c4cfb483b req-b70f382f-7ee3-432c-9c68-2df510e256e1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.138 243708 DEBUG nova.compute.manager [req-6d866e59-ad90-442d-a3c1-be1c4cfb483b req-b70f382f-7ee3-432c-9c68-2df510e256e1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] No waiting events found dispatching network-vif-unplugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.138 243708 DEBUG nova.compute.manager [req-6d866e59-ad90-442d-a3c1-be1c4cfb483b req-b70f382f-7ee3-432c-9c68-2df510e256e1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-vif-unplugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:16:47 compute-0 ceph-mon[75071]: osdmap e268: 3 total, 3 up, 3 in
Dec 13 04:16:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1125115521' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.261 243708 INFO nova.virt.libvirt.driver [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Deleting instance files /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94_del
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.262 243708 INFO nova.virt.libvirt.driver [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Deletion of /var/lib/nova/instances/83d8a96f-501e-4c0b-aed8-2099abf55b94_del complete
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.332 243708 INFO nova.compute.manager [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Took 0.50 seconds to destroy the instance on the hypervisor.
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.333 243708 DEBUG oslo.service.loopingcall [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.334 243708 DEBUG nova.compute.manager [-] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.334 243708 DEBUG nova.network.neutron [-] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:16:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1251: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 KiB/s wr, 121 op/s
Dec 13 04:16:47 compute-0 nova_compute[243704]: 2025-12-13 04:16:47.608 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Dec 13 04:16:48 compute-0 ceph-mon[75071]: pgmap v1251: 305 pgs: 305 active+clean; 185 MiB data, 345 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 3.2 KiB/s wr, 121 op/s
Dec 13 04:16:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Dec 13 04:16:48 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Dec 13 04:16:48 compute-0 nova_compute[243704]: 2025-12-13 04:16:48.580 243708 DEBUG nova.network.neutron [-] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:16:48 compute-0 nova_compute[243704]: 2025-12-13 04:16:48.621 243708 INFO nova.compute.manager [-] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Took 1.29 seconds to deallocate network for instance.
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.018 243708 INFO nova.compute.manager [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Took 0.40 seconds to detach 1 volumes for instance.
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.019 243708 DEBUG nova.compute.manager [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Deleting volume: 2a588a76-f680-4e66-9293-f7c13af1bbe9 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.207 243708 DEBUG nova.compute.manager [req-0059b82c-072c-4ec8-900a-35ab98036cdd req-d78c9541-f89a-4a3c-af99-2c1678dcadcb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.208 243708 DEBUG oslo_concurrency.lockutils [req-0059b82c-072c-4ec8-900a-35ab98036cdd req-d78c9541-f89a-4a3c-af99-2c1678dcadcb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.208 243708 DEBUG oslo_concurrency.lockutils [req-0059b82c-072c-4ec8-900a-35ab98036cdd req-d78c9541-f89a-4a3c-af99-2c1678dcadcb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.208 243708 DEBUG oslo_concurrency.lockutils [req-0059b82c-072c-4ec8-900a-35ab98036cdd req-d78c9541-f89a-4a3c-af99-2c1678dcadcb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.209 243708 DEBUG nova.compute.manager [req-0059b82c-072c-4ec8-900a-35ab98036cdd req-d78c9541-f89a-4a3c-af99-2c1678dcadcb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] No waiting events found dispatching network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.209 243708 WARNING nova.compute.manager [req-0059b82c-072c-4ec8-900a-35ab98036cdd req-d78c9541-f89a-4a3c-af99-2c1678dcadcb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received unexpected event network-vif-plugged-a473f08f-b0da-4f57-b165-cda72ecc69a8 for instance with vm_state active and task_state deleting.
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.209 243708 DEBUG nova.compute.manager [req-0059b82c-072c-4ec8-900a-35ab98036cdd req-d78c9541-f89a-4a3c-af99-2c1678dcadcb 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Received event network-vif-deleted-a473f08f-b0da-4f57-b165-cda72ecc69a8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.214 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.214 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Dec 13 04:16:49 compute-0 ceph-mon[75071]: osdmap e269: 3 total, 3 up, 3 in
Dec 13 04:16:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Dec 13 04:16:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.306 243708 DEBUG oslo_concurrency.processutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 332 op/s
Dec 13 04:16:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650395180' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650395180' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/597897460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.884 243708 DEBUG oslo_concurrency.processutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.893 243708 DEBUG nova.compute.provider_tree [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.915 243708 DEBUG nova.scheduler.client.report [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.951 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.737s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:49 compute-0 nova_compute[243704]: 2025-12-13 04:16:49.980 243708 INFO nova.scheduler.client.report [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Deleted allocations for instance 83d8a96f-501e-4c0b-aed8-2099abf55b94
Dec 13 04:16:50 compute-0 nova_compute[243704]: 2025-12-13 04:16:50.065 243708 DEBUG oslo_concurrency.lockutils [None req-4d780e61-3c28-4c1c-a036-7d56cf600a87 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "83d8a96f-501e-4c0b-aed8-2099abf55b94" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:50 compute-0 ceph-mon[75071]: osdmap e270: 3 total, 3 up, 3 in
Dec 13 04:16:50 compute-0 ceph-mon[75071]: pgmap v1254: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.1 MiB/s wr, 332 op/s
Dec 13 04:16:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1650395180' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1650395180' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/597897460' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Dec 13 04:16:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Dec 13 04:16:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Dec 13 04:16:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1256: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 8.0 MiB/s wr, 205 op/s
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.735 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.736 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.736 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.737 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.737 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.738 243708 INFO nova.compute.manager [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Terminating instance
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.739 243708 DEBUG nova.compute.manager [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:16:51 compute-0 kernel: tapfda256aa-ac (unregistering): left promiscuous mode
Dec 13 04:16:51 compute-0 NetworkManager[48899]: <info>  [1765599411.8112] device (tapfda256aa-ac): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:16:51 compute-0 ovn_controller[145204]: 2025-12-13T04:16:51Z|00150|binding|INFO|Releasing lport fda256aa-ac14-4ec9-a507-3553417887b8 from this chassis (sb_readonly=0)
Dec 13 04:16:51 compute-0 ovn_controller[145204]: 2025-12-13T04:16:51Z|00151|binding|INFO|Setting lport fda256aa-ac14-4ec9-a507-3553417887b8 down in Southbound
Dec 13 04:16:51 compute-0 ovn_controller[145204]: 2025-12-13T04:16:51Z|00152|binding|INFO|Removing iface tapfda256aa-ac ovn-installed in OVS
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.821 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:51.833 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:1a:6e 10.100.0.7'], port_security=['fa:16:3e:9f:1a:6e 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '49ec6453-af58-4bf0-89f5-4faf5d3a92c5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2b4bcbae-c530-4398-b94f-1e1a32150108', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.236'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=fda256aa-ac14-4ec9-a507-3553417887b8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:16:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:51.834 154842 INFO neutron.agent.ovn.metadata.agent [-] Port fda256aa-ac14-4ec9-a507-3553417887b8 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:16:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:51.836 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:16:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:51.837 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[cc18dc40-cf42-4c85-8269-227507070bfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:51.838 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace which is not needed anymore
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.847 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:51 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Dec 13 04:16:51 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000c.scope: Consumed 17.439s CPU time.
Dec 13 04:16:51 compute-0 systemd-machined[206767]: Machine qemu-12-instance-0000000c terminated.
Dec 13 04:16:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:16:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2695374446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.967 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.972 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:51 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[260556]: [NOTICE]   (260560) : haproxy version is 2.8.14-c23fe91
Dec 13 04:16:51 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[260556]: [NOTICE]   (260560) : path to executable is /usr/sbin/haproxy
Dec 13 04:16:51 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[260556]: [WARNING]  (260560) : Exiting Master process...
Dec 13 04:16:51 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[260556]: [ALERT]    (260560) : Current worker (260562) exited with code 143 (Terminated)
Dec 13 04:16:51 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[260556]: [WARNING]  (260560) : All workers exited. Exiting... (0)
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.983 243708 INFO nova.virt.libvirt.driver [-] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Instance destroyed successfully.
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.984 243708 DEBUG nova.objects.instance [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'resources' on Instance uuid 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:16:51 compute-0 systemd[1]: libpod-a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c.scope: Deactivated successfully.
Dec 13 04:16:51 compute-0 podman[263985]: 2025-12-13 04:16:51.991251324 +0000 UTC m=+0.055989335 container died a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.996 243708 DEBUG nova.virt.libvirt.vif [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:15:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-volume-backed-server-1783865714',display_name='tempest-TestVolumeBootPattern-volume-backed-server-1783865714',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-volume-backed-server-1783865714',id=12,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKg7NPorcUaCTyjQnIW37TGZvEszn6Z90F3FdW4GpknN1Dc3o5yalSwdp3VZdGKi0dyr27qdTMqXOX1N2njMKGjTxmHz8tCExce0u2AeVtyuttyfXlfvJnKOocQeay/Ncw==',key_name='tempest-keypair-492315221',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:15:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-hli50d0x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:15:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=49ec6453-af58-4bf0-89f5-4faf5d3a92c5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.997 243708 DEBUG nova.network.os_vif_util [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "fda256aa-ac14-4ec9-a507-3553417887b8", "address": "fa:16:3e:9f:1a:6e", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfda256aa-ac", "ovs_interfaceid": "fda256aa-ac14-4ec9-a507-3553417887b8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.998 243708 DEBUG nova.network.os_vif_util [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=fda256aa-ac14-4ec9-a507-3553417887b8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfda256aa-ac') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:16:51 compute-0 nova_compute[243704]: 2025-12-13 04:16:51.998 243708 DEBUG os_vif [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=fda256aa-ac14-4ec9-a507-3553417887b8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfda256aa-ac') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.001 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.001 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfda256aa-ac, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.003 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.005 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.009 243708 INFO os_vif [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:1a:6e,bridge_name='br-int',has_traffic_filtering=True,id=fda256aa-ac14-4ec9-a507-3553417887b8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfda256aa-ac')
Dec 13 04:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c-userdata-shm.mount: Deactivated successfully.
Dec 13 04:16:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-45bf05a9c686e4d995e36e589a6db98885179e1ab8c41be0ce3796963ee118ca-merged.mount: Deactivated successfully.
Dec 13 04:16:52 compute-0 podman[263985]: 2025-12-13 04:16:52.038181824 +0000 UTC m=+0.102919835 container cleanup a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:16:52 compute-0 systemd[1]: libpod-conmon-a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c.scope: Deactivated successfully.
Dec 13 04:16:52 compute-0 podman[264036]: 2025-12-13 04:16:52.130069659 +0000 UTC m=+0.058338319 container remove a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.136 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1e09c395-a6bf-48b7-82a0-22a025173c8d]: (4, ('Sat Dec 13 04:16:51 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c)\na357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c\nSat Dec 13 04:16:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (a357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c)\na357bb0088615e2bc7898a7caddb8d747226fa59df42c5ca1aa6869e974c5d8c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.138 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[34f1187e-359c-4b1d-aeda-31ad18b08ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.139 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.142 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:52 compute-0 kernel: tapfc553cd2-50: left promiscuous mode
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.168 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.172 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2c3922-0150-4f5b-8888-9adabc569317]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.196 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bb6c0ad5-2b3d-4ac6-804c-b9761ba1ccce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.198 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[dd4baf89-2cc9-4e93-93bb-532b669ddb0f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.217 243708 INFO nova.virt.libvirt.driver [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Deleting instance files /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5_del
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.218 243708 INFO nova.virt.libvirt.driver [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Deletion of /var/lib/nova/instances/49ec6453-af58-4bf0-89f5-4faf5d3a92c5_del complete
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.225 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3318c8bf-2482-4d71-9f18-9cd01a8f38aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 405766, 'reachable_time': 16942, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264054, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:52 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc553cd2\x2d5dd5\x2d4d87\x2d97af\x2d4b4eeb4ca0b0.mount: Deactivated successfully.
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.231 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:16:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:52.233 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[046933d3-824a-4531-b4a0-06206601251c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:16:52 compute-0 ceph-mon[75071]: osdmap e271: 3 total, 3 up, 3 in
Dec 13 04:16:52 compute-0 ceph-mon[75071]: pgmap v1256: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 8.0 MiB/s wr, 205 op/s
Dec 13 04:16:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2695374446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.280 243708 INFO nova.compute.manager [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Took 0.54 seconds to destroy the instance on the hypervisor.
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.281 243708 DEBUG oslo.service.loopingcall [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.281 243708 DEBUG nova.compute.manager [-] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.282 243708 DEBUG nova.network.neutron [-] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.369 243708 DEBUG nova.compute.manager [req-01f1a414-9414-4e3b-97ae-22a50157768d req-06c6262b-9660-4d51-84fd-9a7012e366a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-vif-unplugged-fda256aa-ac14-4ec9-a507-3553417887b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.370 243708 DEBUG oslo_concurrency.lockutils [req-01f1a414-9414-4e3b-97ae-22a50157768d req-06c6262b-9660-4d51-84fd-9a7012e366a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.371 243708 DEBUG oslo_concurrency.lockutils [req-01f1a414-9414-4e3b-97ae-22a50157768d req-06c6262b-9660-4d51-84fd-9a7012e366a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.371 243708 DEBUG oslo_concurrency.lockutils [req-01f1a414-9414-4e3b-97ae-22a50157768d req-06c6262b-9660-4d51-84fd-9a7012e366a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.371 243708 DEBUG nova.compute.manager [req-01f1a414-9414-4e3b-97ae-22a50157768d req-06c6262b-9660-4d51-84fd-9a7012e366a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] No waiting events found dispatching network-vif-unplugged-fda256aa-ac14-4ec9-a507-3553417887b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.371 243708 DEBUG nova.compute.manager [req-01f1a414-9414-4e3b-97ae-22a50157768d req-06c6262b-9660-4d51-84fd-9a7012e366a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-vif-unplugged-fda256aa-ac14-4ec9-a507-3553417887b8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.93393098344683e-06 of space, bias 1.0, pg target 0.001480179295034049 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0015596600569978219 of space, bias 1.0, pg target 0.4678980170993466 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.00034700837569706547 of space, bias 1.0, pg target 0.10410251270911965 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006668594826352218 of space, bias 1.0, pg target 0.20005784479056654 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.31970901325313e-06 of space, bias 4.0, pg target 0.001583650815903756 quantized to 16 (current 16)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:16:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:16:52 compute-0 nova_compute[243704]: 2025-12-13 04:16:52.611 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:53 compute-0 nova_compute[243704]: 2025-12-13 04:16:53.134 243708 DEBUG nova.network.neutron [-] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:16:53 compute-0 nova_compute[243704]: 2025-12-13 04:16:53.151 243708 INFO nova.compute.manager [-] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Took 0.87 seconds to deallocate network for instance.
Dec 13 04:16:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Dec 13 04:16:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Dec 13 04:16:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Dec 13 04:16:53 compute-0 nova_compute[243704]: 2025-12-13 04:16:53.311 243708 INFO nova.compute.manager [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Took 0.16 seconds to detach 1 volumes for instance.
Dec 13 04:16:53 compute-0 nova_compute[243704]: 2025-12-13 04:16:53.313 243708 DEBUG nova.compute.manager [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Deleting volume: cfe26227-c363-4b90-a064-865a294ec0f3 _cleanup_volumes /usr/lib/python3.9/site-packages/nova/compute/manager.py:3217
Dec 13 04:16:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 8.1 MiB/s wr, 206 op/s
Dec 13 04:16:53 compute-0 nova_compute[243704]: 2025-12-13 04:16:53.529 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:53 compute-0 nova_compute[243704]: 2025-12-13 04:16:53.530 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:53 compute-0 nova_compute[243704]: 2025-12-13 04:16:53.583 243708 DEBUG oslo_concurrency.processutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:16:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093211100' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1093211100' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:16:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3809891288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.173 243708 DEBUG oslo_concurrency.processutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.590s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.182 243708 DEBUG nova.compute.provider_tree [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.198 243708 DEBUG nova.scheduler.client.report [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.218 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.253 243708 INFO nova.scheduler.client.report [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Deleted allocations for instance 49ec6453-af58-4bf0-89f5-4faf5d3a92c5
Dec 13 04:16:54 compute-0 ceph-mon[75071]: osdmap e272: 3 total, 3 up, 3 in
Dec 13 04:16:54 compute-0 ceph-mon[75071]: pgmap v1258: 305 pgs: 305 active+clean; 277 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 8.1 MiB/s wr, 206 op/s
Dec 13 04:16:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1093211100' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1093211100' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3809891288' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.323 243708 DEBUG oslo_concurrency.lockutils [None req-b0989b5b-f01f-40e3-b29d-5e41f1a40380 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.438 243708 DEBUG nova.compute.manager [req-17c4a47b-3447-4614-be0e-dd08385f15e4 req-067cca6f-9ff9-4fe0-a0b9-320685ed885d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.439 243708 DEBUG oslo_concurrency.lockutils [req-17c4a47b-3447-4614-be0e-dd08385f15e4 req-067cca6f-9ff9-4fe0-a0b9-320685ed885d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.439 243708 DEBUG oslo_concurrency.lockutils [req-17c4a47b-3447-4614-be0e-dd08385f15e4 req-067cca6f-9ff9-4fe0-a0b9-320685ed885d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.439 243708 DEBUG oslo_concurrency.lockutils [req-17c4a47b-3447-4614-be0e-dd08385f15e4 req-067cca6f-9ff9-4fe0-a0b9-320685ed885d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "49ec6453-af58-4bf0-89f5-4faf5d3a92c5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.439 243708 DEBUG nova.compute.manager [req-17c4a47b-3447-4614-be0e-dd08385f15e4 req-067cca6f-9ff9-4fe0-a0b9-320685ed885d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] No waiting events found dispatching network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.440 243708 WARNING nova.compute.manager [req-17c4a47b-3447-4614-be0e-dd08385f15e4 req-067cca6f-9ff9-4fe0-a0b9-320685ed885d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received unexpected event network-vif-plugged-fda256aa-ac14-4ec9-a507-3553417887b8 for instance with vm_state deleted and task_state None.
Dec 13 04:16:54 compute-0 nova_compute[243704]: 2025-12-13 04:16:54.440 243708 DEBUG nova.compute.manager [req-17c4a47b-3447-4614-be0e-dd08385f15e4 req-067cca6f-9ff9-4fe0-a0b9-320685ed885d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Received event network-vif-deleted-fda256aa-ac14-4ec9-a507-3553417887b8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:16:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Dec 13 04:16:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Dec 13 04:16:55 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Dec 13 04:16:55 compute-0 nova_compute[243704]: 2025-12-13 04:16:55.401 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:55.400 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:16:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:55.401 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:16:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1260: 305 pgs: 305 active+clean; 252 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Dec 13 04:16:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3675679833' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3675679833' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e273 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:16:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Dec 13 04:16:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Dec 13 04:16:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Dec 13 04:16:56 compute-0 ceph-mon[75071]: osdmap e273: 3 total, 3 up, 3 in
Dec 13 04:16:56 compute-0 ceph-mon[75071]: pgmap v1260: 305 pgs: 305 active+clean; 252 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.6 MiB/s wr, 206 op/s
Dec 13 04:16:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3675679833' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3675679833' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:57 compute-0 nova_compute[243704]: 2025-12-13 04:16:57.004 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Dec 13 04:16:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Dec 13 04:16:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Dec 13 04:16:57 compute-0 ceph-mon[75071]: osdmap e274: 3 total, 3 up, 3 in
Dec 13 04:16:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:16:57.403 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:16:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 252 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.0 MiB/s wr, 292 op/s
Dec 13 04:16:57 compute-0 nova_compute[243704]: 2025-12-13 04:16:57.614 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:16:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Dec 13 04:16:58 compute-0 ceph-mon[75071]: osdmap e275: 3 total, 3 up, 3 in
Dec 13 04:16:58 compute-0 ceph-mon[75071]: pgmap v1263: 305 pgs: 305 active+clean; 252 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.0 MiB/s wr, 292 op/s
Dec 13 04:16:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Dec 13 04:16:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Dec 13 04:16:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284800586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284800586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:16:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1175515682' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:16:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1175515682' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:58 compute-0 podman[264077]: 2025-12-13 04:16:58.948143551 +0000 UTC m=+0.087122339 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 04:16:59 compute-0 ceph-mon[75071]: osdmap e276: 3 total, 3 up, 3 in
Dec 13 04:16:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2284800586' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2284800586' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1175515682' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:16:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1175515682' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:16:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 101 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 15 KiB/s wr, 350 op/s
Dec 13 04:17:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Dec 13 04:17:00 compute-0 ceph-mon[75071]: pgmap v1265: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 101 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 251 KiB/s rd, 15 KiB/s wr, 350 op/s
Dec 13 04:17:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Dec 13 04:17:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Dec 13 04:17:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Dec 13 04:17:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1267: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 101 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 12 KiB/s wr, 282 op/s
Dec 13 04:17:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Dec 13 04:17:01 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Dec 13 04:17:01 compute-0 ceph-mon[75071]: osdmap e277: 3 total, 3 up, 3 in
Dec 13 04:17:02 compute-0 nova_compute[243704]: 2025-12-13 04:17:02.008 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:02 compute-0 nova_compute[243704]: 2025-12-13 04:17:02.079 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599407.0780168, 83d8a96f-501e-4c0b-aed8-2099abf55b94 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:02 compute-0 nova_compute[243704]: 2025-12-13 04:17:02.080 243708 INFO nova.compute.manager [-] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] VM Stopped (Lifecycle Event)
Dec 13 04:17:02 compute-0 nova_compute[243704]: 2025-12-13 04:17:02.099 243708 DEBUG nova.compute.manager [None req-116d53d0-87b1-45b0-8af2-d35a9a9396ed - - - - - -] [instance: 83d8a96f-501e-4c0b-aed8-2099abf55b94] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3836632928' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:02 compute-0 nova_compute[243704]: 2025-12-13 04:17:02.616 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:03 compute-0 ceph-mon[75071]: pgmap v1267: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 101 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 202 KiB/s rd, 12 KiB/s wr, 282 op/s
Dec 13 04:17:03 compute-0 ceph-mon[75071]: osdmap e278: 3 total, 3 up, 3 in
Dec 13 04:17:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3836632928' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 101 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 174 KiB/s rd, 11 KiB/s wr, 242 op/s
Dec 13 04:17:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Dec 13 04:17:04 compute-0 ceph-mon[75071]: pgmap v1269: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 101 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 174 KiB/s rd, 11 KiB/s wr, 242 op/s
Dec 13 04:17:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Dec 13 04:17:04 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Dec 13 04:17:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 KiB/s wr, 74 op/s
Dec 13 04:17:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Dec 13 04:17:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Dec 13 04:17:05 compute-0 ceph-mon[75071]: osdmap e279: 3 total, 3 up, 3 in
Dec 13 04:17:05 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Dec 13 04:17:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/256294429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/256294429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e280 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Dec 13 04:17:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Dec 13 04:17:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Dec 13 04:17:06 compute-0 ceph-mon[75071]: pgmap v1271: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 2.5 KiB/s wr, 74 op/s
Dec 13 04:17:06 compute-0 ceph-mon[75071]: osdmap e280: 3 total, 3 up, 3 in
Dec 13 04:17:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/256294429' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/256294429' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:06 compute-0 ceph-mon[75071]: osdmap e281: 3 total, 3 up, 3 in
Dec 13 04:17:06 compute-0 nova_compute[243704]: 2025-12-13 04:17:06.981 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599411.979802, 49ec6453-af58-4bf0-89f5-4faf5d3a92c5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:06 compute-0 nova_compute[243704]: 2025-12-13 04:17:06.982 243708 INFO nova.compute.manager [-] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] VM Stopped (Lifecycle Event)
Dec 13 04:17:07 compute-0 nova_compute[243704]: 2025-12-13 04:17:07.000 243708 DEBUG nova.compute.manager [None req-0bc14349-1f56-47d7-b59d-a7f51d557685 - - - - - -] [instance: 49ec6453-af58-4bf0-89f5-4faf5d3a92c5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:07 compute-0 nova_compute[243704]: 2025-12-13 04:17:07.011 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.7 KiB/s wr, 78 op/s
Dec 13 04:17:07 compute-0 nova_compute[243704]: 2025-12-13 04:17:07.619 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:08 compute-0 ceph-mon[75071]: pgmap v1274: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.7 KiB/s wr, 78 op/s
Dec 13 04:17:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2354102232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:08 compute-0 podman[264104]: 2025-12-13 04:17:08.90439059 +0000 UTC m=+0.051474114 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:17:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.5 KiB/s wr, 110 op/s
Dec 13 04:17:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Dec 13 04:17:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2354102232' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Dec 13 04:17:09 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Dec 13 04:17:10 compute-0 ceph-mon[75071]: pgmap v1275: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.5 KiB/s wr, 110 op/s
Dec 13 04:17:10 compute-0 ceph-mon[75071]: osdmap e282: 3 total, 3 up, 3 in
Dec 13 04:17:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.0 KiB/s wr, 36 op/s
Dec 13 04:17:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Dec 13 04:17:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Dec 13 04:17:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Dec 13 04:17:11 compute-0 nova_compute[243704]: 2025-12-13 04:17:11.901 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613087362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613087362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:12 compute-0 nova_compute[243704]: 2025-12-13 04:17:12.058 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:12 compute-0 nova_compute[243704]: 2025-12-13 04:17:12.622 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Dec 13 04:17:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Dec 13 04:17:12 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Dec 13 04:17:12 compute-0 ceph-mon[75071]: pgmap v1277: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.0 KiB/s wr, 36 op/s
Dec 13 04:17:12 compute-0 ceph-mon[75071]: osdmap e283: 3 total, 3 up, 3 in
Dec 13 04:17:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2613087362' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2613087362' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:12 compute-0 nova_compute[243704]: 2025-12-13 04:17:12.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:12 compute-0 nova_compute[243704]: 2025-12-13 04:17:12.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 04:17:12 compute-0 nova_compute[243704]: 2025-12-13 04:17:12.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 04:17:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.0 KiB/s wr, 36 op/s
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.666 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.666 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.680 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:17:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Dec 13 04:17:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Dec 13 04:17:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Dec 13 04:17:13 compute-0 ceph-mon[75071]: osdmap e284: 3 total, 3 up, 3 in
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.860 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.861 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.872 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.873 243708 INFO nova.compute.claims [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.891 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.962 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.963 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:13 compute-0 nova_compute[243704]: 2025-12-13 04:17:13.983 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1810185421' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1810185421' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.091 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1951678200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.654 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.563s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.662 243708 DEBUG nova.compute.provider_tree [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.684 243708 DEBUG nova.scheduler.client.report [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.706 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.707 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:17:14 compute-0 ceph-mon[75071]: pgmap v1280: 305 pgs: 305 active+clean; 88 MiB data, 292 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 3.0 KiB/s wr, 36 op/s
Dec 13 04:17:14 compute-0 ceph-mon[75071]: osdmap e285: 3 total, 3 up, 3 in
Dec 13 04:17:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1810185421' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1810185421' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1951678200' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.712 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.712 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.712 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.713 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.786 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.787 243708 DEBUG nova.network.neutron [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.807 243708 INFO nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.837 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:17:14 compute-0 nova_compute[243704]: 2025-12-13 04:17:14.889 243708 INFO nova.virt.block_device [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Booting with volume 2537ca78-db6c-4c72-bb72-81e5382d8879 at /dev/vda
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.005 243708 DEBUG nova.policy [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b8c4a2342e4420d8140b403edbcba5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27927978f9684df1a72cecb32505e93b', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:17:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1262058017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.080 243708 DEBUG os_brick.utils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.086 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.098 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.099 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[0d8665ab-ebd1-48a8-8e93-78f03bea3353]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.101 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.108 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.108 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[15942b8b-c8df-4cba-9f27-056fbe75ce44]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.111 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.121 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.122 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[81a99b0d-daa6-4d95-a1f0-381a48c6e6c6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.124 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[1ae4b75e-8d93-47b6-bfab-dad2cad5bd04]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.125 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.162 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "nvme version" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.165 243708 DEBUG os_brick.initiator.connectors.lightos [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.166 243708 DEBUG os_brick.initiator.connectors.lightos [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.166 243708 DEBUG os_brick.initiator.connectors.lightos [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.167 243708 DEBUG os_brick.utils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.167 243708 DEBUG nova.virt.block_device [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Updating existing volume attachment record: f0a4d773-20f5-4e6b-9a6b-d0cc567dc0ec _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:17:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2563709915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.274 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.454 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.456 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4460MB free_disk=59.98815199173987GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.456 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.457 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 3.7 MiB/s wr, 214 op/s
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.530 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 60ab3a2a-719a-47b1-b774-e518b4039ca5 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.530 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.531 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:17:15 compute-0 nova_compute[243704]: 2025-12-13 04:17:15.579 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Dec 13 04:17:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1262058017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2563709915' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Dec 13 04:17:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Dec 13 04:17:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1205843462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:16 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2339417857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.237 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.243 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.259 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.282 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.282 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.826s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.285 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.286 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.328 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.330 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.331 243708 INFO nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Creating image(s)
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.331 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.331 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Ensure instance console log exists: /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.332 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.332 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.332 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e286 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Dec 13 04:17:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Dec 13 04:17:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Dec 13 04:17:16 compute-0 nova_compute[243704]: 2025-12-13 04:17:16.553 243708 DEBUG nova.network.neutron [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Successfully created port: 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:17:16 compute-0 ceph-mon[75071]: pgmap v1282: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 3.7 MiB/s wr, 214 op/s
Dec 13 04:17:16 compute-0 ceph-mon[75071]: osdmap e286: 3 total, 3 up, 3 in
Dec 13 04:17:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1205843462' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2339417857' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:16 compute-0 ceph-mon[75071]: osdmap e287: 3 total, 3 up, 3 in
Dec 13 04:17:16 compute-0 podman[264197]: 2025-12-13 04:17:16.958557184 +0000 UTC m=+0.090748076 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.061 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1285: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 184 KiB/s rd, 4.4 MiB/s wr, 257 op/s
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.625 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.634 243708 DEBUG nova.network.neutron [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Successfully updated port: 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.652 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.653 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquired lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.653 243708 DEBUG nova.network.neutron [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.731 243708 DEBUG nova.compute.manager [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-changed-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.732 243708 DEBUG nova.compute.manager [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Refreshing instance network info cache due to event network-changed-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.732 243708 DEBUG oslo_concurrency.lockutils [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:17 compute-0 nova_compute[243704]: 2025-12-13 04:17:17.783 243708 DEBUG nova.network.neutron [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.219 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.219 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.219 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.220 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1669529614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.804 243708 DEBUG nova.network.neutron [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Updating instance_info_cache with network_info: [{"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.828 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Releasing lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.828 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Instance network_info: |[{"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:17:18 compute-0 ceph-mon[75071]: pgmap v1285: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 184 KiB/s rd, 4.4 MiB/s wr, 257 op/s
Dec 13 04:17:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1669529614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.829 243708 DEBUG oslo_concurrency.lockutils [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.830 243708 DEBUG nova.network.neutron [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Refreshing network info cache for port 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.835 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Start _get_guest_xml network_info=[{"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2537ca78-db6c-4c72-bb72-81e5382d8879', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2537ca78-db6c-4c72-bb72-81e5382d8879', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '60ab3a2a-719a-47b1-b774-e518b4039ca5', 'attached_at': '', 'detached_at': '', 'volume_id': '2537ca78-db6c-4c72-bb72-81e5382d8879', 'serial': '2537ca78-db6c-4c72-bb72-81e5382d8879'}, 'disk_bus': 'virtio', 'attachment_id': 'f0a4d773-20f5-4e6b-9a6b-d0cc567dc0ec', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:17:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Dec 13 04:17:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.840 243708 WARNING nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.846 243708 DEBUG nova.virt.libvirt.host [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.847 243708 DEBUG nova.virt.libvirt.host [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.854 243708 DEBUG nova.virt.libvirt.host [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.855 243708 DEBUG nova.virt.libvirt.host [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.856 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.856 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.857 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.857 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.858 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.858 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.858 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.858 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.859 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.859 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.860 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.860 243708 DEBUG nova.virt.hardware [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.890 243708 DEBUG nova.storage.rbd_utils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 60ab3a2a-719a-47b1-b774-e518b4039ca5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.896 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.919 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.920 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:18 compute-0 nova_compute[243704]: 2025-12-13 04:17:18.920 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:17:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/864188388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.451 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.489 243708 DEBUG nova.virt.libvirt.vif [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1896760900',display_name='tempest-TestVolumeBootPattern-server-1896760900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1896760900',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-5sr1qef2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:17:14Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=60ab3a2a-719a-47b1-b774-e518b4039ca5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.490 243708 DEBUG nova.network.os_vif_util [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.491 243708 DEBUG nova.network.os_vif_util [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:4a:39,bridge_name='br-int',has_traffic_filtering=True,id=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ef263f2-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.492 243708 DEBUG nova.objects.instance [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'pci_devices' on Instance uuid 60ab3a2a-719a-47b1-b774-e518b4039ca5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.511 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <uuid>60ab3a2a-719a-47b1-b774-e518b4039ca5</uuid>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <name>instance-0000000f</name>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBootPattern-server-1896760900</nova:name>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:17:18</nova:creationTime>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:user uuid="9b8c4a2342e4420d8140b403edbcba5a">tempest-TestVolumeBootPattern-236547311-project-member</nova:user>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:project uuid="27927978f9684df1a72cecb32505e93b">tempest-TestVolumeBootPattern-236547311</nova:project>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <nova:port uuid="3ef263f2-9b9c-40f2-bc79-8f687b0e16f8">
Dec 13 04:17:19 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <system>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <entry name="serial">60ab3a2a-719a-47b1-b774-e518b4039ca5</entry>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <entry name="uuid">60ab3a2a-719a-47b1-b774-e518b4039ca5</entry>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </system>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <os>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   </os>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <features>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   </features>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/60ab3a2a-719a-47b1-b774-e518b4039ca5_disk.config">
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       </source>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-2537ca78-db6c-4c72-bb72-81e5382d8879">
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       </source>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:17:19 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <serial>2537ca78-db6c-4c72-bb72-81e5382d8879</serial>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:b1:4a:39"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <target dev="tap3ef263f2-9b"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/console.log" append="off"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <video>
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </video>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:17:19 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:17:19 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:17:19 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:17:19 compute-0 nova_compute[243704]: </domain>
Dec 13 04:17:19 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.513 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Preparing to wait for external event network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.513 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.513 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.514 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.515 243708 DEBUG nova.virt.libvirt.vif [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1896760900',display_name='tempest-TestVolumeBootPattern-server-1896760900',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1896760900',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-5sr1qef2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:17:14Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=60ab3a2a-719a-47b1-b774-e518b4039ca5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.515 243708 DEBUG nova.network.os_vif_util [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.516 243708 DEBUG nova.network.os_vif_util [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b1:4a:39,bridge_name='br-int',has_traffic_filtering=True,id=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ef263f2-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.517 243708 DEBUG os_vif [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:4a:39,bridge_name='br-int',has_traffic_filtering=True,id=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ef263f2-9b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.518 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.519 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.519 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.523 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.524 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ef263f2-9b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.524 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ef263f2-9b, col_values=(('external_ids', {'iface-id': '3ef263f2-9b9c-40f2-bc79-8f687b0e16f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b1:4a:39', 'vm-uuid': '60ab3a2a-719a-47b1-b774-e518b4039ca5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.526 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:19 compute-0 NetworkManager[48899]: <info>  [1765599439.5276] manager: (tap3ef263f2-9b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/88)
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.528 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:17:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1287: 305 pgs: 305 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 2.6 MiB/s wr, 189 op/s
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.533 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.533 243708 INFO os_vif [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b1:4a:39,bridge_name='br-int',has_traffic_filtering=True,id=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ef263f2-9b')
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.573 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.573 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.573 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No VIF found with MAC fa:16:3e:b1:4a:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.574 243708 INFO nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Using config drive
Dec 13 04:17:19 compute-0 nova_compute[243704]: 2025-12-13 04:17:19.594 243708 DEBUG nova.storage.rbd_utils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 60ab3a2a-719a-47b1-b774-e518b4039ca5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Dec 13 04:17:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Dec 13 04:17:19 compute-0 ceph-mon[75071]: osdmap e288: 3 total, 3 up, 3 in
Dec 13 04:17:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/864188388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:19 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.233 243708 INFO nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Creating config drive at /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/disk.config
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.243 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuva7_nqi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.301 243708 DEBUG nova.network.neutron [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Updated VIF entry in instance network info cache for port 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.303 243708 DEBUG nova.network.neutron [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Updating instance_info_cache with network_info: [{"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.319 243708 DEBUG oslo_concurrency.lockutils [req-93680b68-6b97-474f-a938-0a04bf6d6a53 req-6b28ec2c-e16e-409a-b023-191ebf17cd50 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.380 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.381 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.383 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuva7_nqi" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.408 243708 DEBUG nova.storage.rbd_utils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 60ab3a2a-719a-47b1-b774-e518b4039ca5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.412 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/disk.config 60ab3a2a-719a-47b1-b774-e518b4039ca5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.443 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.530 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.531 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.539 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.540 243708 INFO nova.compute.claims [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.570 243708 DEBUG oslo_concurrency.processutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/disk.config 60ab3a2a-719a-47b1-b774-e518b4039ca5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.571 243708 INFO nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Deleting local config drive /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5/disk.config because it was imported into RBD.
Dec 13 04:17:20 compute-0 kernel: tap3ef263f2-9b: entered promiscuous mode
Dec 13 04:17:20 compute-0 NetworkManager[48899]: <info>  [1765599440.6171] manager: (tap3ef263f2-9b): new Tun device (/org/freedesktop/NetworkManager/Devices/89)
Dec 13 04:17:20 compute-0 ovn_controller[145204]: 2025-12-13T04:17:20Z|00153|binding|INFO|Claiming lport 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 for this chassis.
Dec 13 04:17:20 compute-0 ovn_controller[145204]: 2025-12-13T04:17:20Z|00154|binding|INFO|3ef263f2-9b9c-40f2-bc79-8f687b0e16f8: Claiming fa:16:3e:b1:4a:39 10.100.0.7
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.617 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.627 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:4a:39 10.100.0.7'], port_security=['fa:16:3e:b1:4a:39 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '60ab3a2a-719a-47b1-b774-e518b4039ca5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'adaa204c-5288-4148-9761-e3b0718cf559', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.628 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 bound to our chassis
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.629 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:17:20 compute-0 ovn_controller[145204]: 2025-12-13T04:17:20Z|00155|binding|INFO|Setting lport 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 ovn-installed in OVS
Dec 13 04:17:20 compute-0 ovn_controller[145204]: 2025-12-13T04:17:20Z|00156|binding|INFO|Setting lport 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 up in Southbound
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.641 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.642 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7754441b-7207-4165-8396-ce93462f07c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.643 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc553cd2-51 in ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.645 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc553cd2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.645 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6e11da84-2911-47c5-835a-dd3448bb4092]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.647 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fb1e96c3-b201-44a4-9253-e08b06d2087b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 systemd-udevd[264332]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.657 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:20 compute-0 systemd-machined[206767]: New machine qemu-15-instance-0000000f.
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.664 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[29ad0737-85be-415f-ace9-41968bc4f366]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 NetworkManager[48899]: <info>  [1765599440.6745] device (tap3ef263f2-9b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:17:20 compute-0 NetworkManager[48899]: <info>  [1765599440.6752] device (tap3ef263f2-9b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:17:20 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000f.
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.695 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[78022eeb-073f-460a-9293-8b53b22b3f87]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.725 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[04ae9563-a147-4292-a56a-61773248ef95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.734 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[34cea0c2-9274-49cf-a575-83b98cd85b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 NetworkManager[48899]: <info>  [1765599440.7365] manager: (tapfc553cd2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/90)
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.813 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[aa46ee32-7325-4093-8dd9-cbcf7eaae679]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.816 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2a118f-fe30-45d7-8e8d-559cf0b09564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 NetworkManager[48899]: <info>  [1765599440.8391] device (tapfc553cd2-50): carrier: link connected
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.840 243708 DEBUG nova.compute.manager [req-c24b954f-2c32-4ce3-8f2a-884660f4e362 req-7b3e8ac2-5911-4e33-b907-e7b40547646e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.841 243708 DEBUG oslo_concurrency.lockutils [req-c24b954f-2c32-4ce3-8f2a-884660f4e362 req-7b3e8ac2-5911-4e33-b907-e7b40547646e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.841 243708 DEBUG oslo_concurrency.lockutils [req-c24b954f-2c32-4ce3-8f2a-884660f4e362 req-7b3e8ac2-5911-4e33-b907-e7b40547646e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.842 243708 DEBUG oslo_concurrency.lockutils [req-c24b954f-2c32-4ce3-8f2a-884660f4e362 req-7b3e8ac2-5911-4e33-b907-e7b40547646e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:20 compute-0 nova_compute[243704]: 2025-12-13 04:17:20.842 243708 DEBUG nova.compute.manager [req-c24b954f-2c32-4ce3-8f2a-884660f4e362 req-7b3e8ac2-5911-4e33-b907-e7b40547646e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Processing event network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.846 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[cea91da6-cb0c-4d90-9142-bbe99ee80103]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ceph-mon[75071]: pgmap v1287: 305 pgs: 305 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 2.6 MiB/s wr, 189 op/s
Dec 13 04:17:20 compute-0 ceph-mon[75071]: osdmap e289: 3 total, 3 up, 3 in
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.867 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a07cfeb5-8b19-4bf6-a3e5-eb6911ccace7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417951, 'reachable_time': 26105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264383, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.885 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4287bce1-f77b-4594-8112-6c8a0e0e552a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:ae9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 417951, 'tstamp': 417951}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264384, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.903 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b29b7ef9-1081-42ff-aeca-60a9e3314cc4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417951, 'reachable_time': 26105, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264385, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.935 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8b09f131-bc56-47f8-a6af-e454720f5740]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.995 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[73e912e7-1659-48c9-8427-95af5f760c29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.997 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.997 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:17:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:20.998 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.037 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:21 compute-0 NetworkManager[48899]: <info>  [1765599441.0379] manager: (tapfc553cd2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/91)
Dec 13 04:17:21 compute-0 kernel: tapfc553cd2-50: entered promiscuous mode
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:21.044 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.045 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:21 compute-0 ovn_controller[145204]: 2025-12-13T04:17:21Z|00157|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:21.047 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:21.050 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[663feafe-68b3-4a5f-8978-8c250451e8bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:21.051 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:17:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:21.053 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'env', 'PROCESS_TAG=haproxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.070 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/576767440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.261 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.269 243708 DEBUG nova.compute.provider_tree [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.291 243708 DEBUG nova.scheduler.client.report [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.309 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.777s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.310 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.368 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.373 243708 DEBUG nova.network.neutron [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.399 243708 INFO nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.419 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:17:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Dec 13 04:17:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Dec 13 04:17:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Dec 13 04:17:21 compute-0 podman[264419]: 2025-12-13 04:17:21.456186961 +0000 UTC m=+0.059163511 container create 56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:17:21 compute-0 systemd[1]: Started libpod-conmon-56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104.scope.
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.513 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.516 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.516 243708 INFO nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Creating image(s)
Dec 13 04:17:21 compute-0 podman[264419]: 2025-12-13 04:17:21.427004513 +0000 UTC m=+0.029981083 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:17:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1290: 305 pgs: 305 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.5 KiB/s wr, 58 op/s
Dec 13 04:17:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82918e87b30d7357b15f208afee80b67583c2625fc2c6575afa9a063f2ef462e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.542 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:21 compute-0 podman[264419]: 2025-12-13 04:17:21.551431018 +0000 UTC m=+0.154407588 container init 56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:17:21 compute-0 podman[264419]: 2025-12-13 04:17:21.55783346 +0000 UTC m=+0.160810010 container start 56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.568 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:21 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[264434]: [NOTICE]   (264504) : New worker (264512) forked
Dec 13 04:17:21 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[264434]: [NOTICE]   (264504) : Loading success.
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.608 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.612 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.641 243708 DEBUG nova.policy [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '11e9a1a42b4b4d679693155d71445247', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f5e5c975dd8b4a088c217b330c95ba7b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.682 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.682 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.683 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.683 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.702 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.706 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 a0bacfab-7abb-494c-b56e-2cc236181408_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.724 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599441.6856282, 60ab3a2a-719a-47b1-b774-e518b4039ca5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.725 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] VM Started (Lifecycle Event)
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.728 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.739 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.745 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.747 243708 INFO nova.virt.libvirt.driver [-] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Instance spawned successfully.
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.748 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.753 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.767 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.768 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599441.6858616, 60ab3a2a-719a-47b1-b774-e518b4039ca5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.768 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] VM Paused (Lifecycle Event)
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.773 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.773 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.774 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.774 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.774 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.775 243708 DEBUG nova.virt.libvirt.driver [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.794 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.798 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599441.731924, 60ab3a2a-719a-47b1-b774-e518b4039ca5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.798 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] VM Resumed (Lifecycle Event)
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.830 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.838 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.843 243708 INFO nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Took 5.51 seconds to spawn the instance on the hypervisor.
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.844 243708 DEBUG nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.856 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:17:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/576767440' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:21 compute-0 ceph-mon[75071]: osdmap e290: 3 total, 3 up, 3 in
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.910 243708 INFO nova.compute.manager [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Took 8.19 seconds to build instance.
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.926 243708 DEBUG oslo_concurrency.lockutils [None req-17632592-ad20-463c-af50-132e8db723f3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:21 compute-0 nova_compute[243704]: 2025-12-13 04:17:21.950 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 a0bacfab-7abb-494c-b56e-2cc236181408_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.245s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.025 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] resizing rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.116 243708 DEBUG nova.objects.instance [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'migration_context' on Instance uuid a0bacfab-7abb-494c-b56e-2cc236181408 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.127 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.127 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Ensure instance console log exists: /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.128 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.128 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.128 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.419 243708 DEBUG nova.network.neutron [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Successfully created port: e455bcda-3fde-4820-991d-0f44c010bb03 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.627 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:22 compute-0 ceph-mon[75071]: pgmap v1290: 305 pgs: 305 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 41 KiB/s rd, 4.5 KiB/s wr, 58 op/s
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.926 243708 DEBUG nova.compute.manager [req-70a812aa-b680-49e4-835a-3dd0c74b22dc req-ec2ea939-afd9-4fc7-beb8-9ce027075337 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.927 243708 DEBUG oslo_concurrency.lockutils [req-70a812aa-b680-49e4-835a-3dd0c74b22dc req-ec2ea939-afd9-4fc7-beb8-9ce027075337 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.927 243708 DEBUG oslo_concurrency.lockutils [req-70a812aa-b680-49e4-835a-3dd0c74b22dc req-ec2ea939-afd9-4fc7-beb8-9ce027075337 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.927 243708 DEBUG oslo_concurrency.lockutils [req-70a812aa-b680-49e4-835a-3dd0c74b22dc req-ec2ea939-afd9-4fc7-beb8-9ce027075337 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.927 243708 DEBUG nova.compute.manager [req-70a812aa-b680-49e4-835a-3dd0c74b22dc req-ec2ea939-afd9-4fc7-beb8-9ce027075337 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] No waiting events found dispatching network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:17:22 compute-0 nova_compute[243704]: 2025-12-13 04:17:22.927 243708 WARNING nova.compute.manager [req-70a812aa-b680-49e4-835a-3dd0c74b22dc req-ec2ea939-afd9-4fc7-beb8-9ce027075337 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received unexpected event network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 for instance with vm_state active and task_state None.
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.097 243708 DEBUG nova.network.neutron [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Successfully updated port: e455bcda-3fde-4820-991d-0f44c010bb03 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.120 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.120 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquired lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.120 243708 DEBUG nova.network.neutron [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.167 243708 DEBUG nova.compute.manager [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-changed-e455bcda-3fde-4820-991d-0f44c010bb03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.167 243708 DEBUG nova.compute.manager [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Refreshing instance network info cache due to event network-changed-e455bcda-3fde-4820-991d-0f44c010bb03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.168 243708 DEBUG oslo_concurrency.lockutils [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:23 compute-0 nova_compute[243704]: 2025-12-13 04:17:23.236 243708 DEBUG nova.network.neutron [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:17:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/594078962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.8 KiB/s wr, 49 op/s
Dec 13 04:17:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Dec 13 04:17:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/594078962' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Dec 13 04:17:23 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.346 243708 DEBUG nova.network.neutron [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updating instance_info_cache with network_info: [{"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.369 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Releasing lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.369 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Instance network_info: |[{"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.369 243708 DEBUG oslo_concurrency.lockutils [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.370 243708 DEBUG nova.network.neutron [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Refreshing network info cache for port e455bcda-3fde-4820-991d-0f44c010bb03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.372 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Start _get_guest_xml network_info=[{"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.377 243708 WARNING nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.384 243708 DEBUG nova.virt.libvirt.host [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.384 243708 DEBUG nova.virt.libvirt.host [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.387 243708 DEBUG nova.virt.libvirt.host [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.388 243708 DEBUG nova.virt.libvirt.host [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.388 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.388 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.389 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.389 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.389 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.389 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.390 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.390 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.390 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.390 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.390 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.391 243708 DEBUG nova.virt.hardware [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.394 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.527 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:24 compute-0 nova_compute[243704]: 2025-12-13 04:17:24.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:17:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Dec 13 04:17:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172074592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:25 compute-0 nova_compute[243704]: 2025-12-13 04:17:25.415 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:25 compute-0 nova_compute[243704]: 2025-12-13 04:17:25.450 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:25 compute-0 nova_compute[243704]: 2025-12-13 04:17:25.457 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 180 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Dec 13 04:17:25 compute-0 ceph-mon[75071]: pgmap v1291: 305 pgs: 305 active+clean; 134 MiB data, 314 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.8 KiB/s wr, 49 op/s
Dec 13 04:17:25 compute-0 ceph-mon[75071]: osdmap e291: 3 total, 3 up, 3 in
Dec 13 04:17:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Dec 13 04:17:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Dec 13 04:17:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1371222669' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.051 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.053 243708 DEBUG nova.virt.libvirt.vif [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1933348177',display_name='tempest-VolumesBackupsTest-instance-1933348177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1933348177',id=16,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGu7BT+OlkAXS/5E/R+6mxin4StaJf2AYSa4spXG7EaUPd9zPGdGfIZa8P9sKks4ofV1Bj7ayP2qcemd21rm9iUb4Gw5NQPAIiD+VTs+KWu3lqFLlObvGeCTydEwHUAP1g==',key_name='tempest-keypair-344651295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5e5c975dd8b4a088c217b330c95ba7b',ramdisk_id='',reservation_id='r-di5t03jv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-951676606',owner_user_name='tempest-VolumesBackupsTest-951676606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:17:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11e9a1a42b4b4d679693155d71445247',uuid=a0bacfab-7abb-494c-b56e-2cc236181408,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.053 243708 DEBUG nova.network.os_vif_util [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converting VIF {"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.054 243708 DEBUG nova.network.os_vif_util [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:dd:ba,bridge_name='br-int',has_traffic_filtering=True,id=e455bcda-3fde-4820-991d-0f44c010bb03,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape455bcda-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.056 243708 DEBUG nova.objects.instance [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'pci_devices' on Instance uuid a0bacfab-7abb-494c-b56e-2cc236181408 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.070 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <uuid>a0bacfab-7abb-494c-b56e-2cc236181408</uuid>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <name>instance-00000010</name>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesBackupsTest-instance-1933348177</nova:name>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:17:24</nova:creationTime>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:user uuid="11e9a1a42b4b4d679693155d71445247">tempest-VolumesBackupsTest-951676606-project-member</nova:user>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:project uuid="f5e5c975dd8b4a088c217b330c95ba7b">tempest-VolumesBackupsTest-951676606</nova:project>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <nova:port uuid="e455bcda-3fde-4820-991d-0f44c010bb03">
Dec 13 04:17:26 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <system>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <entry name="serial">a0bacfab-7abb-494c-b56e-2cc236181408</entry>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <entry name="uuid">a0bacfab-7abb-494c-b56e-2cc236181408</entry>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </system>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <os>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   </os>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <features>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   </features>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/a0bacfab-7abb-494c-b56e-2cc236181408_disk">
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       </source>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/a0bacfab-7abb-494c-b56e-2cc236181408_disk.config">
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       </source>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:17:26 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:00:dd:ba"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <target dev="tape455bcda-3f"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/console.log" append="off"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <video>
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </video>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:17:26 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:17:26 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:17:26 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:17:26 compute-0 nova_compute[243704]: </domain>
Dec 13 04:17:26 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.072 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Preparing to wait for external event network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.072 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.072 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.073 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.074 243708 DEBUG nova.virt.libvirt.vif [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1933348177',display_name='tempest-VolumesBackupsTest-instance-1933348177',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1933348177',id=16,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGu7BT+OlkAXS/5E/R+6mxin4StaJf2AYSa4spXG7EaUPd9zPGdGfIZa8P9sKks4ofV1Bj7ayP2qcemd21rm9iUb4Gw5NQPAIiD+VTs+KWu3lqFLlObvGeCTydEwHUAP1g==',key_name='tempest-keypair-344651295',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f5e5c975dd8b4a088c217b330c95ba7b',ramdisk_id='',reservation_id='r-di5t03jv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-951676606',owner_user_name='tempest-VolumesBackupsTest-951676606-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:17:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11e9a1a42b4b4d679693155d71445247',uuid=a0bacfab-7abb-494c-b56e-2cc236181408,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.074 243708 DEBUG nova.network.os_vif_util [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converting VIF {"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.075 243708 DEBUG nova.network.os_vif_util [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:00:dd:ba,bridge_name='br-int',has_traffic_filtering=True,id=e455bcda-3fde-4820-991d-0f44c010bb03,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape455bcda-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.075 243708 DEBUG os_vif [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:dd:ba,bridge_name='br-int',has_traffic_filtering=True,id=e455bcda-3fde-4820-991d-0f44c010bb03,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape455bcda-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.076 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.077 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.077 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.081 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.082 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape455bcda-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.082 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape455bcda-3f, col_values=(('external_ids', {'iface-id': 'e455bcda-3fde-4820-991d-0f44c010bb03', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:00:dd:ba', 'vm-uuid': 'a0bacfab-7abb-494c-b56e-2cc236181408'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.089 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:26 compute-0 NetworkManager[48899]: <info>  [1765599446.0903] manager: (tape455bcda-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.092 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.096 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.096 243708 INFO os_vif [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:00:dd:ba,bridge_name='br-int',has_traffic_filtering=True,id=e455bcda-3fde-4820-991d-0f44c010bb03,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape455bcda-3f')
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.165 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.166 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.166 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No VIF found with MAC fa:16:3e:00:dd:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.167 243708 INFO nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Using config drive
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.198 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Dec 13 04:17:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Dec 13 04:17:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.469 243708 DEBUG nova.network.neutron [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updated VIF entry in instance network info cache for port e455bcda-3fde-4820-991d-0f44c010bb03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.469 243708 DEBUG nova.network.neutron [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updating instance_info_cache with network_info: [{"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.487 243708 DEBUG oslo_concurrency.lockutils [req-e600e38a-1367-4087-a40b-0ce9b2737775 req-44a7359e-ffdf-4197-a3c2-712bb1d8eed8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.490 243708 INFO nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Creating config drive at /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/disk.config
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.496 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ewdbcv7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.632 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2ewdbcv7" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.655 243708 DEBUG nova.storage.rbd_utils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] rbd image a0bacfab-7abb-494c-b56e-2cc236181408_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.659 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/disk.config a0bacfab-7abb-494c-b56e-2cc236181408_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.795 243708 DEBUG oslo_concurrency.processutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/disk.config a0bacfab-7abb-494c-b56e-2cc236181408_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.796 243708 INFO nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Deleting local config drive /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408/disk.config because it was imported into RBD.
Dec 13 04:17:26 compute-0 kernel: tape455bcda-3f: entered promiscuous mode
Dec 13 04:17:26 compute-0 NetworkManager[48899]: <info>  [1765599446.8411] manager: (tape455bcda-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Dec 13 04:17:26 compute-0 ovn_controller[145204]: 2025-12-13T04:17:26Z|00158|binding|INFO|Claiming lport e455bcda-3fde-4820-991d-0f44c010bb03 for this chassis.
Dec 13 04:17:26 compute-0 ovn_controller[145204]: 2025-12-13T04:17:26Z|00159|binding|INFO|e455bcda-3fde-4820-991d-0f44c010bb03: Claiming fa:16:3e:00:dd:ba 10.100.0.14
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.848 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.852 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:dd:ba 10.100.0.14'], port_security=['fa:16:3e:00:dd:ba 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0bacfab-7abb-494c-b56e-2cc236181408', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5e5c975dd8b4a088c217b330c95ba7b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '703be203-b6f5-4566-a488-3bb21d810094', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8076cdc-415f-401f-a0fe-b3be303ae9cf, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=e455bcda-3fde-4820-991d-0f44c010bb03) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.853 154842 INFO neutron.agent.ovn.metadata.agent [-] Port e455bcda-3fde-4820-991d-0f44c010bb03 in datapath bfdc82ee-37dc-4f9b-b711-c6c9f87b443a bound to our chassis
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.855 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bfdc82ee-37dc-4f9b-b711-c6c9f87b443a
Dec 13 04:17:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3172074592' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:26 compute-0 ceph-mon[75071]: pgmap v1293: 305 pgs: 305 active+clean; 180 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Dec 13 04:17:26 compute-0 ceph-mon[75071]: osdmap e292: 3 total, 3 up, 3 in
Dec 13 04:17:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1371222669' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:26 compute-0 ceph-mon[75071]: osdmap e293: 3 total, 3 up, 3 in
Dec 13 04:17:26 compute-0 ovn_controller[145204]: 2025-12-13T04:17:26Z|00160|binding|INFO|Setting lport e455bcda-3fde-4820-991d-0f44c010bb03 ovn-installed in OVS
Dec 13 04:17:26 compute-0 ovn_controller[145204]: 2025-12-13T04:17:26Z|00161|binding|INFO|Setting lport e455bcda-3fde-4820-991d-0f44c010bb03 up in Southbound
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.870 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.871 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc87cb3-d1d9-475e-b6c5-b7ac1853f0cb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.872 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbfdc82ee-31 in ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:17:26 compute-0 systemd-udevd[264792]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.874 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbfdc82ee-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.875 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[417ec9d2-4de4-4e0e-b0ad-20fb53e68e7e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.875 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5e7c97-77b4-4998-b896-50279fe58e55]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:26 compute-0 systemd-machined[206767]: New machine qemu-16-instance-00000010.
Dec 13 04:17:26 compute-0 NetworkManager[48899]: <info>  [1765599446.8921] device (tape455bcda-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.889 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[59e3c552-5a80-4ef2-9d3f-22f83472be2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:26 compute-0 NetworkManager[48899]: <info>  [1765599446.8932] device (tape455bcda-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:17:26 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-00000010.
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.915 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5b5c61-84b3-442e-8003-15e15e3e3c69]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.921 243708 DEBUG nova.compute.manager [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-changed-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.921 243708 DEBUG nova.compute.manager [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Refreshing instance network info cache due to event network-changed-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.921 243708 DEBUG oslo_concurrency.lockutils [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.922 243708 DEBUG oslo_concurrency.lockutils [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:26 compute-0 nova_compute[243704]: 2025-12-13 04:17:26.922 243708 DEBUG nova.network.neutron [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Refreshing network info cache for port 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.945 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[a4fbe638-3d81-4c1d-b11a-f3af4542976f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:26 compute-0 NetworkManager[48899]: <info>  [1765599446.9667] manager: (tapbfdc82ee-30): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Dec 13 04:17:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:26.950 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[83c41523-1844-492d-a952-7b108528d18c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.000 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[0f71ef4f-328f-434d-b9da-8fc76927e429]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.004 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[d2894f02-8bf4-4881-97f4-74277c5004c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 NetworkManager[48899]: <info>  [1765599447.0303] device (tapbfdc82ee-30): carrier: link connected
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.038 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce08bd5-f991-40c4-82bb-c1af13cb2bd4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.059 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[32e2f60c-340d-4bc7-91dc-69ded37a28ee]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfdc82ee-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:93:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418570, 'reachable_time': 30230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 264825, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.078 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9035d9a8-1cf2-457a-99f2-8d4d791217e1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:936f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418570, 'tstamp': 418570}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 264826, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.093 243708 DEBUG nova.compute.manager [req-78d41139-b31c-4310-8e14-e1707822548c req-778c4d72-0c2e-405b-ae8f-13a74ef12686 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.093 243708 DEBUG oslo_concurrency.lockutils [req-78d41139-b31c-4310-8e14-e1707822548c req-778c4d72-0c2e-405b-ae8f-13a74ef12686 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.094 243708 DEBUG oslo_concurrency.lockutils [req-78d41139-b31c-4310-8e14-e1707822548c req-778c4d72-0c2e-405b-ae8f-13a74ef12686 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.094 243708 DEBUG oslo_concurrency.lockutils [req-78d41139-b31c-4310-8e14-e1707822548c req-778c4d72-0c2e-405b-ae8f-13a74ef12686 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.094 243708 DEBUG nova.compute.manager [req-78d41139-b31c-4310-8e14-e1707822548c req-778c4d72-0c2e-405b-ae8f-13a74ef12686 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Processing event network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.097 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[da20eb6c-e38c-4678-9d51-3564970ff3b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbfdc82ee-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:93:6f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 55], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418570, 'reachable_time': 30230, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 264827, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.130 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[cc24c407-bf47-498a-81fb-f643df0a06ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.196 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3870fc8a-ca99-4af9-bce3-56347c602f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.198 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfdc82ee-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.198 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.198 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbfdc82ee-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.240 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:27 compute-0 NetworkManager[48899]: <info>  [1765599447.2413] manager: (tapbfdc82ee-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Dec 13 04:17:27 compute-0 kernel: tapbfdc82ee-30: entered promiscuous mode
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.245 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.246 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbfdc82ee-30, col_values=(('external_ids', {'iface-id': '5b3ad63e-74a7-458d-893c-885bf85ae008'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.248 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:27 compute-0 ovn_controller[145204]: 2025-12-13T04:17:27Z|00162|binding|INFO|Releasing lport 5b3ad63e-74a7-458d-893c-885bf85ae008 from this chassis (sb_readonly=0)
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.273 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.275 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.276 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.279 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab385bc-7538-4216-ab7a-03958b7aa6c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.280 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.pid.haproxy
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID bfdc82ee-37dc-4f9b-b711-c6c9f87b443a
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:17:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:27.280 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'env', 'PROCESS_TAG=haproxy-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bfdc82ee-37dc-4f9b-b711-c6c9f87b443a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.342 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.343 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599447.3421834, a0bacfab-7abb-494c-b56e-2cc236181408 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.343 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] VM Started (Lifecycle Event)
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.346 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.348 243708 INFO nova.virt.libvirt.driver [-] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Instance spawned successfully.
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.349 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.366 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.371 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.372 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.372 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.372 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:27 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.373 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:27 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.373 243708 DEBUG nova.virt.libvirt.driver [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.378 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.412 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.412 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599447.342424, a0bacfab-7abb-494c-b56e-2cc236181408 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.412 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] VM Paused (Lifecycle Event)
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.436 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.440 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599447.3455603, a0bacfab-7abb-494c-b56e-2cc236181408 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.440 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] VM Resumed (Lifecycle Event)
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.458 243708 INFO nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Took 5.94 seconds to spawn the instance on the hypervisor.
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.459 243708 DEBUG nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.460 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.466 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.497 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.530 243708 INFO nova.compute.manager [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Took 7.04 seconds to build instance.
Dec 13 04:17:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1296: 305 pgs: 305 active+clean; 180 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.547 243708 DEBUG oslo_concurrency.lockutils [None req-28441eb6-b363-4942-a92f-96bfe299d823 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:27 compute-0 nova_compute[243704]: 2025-12-13 04:17:27.629 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:27 compute-0 podman[264902]: 2025-12-13 04:17:27.700156314 +0000 UTC m=+0.059137370 container create 479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:17:27 compute-0 systemd[1]: Started libpod-conmon-479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16.scope.
Dec 13 04:17:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:27 compute-0 podman[264902]: 2025-12-13 04:17:27.664861529 +0000 UTC m=+0.023842605 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:17:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd0bdf963fe28dff2050ce0eff2be92f2a19d9a21dd03fc629f932c3f988d3c9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:27 compute-0 podman[264902]: 2025-12-13 04:17:27.77692577 +0000 UTC m=+0.135906846 container init 479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:17:27 compute-0 podman[264902]: 2025-12-13 04:17:27.783390836 +0000 UTC m=+0.142371892 container start 479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 04:17:27 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [NOTICE]   (264921) : New worker (264923) forked
Dec 13 04:17:27 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [NOTICE]   (264921) : Loading success.
Dec 13 04:17:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Dec 13 04:17:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Dec 13 04:17:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.886843) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599447886941, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1372, "num_deletes": 259, "total_data_size": 1829088, "memory_usage": 1855568, "flush_reason": "Manual Compaction"}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599447898983, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1805060, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25115, "largest_seqno": 26486, "table_properties": {"data_size": 1798108, "index_size": 4027, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15362, "raw_average_key_size": 21, "raw_value_size": 1784053, "raw_average_value_size": 2470, "num_data_blocks": 176, "num_entries": 722, "num_filter_entries": 722, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599376, "oldest_key_time": 1765599376, "file_creation_time": 1765599447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 12205 microseconds, and 5319 cpu microseconds.
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.899034) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1805060 bytes OK
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.899071) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.902561) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.902623) EVENT_LOG_v1 {"time_micros": 1765599447902577, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.902643) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1822573, prev total WAL file size 1822573, number of live WAL files 2.
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.903772) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1762KB)], [56(10MB)]
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599447904225, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 12884068, "oldest_snapshot_seqno": -1}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5526 keys, 11163523 bytes, temperature: kUnknown
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599447988169, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 11163523, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11120161, "index_size": 28468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 137437, "raw_average_key_size": 24, "raw_value_size": 11014324, "raw_average_value_size": 1993, "num_data_blocks": 1171, "num_entries": 5526, "num_filter_entries": 5526, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599447, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.988529) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 11163523 bytes
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.989917) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 153.4 rd, 132.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 10.6 +0.0 blob) out(10.6 +0.0 blob), read-write-amplify(13.3) write-amplify(6.2) OK, records in: 6055, records dropped: 529 output_compression: NoCompression
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.990088) EVENT_LOG_v1 {"time_micros": 1765599447989927, "job": 30, "event": "compaction_finished", "compaction_time_micros": 84013, "compaction_time_cpu_micros": 44454, "output_level": 6, "num_output_files": 1, "total_output_size": 11163523, "num_input_records": 6055, "num_output_records": 5526, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599447990861, "job": 30, "event": "table_file_deletion", "file_number": 58}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599447993651, "job": 30, "event": "table_file_deletion", "file_number": 56}
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.903188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.993746) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.993752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.993753) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.993755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:17:27 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:17:27.993757) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:17:28 compute-0 nova_compute[243704]: 2025-12-13 04:17:28.025 243708 DEBUG nova.network.neutron [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Updated VIF entry in instance network info cache for port 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:17:28 compute-0 nova_compute[243704]: 2025-12-13 04:17:28.026 243708 DEBUG nova.network.neutron [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Updating instance_info_cache with network_info: [{"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:28 compute-0 nova_compute[243704]: 2025-12-13 04:17:28.039 243708 DEBUG oslo_concurrency.lockutils [req-40ac35ef-3d82-4357-bbf0-4f97a2cc815c req-56944f88-9a23-482d-afb9-27a53debd162 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-60ab3a2a-719a-47b1-b774-e518b4039ca5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1462989139' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1462989139' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:28 compute-0 ceph-mon[75071]: pgmap v1296: 305 pgs: 305 active+clean; 180 MiB data, 335 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 261 op/s
Dec 13 04:17:28 compute-0 ceph-mon[75071]: osdmap e294: 3 total, 3 up, 3 in
Dec 13 04:17:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1462989139' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1462989139' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:29 compute-0 nova_compute[243704]: 2025-12-13 04:17:29.166 243708 DEBUG nova.compute.manager [req-4084e246-ee92-4afc-8104-c6a308d20b1c req-776bada6-6c5e-432e-b67f-69d9ea2b9692 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:29 compute-0 nova_compute[243704]: 2025-12-13 04:17:29.167 243708 DEBUG oslo_concurrency.lockutils [req-4084e246-ee92-4afc-8104-c6a308d20b1c req-776bada6-6c5e-432e-b67f-69d9ea2b9692 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:29 compute-0 nova_compute[243704]: 2025-12-13 04:17:29.167 243708 DEBUG oslo_concurrency.lockutils [req-4084e246-ee92-4afc-8104-c6a308d20b1c req-776bada6-6c5e-432e-b67f-69d9ea2b9692 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:29 compute-0 nova_compute[243704]: 2025-12-13 04:17:29.168 243708 DEBUG oslo_concurrency.lockutils [req-4084e246-ee92-4afc-8104-c6a308d20b1c req-776bada6-6c5e-432e-b67f-69d9ea2b9692 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:29 compute-0 nova_compute[243704]: 2025-12-13 04:17:29.168 243708 DEBUG nova.compute.manager [req-4084e246-ee92-4afc-8104-c6a308d20b1c req-776bada6-6c5e-432e-b67f-69d9ea2b9692 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] No waiting events found dispatching network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:17:29 compute-0 nova_compute[243704]: 2025-12-13 04:17:29.168 243708 WARNING nova.compute.manager [req-4084e246-ee92-4afc-8104-c6a308d20b1c req-776bada6-6c5e-432e-b67f-69d9ea2b9692 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received unexpected event network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 for instance with vm_state active and task_state None.
Dec 13 04:17:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1298: 305 pgs: 305 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 1.9 MiB/s wr, 473 op/s
Dec 13 04:17:29 compute-0 podman[264932]: 2025-12-13 04:17:29.939925054 +0000 UTC m=+0.089458190 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:17:30 compute-0 nova_compute[243704]: 2025-12-13 04:17:30.146 243708 DEBUG nova.compute.manager [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-changed-e455bcda-3fde-4820-991d-0f44c010bb03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:30 compute-0 nova_compute[243704]: 2025-12-13 04:17:30.147 243708 DEBUG nova.compute.manager [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Refreshing instance network info cache due to event network-changed-e455bcda-3fde-4820-991d-0f44c010bb03. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:17:30 compute-0 nova_compute[243704]: 2025-12-13 04:17:30.147 243708 DEBUG oslo_concurrency.lockutils [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:30 compute-0 nova_compute[243704]: 2025-12-13 04:17:30.148 243708 DEBUG oslo_concurrency.lockutils [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:30 compute-0 nova_compute[243704]: 2025-12-13 04:17:30.148 243708 DEBUG nova.network.neutron [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Refreshing network info cache for port e455bcda-3fde-4820-991d-0f44c010bb03 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:17:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/391356730' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:30 compute-0 ceph-mon[75071]: pgmap v1298: 305 pgs: 305 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 1.9 MiB/s wr, 473 op/s
Dec 13 04:17:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/391356730' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:31 compute-0 nova_compute[243704]: 2025-12-13 04:17:31.090 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Dec 13 04:17:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Dec 13 04:17:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Dec 13 04:17:31 compute-0 nova_compute[243704]: 2025-12-13 04:17:31.466 243708 DEBUG nova.network.neutron [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updated VIF entry in instance network info cache for port e455bcda-3fde-4820-991d-0f44c010bb03. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:17:31 compute-0 nova_compute[243704]: 2025-12-13 04:17:31.467 243708 DEBUG nova.network.neutron [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updating instance_info_cache with network_info: [{"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 305 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 36 KiB/s wr, 269 op/s
Dec 13 04:17:31 compute-0 nova_compute[243704]: 2025-12-13 04:17:31.538 243708 DEBUG oslo_concurrency.lockutils [req-a264bcf9-e1a4-4b54-aab3-f64d36a12d37 req-ae720d4f-5b70-4b1d-b56f-0f73ee3243e3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Dec 13 04:17:32 compute-0 ceph-mon[75071]: osdmap e295: 3 total, 3 up, 3 in
Dec 13 04:17:32 compute-0 ceph-mon[75071]: pgmap v1300: 305 pgs: 305 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 36 KiB/s wr, 269 op/s
Dec 13 04:17:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Dec 13 04:17:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Dec 13 04:17:32 compute-0 nova_compute[243704]: 2025-12-13 04:17:32.630 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 247 op/s
Dec 13 04:17:33 compute-0 ceph-mon[75071]: osdmap e296: 3 total, 3 up, 3 in
Dec 13 04:17:33 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 04:17:34 compute-0 ceph-mon[75071]: pgmap v1302: 305 pgs: 305 active+clean; 181 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 33 KiB/s wr, 247 op/s
Dec 13 04:17:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:35.092 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:35.093 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:35.093 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Dec 13 04:17:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Dec 13 04:17:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Dec 13 04:17:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 192 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 2.7 MiB/s wr, 70 op/s
Dec 13 04:17:36 compute-0 nova_compute[243704]: 2025-12-13 04:17:36.104 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:36 compute-0 ceph-mon[75071]: osdmap e297: 3 total, 3 up, 3 in
Dec 13 04:17:36 compute-0 ceph-mon[75071]: pgmap v1304: 305 pgs: 305 active+clean; 192 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 2.7 MiB/s wr, 70 op/s
Dec 13 04:17:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:36 compute-0 ovn_controller[145204]: 2025-12-13T04:17:36Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b1:4a:39 10.100.0.7
Dec 13 04:17:36 compute-0 ovn_controller[145204]: 2025-12-13T04:17:36Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b1:4a:39 10.100.0.7
Dec 13 04:17:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:36 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1524641901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:36 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1524641901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1524641901' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1524641901' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1305: 305 pgs: 305 active+clean; 192 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 2.7 MiB/s wr, 69 op/s
Dec 13 04:17:37 compute-0 nova_compute[243704]: 2025-12-13 04:17:37.633 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:38 compute-0 ceph-mon[75071]: pgmap v1305: 305 pgs: 305 active+clean; 192 MiB data, 350 MiB used, 60 GiB / 60 GiB avail; 87 KiB/s rd, 2.7 MiB/s wr, 69 op/s
Dec 13 04:17:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 215 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 152 op/s
Dec 13 04:17:39 compute-0 podman[264958]: 2025-12-13 04:17:39.937178872 +0000 UTC m=+0.072961044 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 04:17:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:17:40
Dec 13 04:17:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:17:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:17:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'vms', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'volumes', 'images', 'default.rgw.log', '.rgw.root']
Dec 13 04:17:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:17:41 compute-0 nova_compute[243704]: 2025-12-13 04:17:41.107 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385225324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:41 compute-0 ceph-mon[75071]: pgmap v1306: 305 pgs: 305 active+clean; 215 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 152 op/s
Dec 13 04:17:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2385225324' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Dec 13 04:17:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Dec 13 04:17:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Dec 13 04:17:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 215 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 152 op/s
Dec 13 04:17:41 compute-0 ovn_controller[145204]: 2025-12-13T04:17:41Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:00:dd:ba 10.100.0.14
Dec 13 04:17:41 compute-0 ovn_controller[145204]: 2025-12-13T04:17:41Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:00:dd:ba 10.100.0.14
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:17:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Dec 13 04:17:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Dec 13 04:17:42 compute-0 ceph-mon[75071]: osdmap e298: 3 total, 3 up, 3 in
Dec 13 04:17:42 compute-0 ceph-mon[75071]: pgmap v1308: 305 pgs: 305 active+clean; 215 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 152 op/s
Dec 13 04:17:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.506 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.506 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.507 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.507 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.507 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.508 243708 INFO nova.compute.manager [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Terminating instance
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.509 243708 DEBUG nova.compute.manager [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:17:42 compute-0 kernel: tap3ef263f2-9b (unregistering): left promiscuous mode
Dec 13 04:17:42 compute-0 NetworkManager[48899]: <info>  [1765599462.5475] device (tap3ef263f2-9b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:17:42 compute-0 ovn_controller[145204]: 2025-12-13T04:17:42Z|00163|binding|INFO|Releasing lport 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 from this chassis (sb_readonly=0)
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.558 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 ovn_controller[145204]: 2025-12-13T04:17:42Z|00164|binding|INFO|Setting lport 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 down in Southbound
Dec 13 04:17:42 compute-0 ovn_controller[145204]: 2025-12-13T04:17:42Z|00165|binding|INFO|Removing iface tap3ef263f2-9b ovn-installed in OVS
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.563 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.566 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b1:4a:39 10.100.0.7'], port_security=['fa:16:3e:b1:4a:39 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '60ab3a2a-719a-47b1-b774-e518b4039ca5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'adaa204c-5288-4148-9761-e3b0718cf559', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.567 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.569 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.571 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e6632a0e-e363-4c9b-80e3-01ddeb0af213]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.572 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace which is not needed anymore
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.580 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Dec 13 04:17:42 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000f.scope: Consumed 13.820s CPU time.
Dec 13 04:17:42 compute-0 systemd-machined[206767]: Machine qemu-15-instance-0000000f terminated.
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.634 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[264434]: [NOTICE]   (264504) : haproxy version is 2.8.14-c23fe91
Dec 13 04:17:42 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[264434]: [NOTICE]   (264504) : path to executable is /usr/sbin/haproxy
Dec 13 04:17:42 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[264434]: [WARNING]  (264504) : Exiting Master process...
Dec 13 04:17:42 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[264434]: [ALERT]    (264504) : Current worker (264512) exited with code 143 (Terminated)
Dec 13 04:17:42 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[264434]: [WARNING]  (264504) : All workers exited. Exiting... (0)
Dec 13 04:17:42 compute-0 systemd[1]: libpod-56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104.scope: Deactivated successfully.
Dec 13 04:17:42 compute-0 podman[265002]: 2025-12-13 04:17:42.72846541 +0000 UTC m=+0.052635155 container died 56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.752 243708 INFO nova.virt.libvirt.driver [-] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Instance destroyed successfully.
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.753 243708 DEBUG nova.objects.instance [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'resources' on Instance uuid 60ab3a2a-719a-47b1-b774-e518b4039ca5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104-userdata-shm.mount: Deactivated successfully.
Dec 13 04:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-82918e87b30d7357b15f208afee80b67583c2625fc2c6575afa9a063f2ef462e-merged.mount: Deactivated successfully.
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.764 243708 DEBUG nova.virt.libvirt.vif [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:17:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1896760900',display_name='tempest-TestVolumeBootPattern-server-1896760900',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1896760900',id=15,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:17:21Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-5sr1qef2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:17:21Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=60ab3a2a-719a-47b1-b774-e518b4039ca5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.766 243708 DEBUG nova.network.os_vif_util [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "address": "fa:16:3e:b1:4a:39", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ef263f2-9b", "ovs_interfaceid": "3ef263f2-9b9c-40f2-bc79-8f687b0e16f8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.773 243708 DEBUG nova.network.os_vif_util [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b1:4a:39,bridge_name='br-int',has_traffic_filtering=True,id=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ef263f2-9b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.776 243708 DEBUG os_vif [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:4a:39,bridge_name='br-int',has_traffic_filtering=True,id=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ef263f2-9b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.779 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.779 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ef263f2-9b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:42 compute-0 podman[265002]: 2025-12-13 04:17:42.780513877 +0000 UTC m=+0.104683622 container cleanup 56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.781 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.783 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.787 243708 INFO os_vif [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b1:4a:39,bridge_name='br-int',has_traffic_filtering=True,id=3ef263f2-9b9c-40f2-bc79-8f687b0e16f8,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ef263f2-9b')
Dec 13 04:17:42 compute-0 systemd[1]: libpod-conmon-56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104.scope: Deactivated successfully.
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.818 243708 DEBUG nova.compute.manager [req-adc265bc-7072-477e-985b-cd103d742fc5 req-fe49220f-ce84-4d69-9b67-d70ede970845 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-vif-unplugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.819 243708 DEBUG oslo_concurrency.lockutils [req-adc265bc-7072-477e-985b-cd103d742fc5 req-fe49220f-ce84-4d69-9b67-d70ede970845 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.819 243708 DEBUG oslo_concurrency.lockutils [req-adc265bc-7072-477e-985b-cd103d742fc5 req-fe49220f-ce84-4d69-9b67-d70ede970845 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.819 243708 DEBUG oslo_concurrency.lockutils [req-adc265bc-7072-477e-985b-cd103d742fc5 req-fe49220f-ce84-4d69-9b67-d70ede970845 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.819 243708 DEBUG nova.compute.manager [req-adc265bc-7072-477e-985b-cd103d742fc5 req-fe49220f-ce84-4d69-9b67-d70ede970845 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] No waiting events found dispatching network-vif-unplugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.820 243708 DEBUG nova.compute.manager [req-adc265bc-7072-477e-985b-cd103d742fc5 req-fe49220f-ce84-4d69-9b67-d70ede970845 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-vif-unplugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:17:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:17:42 compute-0 podman[265040]: 2025-12-13 04:17:42.860606534 +0000 UTC m=+0.053935500 container remove 56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.866 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9299f0ff-015d-4579-99f0-d35e8b9c08db]: (4, ('Sat Dec 13 04:17:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104)\n56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104\nSat Dec 13 04:17:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104)\n56b582233587dcc361c9c6af909bf32c697a8617c09fb32b1c585b76c64a8104\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.869 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[cca31ed8-75c9-4e2c-88c5-656d1580ac67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.870 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:42 compute-0 kernel: tapfc553cd2-50: left promiscuous mode
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.871 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.889 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.894 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[209b6e4c-db7d-4390-9ed9-c3bcf08b29b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.908 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[49a6da61-2dd8-4f7b-b75a-b4228753f69a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.912 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8013103a-da2f-4e85-9aa7-9809182dff9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.928 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e2185214-e83c-45b9-b916-3f3df459c3ad]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 417939, 'reachable_time': 43913, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265071, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc553cd2\x2d5dd5\x2d4d87\x2d97af\x2d4b4eeb4ca0b0.mount: Deactivated successfully.
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.933 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:17:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:42.933 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[00c1e6c1-6d6b-4155-a99b-8f14b37527b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.947 243708 INFO nova.virt.libvirt.driver [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Deleting instance files /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5_del
Dec 13 04:17:42 compute-0 nova_compute[243704]: 2025-12-13 04:17:42.948 243708 INFO nova.virt.libvirt.driver [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Deletion of /var/lib/nova/instances/60ab3a2a-719a-47b1-b774-e518b4039ca5_del complete
Dec 13 04:17:43 compute-0 nova_compute[243704]: 2025-12-13 04:17:43.006 243708 INFO nova.compute.manager [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Took 0.50 seconds to destroy the instance on the hypervisor.
Dec 13 04:17:43 compute-0 nova_compute[243704]: 2025-12-13 04:17:43.006 243708 DEBUG oslo.service.loopingcall [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:17:43 compute-0 nova_compute[243704]: 2025-12-13 04:17:43.007 243708 DEBUG nova.compute.manager [-] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:17:43 compute-0 nova_compute[243704]: 2025-12-13 04:17:43.007 243708 DEBUG nova.network.neutron [-] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:17:43 compute-0 ceph-mon[75071]: osdmap e299: 3 total, 3 up, 3 in
Dec 13 04:17:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1310: 305 pgs: 305 active+clean; 215 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 503 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Dec 13 04:17:43 compute-0 nova_compute[243704]: 2025-12-13 04:17:43.841 243708 DEBUG nova.network.neutron [-] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:43 compute-0 nova_compute[243704]: 2025-12-13 04:17:43.858 243708 INFO nova.compute.manager [-] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Took 0.85 seconds to deallocate network for instance.
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.016 243708 INFO nova.compute.manager [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Took 0.16 seconds to detach 1 volumes for instance.
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.073 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.074 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.150 243708 DEBUG oslo_concurrency.processutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Dec 13 04:17:44 compute-0 ceph-mon[75071]: pgmap v1310: 305 pgs: 305 active+clean; 215 MiB data, 379 MiB used, 60 GiB / 60 GiB avail; 503 KiB/s rd, 1.5 MiB/s wr, 99 op/s
Dec 13 04:17:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Dec 13 04:17:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Dec 13 04:17:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743801517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.728 243708 DEBUG oslo_concurrency.processutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.734 243708 DEBUG nova.compute.provider_tree [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.750 243708 DEBUG nova.scheduler.client.report [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:17:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269887510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1269887510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.770 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.795 243708 INFO nova.scheduler.client.report [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Deleted allocations for instance 60ab3a2a-719a-47b1-b774-e518b4039ca5
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.861 243708 DEBUG oslo_concurrency.lockutils [None req-14f184bb-06cf-4722-97c3-a1d41d36a11c 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.355s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.875 243708 DEBUG nova.compute.manager [req-94b043cd-94bd-4de1-b5c4-10d70be999f1 req-c29b3302-4dc7-44df-a251-6740bbb08f6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.875 243708 DEBUG oslo_concurrency.lockutils [req-94b043cd-94bd-4de1-b5c4-10d70be999f1 req-c29b3302-4dc7-44df-a251-6740bbb08f6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.875 243708 DEBUG oslo_concurrency.lockutils [req-94b043cd-94bd-4de1-b5c4-10d70be999f1 req-c29b3302-4dc7-44df-a251-6740bbb08f6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.875 243708 DEBUG oslo_concurrency.lockutils [req-94b043cd-94bd-4de1-b5c4-10d70be999f1 req-c29b3302-4dc7-44df-a251-6740bbb08f6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "60ab3a2a-719a-47b1-b774-e518b4039ca5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.876 243708 DEBUG nova.compute.manager [req-94b043cd-94bd-4de1-b5c4-10d70be999f1 req-c29b3302-4dc7-44df-a251-6740bbb08f6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] No waiting events found dispatching network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.876 243708 WARNING nova.compute.manager [req-94b043cd-94bd-4de1-b5c4-10d70be999f1 req-c29b3302-4dc7-44df-a251-6740bbb08f6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received unexpected event network-vif-plugged-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 for instance with vm_state deleted and task_state None.
Dec 13 04:17:44 compute-0 nova_compute[243704]: 2025-12-13 04:17:44.876 243708 DEBUG nova.compute.manager [req-94b043cd-94bd-4de1-b5c4-10d70be999f1 req-c29b3302-4dc7-44df-a251-6740bbb08f6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Received event network-vif-deleted-3ef263f2-9b9c-40f2-bc79-8f687b0e16f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:45 compute-0 ceph-mon[75071]: osdmap e300: 3 total, 3 up, 3 in
Dec 13 04:17:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3743801517' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1269887510' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1269887510' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 694 KiB/s rd, 3.8 MiB/s wr, 186 op/s
Dec 13 04:17:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4184680735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4184680735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:45 compute-0 sudo[265096]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:17:45 compute-0 sudo[265096]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:45 compute-0 sudo[265096]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:45 compute-0 sudo[265121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:17:45 compute-0 sudo[265121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:46 compute-0 ceph-mon[75071]: pgmap v1312: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 694 KiB/s rd, 3.8 MiB/s wr, 186 op/s
Dec 13 04:17:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4184680735' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4184680735' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:46 compute-0 sudo[265121]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:17:46 compute-0 sudo[265177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:17:46 compute-0 sudo[265177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:46 compute-0 sudo[265177]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:46 compute-0 sudo[265202]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:17:46 compute-0 sudo[265202]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/170272418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:47 compute-0 podman[265239]: 2025-12-13 04:17:47.074363803 +0000 UTC m=+0.068333329 container create 4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:17:47 compute-0 systemd[1]: Started libpod-conmon-4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac.scope.
Dec 13 04:17:47 compute-0 podman[265239]: 2025-12-13 04:17:47.04654965 +0000 UTC m=+0.040519276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:47 compute-0 podman[265239]: 2025-12-13 04:17:47.166866635 +0000 UTC m=+0.160836201 container init 4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Dec 13 04:17:47 compute-0 podman[265239]: 2025-12-13 04:17:47.178314704 +0000 UTC m=+0.172284240 container start 4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:17:47 compute-0 podman[265239]: 2025-12-13 04:17:47.181680865 +0000 UTC m=+0.175650451 container attach 4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:17:47 compute-0 quizzical_gould[265256]: 167 167
Dec 13 04:17:47 compute-0 systemd[1]: libpod-4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac.scope: Deactivated successfully.
Dec 13 04:17:47 compute-0 podman[265239]: 2025-12-13 04:17:47.19368079 +0000 UTC m=+0.187650336 container died 4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:17:47 compute-0 podman[265253]: 2025-12-13 04:17:47.200540165 +0000 UTC m=+0.083999073 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 04:17:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-f4019661b3f0b86dfb1b591531bf938cb537d52e2de3d54cf3c6dba1005a26b5-merged.mount: Deactivated successfully.
Dec 13 04:17:47 compute-0 podman[265239]: 2025-12-13 04:17:47.230381003 +0000 UTC m=+0.224350559 container remove 4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:17:47 compute-0 systemd[1]: libpod-conmon-4747464b224f0b1d8d88543ba55c84bba461b0cee3d509cc0730edbd0e4677ac.scope: Deactivated successfully.
Dec 13 04:17:47 compute-0 podman[265299]: 2025-12-13 04:17:47.427620737 +0000 UTC m=+0.048435121 container create 1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:17:47 compute-0 systemd[1]: Started libpod-conmon-1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328.scope.
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:17:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/170272418' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:47 compute-0 podman[265299]: 2025-12-13 04:17:47.407189405 +0000 UTC m=+0.028003809 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45da2b218671abf82df700882a84b7be13cde91b0b6af61b5d5f72ed38d4dc62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45da2b218671abf82df700882a84b7be13cde91b0b6af61b5d5f72ed38d4dc62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45da2b218671abf82df700882a84b7be13cde91b0b6af61b5d5f72ed38d4dc62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45da2b218671abf82df700882a84b7be13cde91b0b6af61b5d5f72ed38d4dc62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45da2b218671abf82df700882a84b7be13cde91b0b6af61b5d5f72ed38d4dc62/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:47 compute-0 podman[265299]: 2025-12-13 04:17:47.521208508 +0000 UTC m=+0.142022942 container init 1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:17:47 compute-0 podman[265299]: 2025-12-13 04:17:47.531362543 +0000 UTC m=+0.152176927 container start 1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_wu, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:17:47 compute-0 podman[265299]: 2025-12-13 04:17:47.535746892 +0000 UTC m=+0.156561276 container attach 1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:17:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1313: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 684 KiB/s rd, 3.8 MiB/s wr, 183 op/s
Dec 13 04:17:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Dec 13 04:17:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Dec 13 04:17:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Dec 13 04:17:47 compute-0 nova_compute[243704]: 2025-12-13 04:17:47.639 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:47 compute-0 nova_compute[243704]: 2025-12-13 04:17:47.782 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:48 compute-0 ecstatic_wu[265315]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:17:48 compute-0 ecstatic_wu[265315]: --> All data devices are unavailable
Dec 13 04:17:48 compute-0 systemd[1]: libpod-1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328.scope: Deactivated successfully.
Dec 13 04:17:48 compute-0 podman[265299]: 2025-12-13 04:17:48.076881698 +0000 UTC m=+0.697696082 container died 1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_wu, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-45da2b218671abf82df700882a84b7be13cde91b0b6af61b5d5f72ed38d4dc62-merged.mount: Deactivated successfully.
Dec 13 04:17:48 compute-0 podman[265299]: 2025-12-13 04:17:48.125220765 +0000 UTC m=+0.746035149 container remove 1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_wu, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:17:48 compute-0 systemd[1]: libpod-conmon-1fdd72b80c5644ccd156c34d3d857567c53d40073093363732e23b10885ca328.scope: Deactivated successfully.
Dec 13 04:17:48 compute-0 sudo[265202]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:48 compute-0 sudo[265347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:17:48 compute-0 sudo[265347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:48 compute-0 sudo[265347]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.252 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.253 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.267 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:17:48 compute-0 sudo[265372]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:17:48 compute-0 sudo[265372]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.321 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.322 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.331 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.331 243708 INFO nova.compute.claims [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:17:48 compute-0 nova_compute[243704]: 2025-12-13 04:17:48.519 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:48 compute-0 podman[265410]: 2025-12-13 04:17:48.595908796 +0000 UTC m=+0.029872489 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:48 compute-0 podman[265410]: 2025-12-13 04:17:48.721351269 +0000 UTC m=+0.155314912 container create c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Dec 13 04:17:48 compute-0 ceph-mon[75071]: pgmap v1313: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 684 KiB/s rd, 3.8 MiB/s wr, 183 op/s
Dec 13 04:17:48 compute-0 ceph-mon[75071]: osdmap e301: 3 total, 3 up, 3 in
Dec 13 04:17:48 compute-0 systemd[1]: Started libpod-conmon-c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0.scope.
Dec 13 04:17:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:48 compute-0 podman[265410]: 2025-12-13 04:17:48.794627541 +0000 UTC m=+0.228591204 container init c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:17:48 compute-0 podman[265410]: 2025-12-13 04:17:48.801842546 +0000 UTC m=+0.235806189 container start c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kepler, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:17:48 compute-0 eager_kepler[265445]: 167 167
Dec 13 04:17:48 compute-0 systemd[1]: libpod-c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0.scope: Deactivated successfully.
Dec 13 04:17:48 compute-0 podman[265410]: 2025-12-13 04:17:48.807174 +0000 UTC m=+0.241137673 container attach c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:17:48 compute-0 podman[265410]: 2025-12-13 04:17:48.807728425 +0000 UTC m=+0.241692088 container died c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:17:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b6acd87b341e7cf9b9d1d44f9b55fcb968795bb5b94542f3bdce69b9f997ea3-merged.mount: Deactivated successfully.
Dec 13 04:17:48 compute-0 podman[265410]: 2025-12-13 04:17:48.85300096 +0000 UTC m=+0.286964613 container remove c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_kepler, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:17:48 compute-0 systemd[1]: libpod-conmon-c0fc651934ebb3ee77340568106370fec42c47746c90dfba6565d8ae48fc49a0.scope: Deactivated successfully.
Dec 13 04:17:49 compute-0 podman[265469]: 2025-12-13 04:17:49.098306404 +0000 UTC m=+0.061616418 container create 75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:17:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:17:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4293089970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:49 compute-0 systemd[1]: Started libpod-conmon-75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594.scope.
Dec 13 04:17:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.151 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.631s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078b0360374744562c8d8440dc67e2a25a25ed1d0a1a415f415371fc4ff59b5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:49 compute-0 podman[265469]: 2025-12-13 04:17:49.072323631 +0000 UTC m=+0.035633705 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078b0360374744562c8d8440dc67e2a25a25ed1d0a1a415f415371fc4ff59b5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078b0360374744562c8d8440dc67e2a25a25ed1d0a1a415f415371fc4ff59b5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/078b0360374744562c8d8440dc67e2a25a25ed1d0a1a415f415371fc4ff59b5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.158 243708 DEBUG nova.compute.provider_tree [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:17:49 compute-0 podman[265469]: 2025-12-13 04:17:49.169678655 +0000 UTC m=+0.132988699 container init 75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:17:49 compute-0 podman[265469]: 2025-12-13 04:17:49.177173798 +0000 UTC m=+0.140483812 container start 75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.179 243708 DEBUG nova.scheduler.client.report [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:17:49 compute-0 podman[265469]: 2025-12-13 04:17:49.18094737 +0000 UTC m=+0.144257384 container attach 75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.198 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.199 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.238 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.238 243708 DEBUG nova.network.neutron [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.263 243708 INFO nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.284 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.358 243708 INFO nova.virt.block_device [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Booting with volume 2537ca78-db6c-4c72-bb72-81e5382d8879 at /dev/vda
Dec 13 04:17:49 compute-0 funny_wu[265487]: {
Dec 13 04:17:49 compute-0 funny_wu[265487]:     "0": [
Dec 13 04:17:49 compute-0 funny_wu[265487]:         {
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "devices": [
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "/dev/loop3"
Dec 13 04:17:49 compute-0 funny_wu[265487]:             ],
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_name": "ceph_lv0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_size": "21470642176",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "name": "ceph_lv0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "tags": {
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cluster_name": "ceph",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.crush_device_class": "",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.encrypted": "0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.objectstore": "bluestore",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osd_id": "0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.type": "block",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.vdo": "0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.with_tpm": "0"
Dec 13 04:17:49 compute-0 funny_wu[265487]:             },
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "type": "block",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "vg_name": "ceph_vg0"
Dec 13 04:17:49 compute-0 funny_wu[265487]:         }
Dec 13 04:17:49 compute-0 funny_wu[265487]:     ],
Dec 13 04:17:49 compute-0 funny_wu[265487]:     "1": [
Dec 13 04:17:49 compute-0 funny_wu[265487]:         {
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "devices": [
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "/dev/loop4"
Dec 13 04:17:49 compute-0 funny_wu[265487]:             ],
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_name": "ceph_lv1",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_size": "21470642176",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "name": "ceph_lv1",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "tags": {
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cluster_name": "ceph",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.crush_device_class": "",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.encrypted": "0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.objectstore": "bluestore",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osd_id": "1",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.type": "block",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.vdo": "0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.with_tpm": "0"
Dec 13 04:17:49 compute-0 funny_wu[265487]:             },
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "type": "block",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "vg_name": "ceph_vg1"
Dec 13 04:17:49 compute-0 funny_wu[265487]:         }
Dec 13 04:17:49 compute-0 funny_wu[265487]:     ],
Dec 13 04:17:49 compute-0 funny_wu[265487]:     "2": [
Dec 13 04:17:49 compute-0 funny_wu[265487]:         {
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "devices": [
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "/dev/loop5"
Dec 13 04:17:49 compute-0 funny_wu[265487]:             ],
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_name": "ceph_lv2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_size": "21470642176",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "name": "ceph_lv2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "tags": {
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.cluster_name": "ceph",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.crush_device_class": "",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.encrypted": "0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.objectstore": "bluestore",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osd_id": "2",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.type": "block",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.vdo": "0",
Dec 13 04:17:49 compute-0 funny_wu[265487]:                 "ceph.with_tpm": "0"
Dec 13 04:17:49 compute-0 funny_wu[265487]:             },
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "type": "block",
Dec 13 04:17:49 compute-0 funny_wu[265487]:             "vg_name": "ceph_vg2"
Dec 13 04:17:49 compute-0 funny_wu[265487]:         }
Dec 13 04:17:49 compute-0 funny_wu[265487]:     ]
Dec 13 04:17:49 compute-0 funny_wu[265487]: }
Dec 13 04:17:49 compute-0 systemd[1]: libpod-75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594.scope: Deactivated successfully.
Dec 13 04:17:49 compute-0 podman[265469]: 2025-12-13 04:17:49.53957243 +0000 UTC m=+0.502882484 container died 75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:17:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1315: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 644 KiB/s rd, 3.3 MiB/s wr, 232 op/s
Dec 13 04:17:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-078b0360374744562c8d8440dc67e2a25a25ed1d0a1a415f415371fc4ff59b5e-merged.mount: Deactivated successfully.
Dec 13 04:17:49 compute-0 podman[265469]: 2025-12-13 04:17:49.592921992 +0000 UTC m=+0.556232006 container remove 75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_wu, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:17:49 compute-0 systemd[1]: libpod-conmon-75742742f1cf23327199767b322bb1144ec864e8bea4dc614f5fb8f4973db594.scope: Deactivated successfully.
Dec 13 04:17:49 compute-0 sudo[265372]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.679 243708 DEBUG nova.policy [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b8c4a2342e4420d8140b403edbcba5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27927978f9684df1a72cecb32505e93b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.696 243708 DEBUG os_brick.utils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.698 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.713 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.714 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[2fec54a0-b440-4957-ba43-c3bfaaf94376]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.717 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:49 compute-0 sudo[265511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:17:49 compute-0 sudo[265511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.726 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.727 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[cce1c940-c154-49d4-a310-815905856501]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:49 compute-0 sudo[265511]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.730 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Dec 13 04:17:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4293089970' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.740 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.741 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[defc223c-36ca-4d2c-be03-f86a3e7c76b3]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.743 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1f003c-953c-4c53-b978-04323980e5f2]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.745 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Dec 13 04:17:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.772 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "nvme version" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.774 243708 DEBUG os_brick.initiator.connectors.lightos [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.774 243708 DEBUG os_brick.initiator.connectors.lightos [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.775 243708 DEBUG os_brick.initiator.connectors.lightos [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.775 243708 DEBUG os_brick.utils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:17:49 compute-0 nova_compute[243704]: 2025-12-13 04:17:49.775 243708 DEBUG nova.virt.block_device [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updating existing volume attachment record: d1525d54-822e-47d6-bc89-9f68acb8188e _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:17:49 compute-0 sudo[265542]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:17:49 compute-0 sudo[265542]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:17:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653210416' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:17:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3653210416' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:50 compute-0 podman[265582]: 2025-12-13 04:17:50.089569675 +0000 UTC m=+0.055691587 container create c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:17:50 compute-0 systemd[1]: Started libpod-conmon-c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba.scope.
Dec 13 04:17:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:50 compute-0 podman[265582]: 2025-12-13 04:17:50.062480673 +0000 UTC m=+0.028602675 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:50 compute-0 podman[265582]: 2025-12-13 04:17:50.168763958 +0000 UTC m=+0.134885890 container init c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_curie, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:17:50 compute-0 podman[265582]: 2025-12-13 04:17:50.17551458 +0000 UTC m=+0.141636502 container start c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_curie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:17:50 compute-0 podman[265582]: 2025-12-13 04:17:50.179023915 +0000 UTC m=+0.145145937 container attach c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_curie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:17:50 compute-0 stupefied_curie[265599]: 167 167
Dec 13 04:17:50 compute-0 systemd[1]: libpod-c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba.scope: Deactivated successfully.
Dec 13 04:17:50 compute-0 podman[265582]: 2025-12-13 04:17:50.182105128 +0000 UTC m=+0.148227080 container died c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_curie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:17:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6abc43d58ea2b3c1ca9390dc1a9d27986929fa736f1d3e038502a8f2b4642a2-merged.mount: Deactivated successfully.
Dec 13 04:17:50 compute-0 podman[265582]: 2025-12-13 04:17:50.218592645 +0000 UTC m=+0.184714557 container remove c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_curie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:17:50 compute-0 systemd[1]: libpod-conmon-c2832cba9e213552daccd881969a864a1ab0fbbd49c67e39c53a6901bf41f5ba.scope: Deactivated successfully.
Dec 13 04:17:50 compute-0 podman[265622]: 2025-12-13 04:17:50.39918401 +0000 UTC m=+0.042871661 container create fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_dijkstra, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.402 243708 DEBUG nova.network.neutron [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Successfully created port: ad72d283-b1a5-4889-9e04-0297897b4cad _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:17:50 compute-0 systemd[1]: Started libpod-conmon-fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e.scope.
Dec 13 04:17:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4743025daf55588a59a4942a0cef5f91bb9d9aece6666e66a1dcb9362223c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4743025daf55588a59a4942a0cef5f91bb9d9aece6666e66a1dcb9362223c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4743025daf55588a59a4942a0cef5f91bb9d9aece6666e66a1dcb9362223c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/de4743025daf55588a59a4942a0cef5f91bb9d9aece6666e66a1dcb9362223c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:50 compute-0 podman[265622]: 2025-12-13 04:17:50.475210736 +0000 UTC m=+0.118898397 container init fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True)
Dec 13 04:17:50 compute-0 podman[265622]: 2025-12-13 04:17:50.38073061 +0000 UTC m=+0.024418271 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:17:50 compute-0 podman[265622]: 2025-12-13 04:17:50.482902294 +0000 UTC m=+0.126589955 container start fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_dijkstra, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:17:50 compute-0 podman[265622]: 2025-12-13 04:17:50.486241764 +0000 UTC m=+0.129929405 container attach fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_dijkstra, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:17:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/182923821' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:50 compute-0 ceph-mon[75071]: pgmap v1315: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 644 KiB/s rd, 3.3 MiB/s wr, 232 op/s
Dec 13 04:17:50 compute-0 ceph-mon[75071]: osdmap e302: 3 total, 3 up, 3 in
Dec 13 04:17:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3653210416' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:17:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3653210416' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:17:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/182923821' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.853 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.855 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.855 243708 INFO nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Creating image(s)
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.855 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.855 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Ensure instance console log exists: /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.856 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.856 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:50 compute-0 nova_compute[243704]: 2025-12-13 04:17:50.856 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:51 compute-0 lvm[265714]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:17:51 compute-0 lvm[265714]: VG ceph_vg0 finished
Dec 13 04:17:51 compute-0 lvm[265717]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:17:51 compute-0 lvm[265717]: VG ceph_vg1 finished
Dec 13 04:17:51 compute-0 lvm[265719]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:17:51 compute-0 lvm[265719]: VG ceph_vg2 finished
Dec 13 04:17:51 compute-0 competent_dijkstra[265638]: {}
Dec 13 04:17:51 compute-0 systemd[1]: libpod-fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e.scope: Deactivated successfully.
Dec 13 04:17:51 compute-0 systemd[1]: libpod-fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e.scope: Consumed 1.322s CPU time.
Dec 13 04:17:51 compute-0 podman[265622]: 2025-12-13 04:17:51.320664443 +0000 UTC m=+0.964352084 container died fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:17:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Dec 13 04:17:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 24 KiB/s wr, 75 op/s
Dec 13 04:17:51 compute-0 nova_compute[243704]: 2025-12-13 04:17:51.874 243708 DEBUG nova.network.neutron [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Successfully updated port: ad72d283-b1a5-4889-9e04-0297897b4cad _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:17:51 compute-0 nova_compute[243704]: 2025-12-13 04:17:51.968 243708 DEBUG nova.compute.manager [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-changed-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:51 compute-0 nova_compute[243704]: 2025-12-13 04:17:51.969 243708 DEBUG nova.compute.manager [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Refreshing instance network info cache due to event network-changed-ad72d283-b1a5-4889-9e04-0297897b4cad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:17:51 compute-0 nova_compute[243704]: 2025-12-13 04:17:51.969 243708 DEBUG oslo_concurrency.lockutils [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:51 compute-0 nova_compute[243704]: 2025-12-13 04:17:51.969 243708 DEBUG oslo_concurrency.lockutils [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:51 compute-0 nova_compute[243704]: 2025-12-13 04:17:51.969 243708 DEBUG nova.network.neutron [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Refreshing network info cache for port ad72d283-b1a5-4889-9e04-0297897b4cad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.151 243708 DEBUG nova.network.neutron [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.194 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:17:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Dec 13 04:17:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Dec 13 04:17:52 compute-0 ceph-mon[75071]: pgmap v1317: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 24 KiB/s wr, 75 op/s
Dec 13 04:17:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-de4743025daf55588a59a4942a0cef5f91bb9d9aece6666e66a1dcb9362223c9-merged.mount: Deactivated successfully.
Dec 13 04:17:52 compute-0 podman[265622]: 2025-12-13 04:17:52.36649271 +0000 UTC m=+2.010180351 container remove fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:17:52 compute-0 systemd[1]: libpod-conmon-fb2abe9e9b4ed9495949f493bae40717e6565f43953660dedeeb023a3a200d0e.scope: Deactivated successfully.
Dec 13 04:17:52 compute-0 sudo[265542]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:17:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:17:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:17:52 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007616864864098231 of space, bias 1.0, pg target 0.22850594592294693 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011118037894561265 of space, bias 1.0, pg target 0.33354113683683795 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.367806836730794e-06 of space, bias 1.0, pg target 0.0007103420510192382 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665917061390088 of space, bias 1.0, pg target 0.19997751184170265 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1829020510181253e-06 of space, bias 4.0, pg target 0.0014194824612217504 quantized to 16 (current 16)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:17:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.479 243708 DEBUG nova.network.neutron [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:52 compute-0 sudo[265734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:17:52 compute-0 sudo[265734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.490 243708 DEBUG oslo_concurrency.lockutils [req-b8a370f2-d818-46e0-ba7c-050f3bec6618 req-cbe590e3-5d4a-41ce-b4f2-5274ee50aa01 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.491 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquired lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.491 243708 DEBUG nova.network.neutron [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:17:52 compute-0 sudo[265734]: pam_unix(sudo:session): session closed for user root
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.602 243708 DEBUG nova.network.neutron [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.639 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:52 compute-0 nova_compute[243704]: 2025-12-13 04:17:52.784 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.160 243708 DEBUG nova.network.neutron [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updating instance_info_cache with network_info: [{"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.175 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Releasing lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.175 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Instance network_info: |[{"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.178 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Start _get_guest_xml network_info=[{"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-2537ca78-db6c-4c72-bb72-81e5382d8879', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '2537ca78-db6c-4c72-bb72-81e5382d8879', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': '5963e695-3cc7-4994-977e-b08fa7a682a1', 'attached_at': '', 'detached_at': '', 'volume_id': '2537ca78-db6c-4c72-bb72-81e5382d8879', 'serial': '2537ca78-db6c-4c72-bb72-81e5382d8879'}, 'disk_bus': 'virtio', 'attachment_id': 'd1525d54-822e-47d6-bc89-9f68acb8188e', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.184 243708 WARNING nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.189 243708 DEBUG nova.virt.libvirt.host [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.190 243708 DEBUG nova.virt.libvirt.host [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.193 243708 DEBUG nova.virt.libvirt.host [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.194 243708 DEBUG nova.virt.libvirt.host [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.194 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.194 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.195 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.195 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.195 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.196 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.196 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.196 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.196 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.197 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.197 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.197 243708 DEBUG nova.virt.hardware [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.226 243708 DEBUG nova.storage.rbd_utils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 5963e695-3cc7-4994-977e-b08fa7a682a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.230 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Dec 13 04:17:53 compute-0 ceph-mon[75071]: osdmap e303: 3 total, 3 up, 3 in
Dec 13 04:17:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:17:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:17:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Dec 13 04:17:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Dec 13 04:17:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 28 KiB/s wr, 89 op/s
Dec 13 04:17:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:17:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4033780222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.822 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.850 243708 DEBUG nova.virt.libvirt.vif [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:17:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2070020833',display_name='tempest-TestVolumeBootPattern-server-2070020833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2070020833',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-xj9k696q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:17:49Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=5963e695-3cc7-4994-977e-b08fa7a682a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.850 243708 DEBUG nova.network.os_vif_util [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.851 243708 DEBUG nova.network.os_vif_util [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:da,bridge_name='br-int',has_traffic_filtering=True,id=ad72d283-b1a5-4889-9e04-0297897b4cad,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad72d283-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.852 243708 DEBUG nova.objects.instance [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5963e695-3cc7-4994-977e-b08fa7a682a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.862 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <uuid>5963e695-3cc7-4994-977e-b08fa7a682a1</uuid>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <name>instance-00000011</name>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBootPattern-server-2070020833</nova:name>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:17:53</nova:creationTime>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:user uuid="9b8c4a2342e4420d8140b403edbcba5a">tempest-TestVolumeBootPattern-236547311-project-member</nova:user>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:project uuid="27927978f9684df1a72cecb32505e93b">tempest-TestVolumeBootPattern-236547311</nova:project>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <nova:port uuid="ad72d283-b1a5-4889-9e04-0297897b4cad">
Dec 13 04:17:53 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <system>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <entry name="serial">5963e695-3cc7-4994-977e-b08fa7a682a1</entry>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <entry name="uuid">5963e695-3cc7-4994-977e-b08fa7a682a1</entry>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </system>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <os>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   </os>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <features>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   </features>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/5963e695-3cc7-4994-977e-b08fa7a682a1_disk.config">
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       </source>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-2537ca78-db6c-4c72-bb72-81e5382d8879">
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       </source>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:17:53 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <serial>2537ca78-db6c-4c72-bb72-81e5382d8879</serial>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:d3:ad:da"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <target dev="tapad72d283-b1"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/console.log" append="off"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <video>
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </video>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:17:53 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:17:53 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:17:53 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:17:53 compute-0 nova_compute[243704]: </domain>
Dec 13 04:17:53 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.862 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Preparing to wait for external event network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.862 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.862 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.863 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.863 243708 DEBUG nova.virt.libvirt.vif [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:17:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2070020833',display_name='tempest-TestVolumeBootPattern-server-2070020833',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2070020833',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-xj9k696q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:17:49Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=5963e695-3cc7-4994-977e-b08fa7a682a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.864 243708 DEBUG nova.network.os_vif_util [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.864 243708 DEBUG nova.network.os_vif_util [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:da,bridge_name='br-int',has_traffic_filtering=True,id=ad72d283-b1a5-4889-9e04-0297897b4cad,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad72d283-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.864 243708 DEBUG os_vif [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:da,bridge_name='br-int',has_traffic_filtering=True,id=ad72d283-b1a5-4889-9e04-0297897b4cad,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad72d283-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.865 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.865 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.866 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.869 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.869 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad72d283-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.869 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad72d283-b1, col_values=(('external_ids', {'iface-id': 'ad72d283-b1a5-4889-9e04-0297897b4cad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:ad:da', 'vm-uuid': '5963e695-3cc7-4994-977e-b08fa7a682a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.871 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:53 compute-0 NetworkManager[48899]: <info>  [1765599473.8720] manager: (tapad72d283-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.873 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.881 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.882 243708 INFO os_vif [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:ad:da,bridge_name='br-int',has_traffic_filtering=True,id=ad72d283-b1a5-4889-9e04-0297897b4cad,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad72d283-b1')
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.930 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.930 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.930 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No VIF found with MAC fa:16:3e:d3:ad:da, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.930 243708 INFO nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Using config drive
Dec 13 04:17:53 compute-0 nova_compute[243704]: 2025-12-13 04:17:53.946 243708 DEBUG nova.storage.rbd_utils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 5963e695-3cc7-4994-977e-b08fa7a682a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.245 243708 INFO nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Creating config drive at /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/disk.config
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.250 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8u_a14n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.378 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz8u_a14n" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.507 243708 DEBUG nova.storage.rbd_utils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image 5963e695-3cc7-4994-977e-b08fa7a682a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.510 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/disk.config 5963e695-3cc7-4994-977e-b08fa7a682a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:17:54 compute-0 ceph-mon[75071]: osdmap e304: 3 total, 3 up, 3 in
Dec 13 04:17:54 compute-0 ceph-mon[75071]: pgmap v1320: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 67 KiB/s rd, 28 KiB/s wr, 89 op/s
Dec 13 04:17:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4033780222' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.858 243708 DEBUG oslo_concurrency.processutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/disk.config 5963e695-3cc7-4994-977e-b08fa7a682a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.859 243708 INFO nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Deleting local config drive /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1/disk.config because it was imported into RBD.
Dec 13 04:17:54 compute-0 kernel: tapad72d283-b1: entered promiscuous mode
Dec 13 04:17:54 compute-0 NetworkManager[48899]: <info>  [1765599474.9216] manager: (tapad72d283-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Dec 13 04:17:54 compute-0 ovn_controller[145204]: 2025-12-13T04:17:54Z|00166|binding|INFO|Claiming lport ad72d283-b1a5-4889-9e04-0297897b4cad for this chassis.
Dec 13 04:17:54 compute-0 ovn_controller[145204]: 2025-12-13T04:17:54Z|00167|binding|INFO|ad72d283-b1a5-4889-9e04-0297897b4cad: Claiming fa:16:3e:d3:ad:da 10.100.0.10
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.923 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.930 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:ad:da 10.100.0.10'], port_security=['fa:16:3e:d3:ad:da 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5963e695-3cc7-4994-977e-b08fa7a682a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'adaa204c-5288-4148-9761-e3b0718cf559', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=ad72d283-b1a5-4889-9e04-0297897b4cad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.932 154842 INFO neutron.agent.ovn.metadata.agent [-] Port ad72d283-b1a5-4889-9e04-0297897b4cad in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 bound to our chassis
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.934 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:17:54 compute-0 ovn_controller[145204]: 2025-12-13T04:17:54Z|00168|binding|INFO|Setting lport ad72d283-b1a5-4889-9e04-0297897b4cad up in Southbound
Dec 13 04:17:54 compute-0 ovn_controller[145204]: 2025-12-13T04:17:54Z|00169|binding|INFO|Setting lport ad72d283-b1a5-4889-9e04-0297897b4cad ovn-installed in OVS
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.945 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:54 compute-0 nova_compute[243704]: 2025-12-13 04:17:54.947 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.950 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbf3aec-3b56-464e-9947-1fd0d612f9fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.951 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfc553cd2-51 in ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.953 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfc553cd2-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.953 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7d790c09-347c-4164-a7e3-e145367fd0ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.955 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e304d187-fe28-418f-bb07-b2ca509fca02]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:54 compute-0 systemd-machined[206767]: New machine qemu-17-instance-00000011.
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.969 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[7740018b-29e4-4d99-b8f4-3bfada45bed5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:54 compute-0 systemd[1]: Started Virtual Machine qemu-17-instance-00000011.
Dec 13 04:17:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:54.997 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e45c724f-3c78-4108-ae65-8d8008574c72]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:54 compute-0 systemd-udevd[265876]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:17:55 compute-0 NetworkManager[48899]: <info>  [1765599475.0143] device (tapad72d283-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:17:55 compute-0 NetworkManager[48899]: <info>  [1765599475.0154] device (tapad72d283-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.033 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c7dac58f-fe79-4cc8-81f2-310a4060aac8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 NetworkManager[48899]: <info>  [1765599475.0401] manager: (tapfc553cd2-50): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.039 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2b2bc55c-f691-4c47-ab52-729ebc1fd523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.068 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e89e9d-ce30-4f63-b66a-303f5656062b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.071 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[54fd067a-d43f-438b-9aa2-1e6a21204e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 NetworkManager[48899]: <info>  [1765599475.0954] device (tapfc553cd2-50): carrier: link connected
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.096 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[adf2d075-f334-4ef2-b8e7-b93b29fb6cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.111 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[733f7a56-ace7-4ed3-9515-85dee47993d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421377, 'reachable_time': 22066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 265906, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.128 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1891722b-7968-405f-a8ba-eae0cc02445c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe53:ae9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421377, 'tstamp': 421377}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 265907, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.145 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[cc08a08c-5c45-470e-a4ea-6bc21103c85a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421377, 'reachable_time': 22066, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 265908, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.177 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdaadfe-4ded-4c2a-9f46-ff73d55a8981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.234 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a212f2d3-c499-4478-abf9-d403bff75a5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.236 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.236 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.237 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.238 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:55 compute-0 NetworkManager[48899]: <info>  [1765599475.2396] manager: (tapfc553cd2-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Dec 13 04:17:55 compute-0 kernel: tapfc553cd2-50: entered promiscuous mode
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.242 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.243 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.244 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:55 compute-0 ovn_controller[145204]: 2025-12-13T04:17:55Z|00170|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.264 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.265 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.267 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a8ac82-6d57-4734-995b-9f2aa6a7fa21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.268 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.pid.haproxy
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:17:55 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:55.269 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'env', 'PROCESS_TAG=haproxy-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.486 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599475.4859724, 5963e695-3cc7-4994-977e-b08fa7a682a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.487 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] VM Started (Lifecycle Event)
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.503 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.507 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599475.493362, 5963e695-3cc7-4994-977e-b08fa7a682a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.507 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] VM Paused (Lifecycle Event)
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.516 243708 DEBUG nova.compute.manager [req-f66314ac-a80e-4661-acae-4310322a2344 req-e5c18e94-f2b2-4199-9819-8c39b8a68010 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.517 243708 DEBUG oslo_concurrency.lockutils [req-f66314ac-a80e-4661-acae-4310322a2344 req-e5c18e94-f2b2-4199-9819-8c39b8a68010 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.517 243708 DEBUG oslo_concurrency.lockutils [req-f66314ac-a80e-4661-acae-4310322a2344 req-e5c18e94-f2b2-4199-9819-8c39b8a68010 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.517 243708 DEBUG oslo_concurrency.lockutils [req-f66314ac-a80e-4661-acae-4310322a2344 req-e5c18e94-f2b2-4199-9819-8c39b8a68010 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.517 243708 DEBUG nova.compute.manager [req-f66314ac-a80e-4661-acae-4310322a2344 req-e5c18e94-f2b2-4199-9819-8c39b8a68010 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Processing event network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.518 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.521 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.524 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.526 243708 INFO nova.virt.libvirt.driver [-] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Instance spawned successfully.
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.527 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.529 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599475.5208795, 5963e695-3cc7-4994-977e-b08fa7a682a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.530 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] VM Resumed (Lifecycle Event)
Dec 13 04:17:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1321: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 KiB/s wr, 64 op/s
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.550 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.556 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.560 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.560 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.560 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.561 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.561 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.563 243708 DEBUG nova.virt.libvirt.driver [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:17:55 compute-0 nova_compute[243704]: 2025-12-13 04:17:55.593 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:17:55 compute-0 podman[265980]: 2025-12-13 04:17:55.61486962 +0000 UTC m=+0.025842751 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:17:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Dec 13 04:17:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Dec 13 04:17:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Dec 13 04:17:56 compute-0 nova_compute[243704]: 2025-12-13 04:17:56.131 243708 INFO nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Took 5.28 seconds to spawn the instance on the hypervisor.
Dec 13 04:17:56 compute-0 nova_compute[243704]: 2025-12-13 04:17:56.131 243708 DEBUG nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:56 compute-0 nova_compute[243704]: 2025-12-13 04:17:56.279 243708 INFO nova.compute.manager [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Took 7.97 seconds to build instance.
Dec 13 04:17:56 compute-0 nova_compute[243704]: 2025-12-13 04:17:56.366 243708 DEBUG oslo_concurrency.lockutils [None req-98a96835-0287-495f-a669-9e96b14fefc3 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:17:56 compute-0 podman[265980]: 2025-12-13 04:17:56.730173236 +0000 UTC m=+1.141146347 container create 1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:17:57 compute-0 ceph-mon[75071]: pgmap v1321: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 KiB/s wr, 64 op/s
Dec 13 04:17:57 compute-0 ceph-mon[75071]: osdmap e305: 3 total, 3 up, 3 in
Dec 13 04:17:57 compute-0 systemd[1]: Started libpod-conmon-1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6.scope.
Dec 13 04:17:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:17:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b17eccb33bcc7fe971c2cdc45e735b7a7c9727fc3cdfb4c2cf16585bc1f5ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:17:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1323: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 KiB/s wr, 64 op/s
Dec 13 04:17:57 compute-0 podman[265980]: 2025-12-13 04:17:57.56410358 +0000 UTC m=+1.975076771 container init 1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec 13 04:17:57 compute-0 podman[265980]: 2025-12-13 04:17:57.571981994 +0000 UTC m=+1.982955105 container start 1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:17:57 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[265995]: [NOTICE]   (265999) : New worker (266001) forked
Dec 13 04:17:57 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[265995]: [NOTICE]   (265999) : Loading success.
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.610 243708 DEBUG nova.compute.manager [req-1c838921-f40b-4899-afd4-08817529ed75 req-fb202e78-55fc-4213-ba53-9dff08e8c67c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.610 243708 DEBUG oslo_concurrency.lockutils [req-1c838921-f40b-4899-afd4-08817529ed75 req-fb202e78-55fc-4213-ba53-9dff08e8c67c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.610 243708 DEBUG oslo_concurrency.lockutils [req-1c838921-f40b-4899-afd4-08817529ed75 req-fb202e78-55fc-4213-ba53-9dff08e8c67c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.611 243708 DEBUG oslo_concurrency.lockutils [req-1c838921-f40b-4899-afd4-08817529ed75 req-fb202e78-55fc-4213-ba53-9dff08e8c67c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.611 243708 DEBUG nova.compute.manager [req-1c838921-f40b-4899-afd4-08817529ed75 req-fb202e78-55fc-4213-ba53-9dff08e8c67c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] No waiting events found dispatching network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.611 243708 WARNING nova.compute.manager [req-1c838921-f40b-4899-afd4-08817529ed75 req-fb202e78-55fc-4213-ba53-9dff08e8c67c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received unexpected event network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad for instance with vm_state active and task_state None.
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.641 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.746 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599462.7455952, 60ab3a2a-719a-47b1-b774-e518b4039ca5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.746 243708 INFO nova.compute.manager [-] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] VM Stopped (Lifecycle Event)
Dec 13 04:17:57 compute-0 nova_compute[243704]: 2025-12-13 04:17:57.764 243708 DEBUG nova.compute.manager [None req-765637dc-82d2-4fb1-8b94-810ec109c65f - - - - - -] [instance: 60ab3a2a-719a-47b1-b774-e518b4039ca5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:17:58 compute-0 ceph-mon[75071]: pgmap v1323: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 1.8 KiB/s wr, 64 op/s
Dec 13 04:17:58 compute-0 nova_compute[243704]: 2025-12-13 04:17:58.872 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:59.414 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:17:59 compute-0 nova_compute[243704]: 2025-12-13 04:17:59.414 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:17:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:17:59.417 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:17:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1324: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 23 KiB/s wr, 206 op/s
Dec 13 04:17:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Dec 13 04:17:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Dec 13 04:17:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Dec 13 04:18:00 compute-0 nova_compute[243704]: 2025-12-13 04:18:00.103 243708 DEBUG nova.compute.manager [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-changed-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:00 compute-0 nova_compute[243704]: 2025-12-13 04:18:00.103 243708 DEBUG nova.compute.manager [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Refreshing instance network info cache due to event network-changed-ad72d283-b1a5-4889-9e04-0297897b4cad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:18:00 compute-0 nova_compute[243704]: 2025-12-13 04:18:00.103 243708 DEBUG oslo_concurrency.lockutils [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:18:00 compute-0 nova_compute[243704]: 2025-12-13 04:18:00.104 243708 DEBUG oslo_concurrency.lockutils [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:18:00 compute-0 nova_compute[243704]: 2025-12-13 04:18:00.104 243708 DEBUG nova.network.neutron [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Refreshing network info cache for port ad72d283-b1a5-4889-9e04-0297897b4cad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:18:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Dec 13 04:18:00 compute-0 ceph-mon[75071]: pgmap v1324: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.2 MiB/s rd, 23 KiB/s wr, 206 op/s
Dec 13 04:18:00 compute-0 ceph-mon[75071]: osdmap e306: 3 total, 3 up, 3 in
Dec 13 04:18:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Dec 13 04:18:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Dec 13 04:18:00 compute-0 podman[266010]: 2025-12-13 04:18:00.977646318 +0000 UTC m=+0.127972003 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec 13 04:18:01 compute-0 nova_compute[243704]: 2025-12-13 04:18:01.356 243708 DEBUG nova.network.neutron [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updated VIF entry in instance network info cache for port ad72d283-b1a5-4889-9e04-0297897b4cad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:18:01 compute-0 nova_compute[243704]: 2025-12-13 04:18:01.357 243708 DEBUG nova.network.neutron [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updating instance_info_cache with network_info: [{"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:18:01 compute-0 nova_compute[243704]: 2025-12-13 04:18:01.380 243708 DEBUG oslo_concurrency.lockutils [req-cfce528e-4179-4f0e-b02a-2e8c2e9aabf3 req-050c8f0e-839d-4709-8075-91caed340cd7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:18:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:01.419 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 184 op/s
Dec 13 04:18:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e307 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Dec 13 04:18:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Dec 13 04:18:01 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Dec 13 04:18:01 compute-0 ceph-mon[75071]: osdmap e307: 3 total, 3 up, 3 in
Dec 13 04:18:01 compute-0 ceph-mon[75071]: osdmap e308: 3 total, 3 up, 3 in
Dec 13 04:18:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3677897679' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3677897679' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.268 243708 DEBUG oslo_concurrency.lockutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.269 243708 DEBUG oslo_concurrency.lockutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.283 243708 DEBUG nova.objects.instance [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'flavor' on Instance uuid a0bacfab-7abb-494c-b56e-2cc236181408 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.303 243708 INFO nova.virt.libvirt.driver [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Ignoring supplied device name: /dev/vdb
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.314 243708 DEBUG oslo_concurrency.lockutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.500 243708 DEBUG oslo_concurrency.lockutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.501 243708 DEBUG oslo_concurrency.lockutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.501 243708 INFO nova.compute.manager [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Attaching volume 9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4 to /dev/vdb
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.643 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.670 243708 DEBUG os_brick.utils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.671 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.686 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.686 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[6157c4da-8bd4-4671-9aed-3745e53b8686]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.688 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.698 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.699 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[47b83780-698a-48c5-8c2b-4c410bbd2e5d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.701 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.711 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.712 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[8b26dcf0-f45e-452f-869e-2edfebdf752c]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.713 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[33d5f83d-b3e5-4d96-b24b-291343c5b55f]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.714 243708 DEBUG oslo_concurrency.processutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.735 243708 DEBUG oslo_concurrency.processutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "nvme version" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.738 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.739 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.739 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.739 243708 DEBUG os_brick.utils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] <== get_connector_properties: return (69ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:18:02 compute-0 nova_compute[243704]: 2025-12-13 04:18:02.740 243708 DEBUG nova.virt.block_device [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updating existing volume attachment record: 1985f37b-f410-4390-9091-131b5e79cf0f _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:18:02 compute-0 ceph-mon[75071]: pgmap v1327: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 184 op/s
Dec 13 04:18:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3677897679' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3677897679' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1284225350' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1329: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 184 op/s
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.774 243708 DEBUG nova.objects.instance [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'flavor' on Instance uuid a0bacfab-7abb-494c-b56e-2cc236181408 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.799 243708 DEBUG nova.virt.libvirt.driver [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Attempting to attach volume 9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.802 243708 DEBUG nova.virt.libvirt.guest [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:18:03 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:18:03 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4">
Dec 13 04:18:03 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:18:03 compute-0 nova_compute[243704]:   </source>
Dec 13 04:18:03 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:18:03 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:18:03 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:18:03 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:18:03 compute-0 nova_compute[243704]:   <serial>9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4</serial>
Dec 13 04:18:03 compute-0 nova_compute[243704]: </disk>
Dec 13 04:18:03 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:18:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1284225350' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.875 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.936 243708 DEBUG nova.virt.libvirt.driver [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.937 243708 DEBUG nova.virt.libvirt.driver [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.937 243708 DEBUG nova.virt.libvirt.driver [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:18:03 compute-0 nova_compute[243704]: 2025-12-13 04:18:03.937 243708 DEBUG nova.virt.libvirt.driver [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] No VIF found with MAC fa:16:3e:00:dd:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:18:04 compute-0 nova_compute[243704]: 2025-12-13 04:18:04.395 243708 DEBUG oslo_concurrency.lockutils [None req-c3963f07-8fe8-4eb6-802c-f8e2300a8814 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.894s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:04 compute-0 ceph-mon[75071]: pgmap v1329: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 184 op/s
Dec 13 04:18:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 5.0 KiB/s wr, 86 op/s
Dec 13 04:18:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2978541388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2978541388' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Dec 13 04:18:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Dec 13 04:18:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Dec 13 04:18:07 compute-0 ceph-mon[75071]: pgmap v1330: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 5.0 KiB/s wr, 86 op/s
Dec 13 04:18:07 compute-0 ceph-mon[75071]: osdmap e309: 3 total, 3 up, 3 in
Dec 13 04:18:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1332: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 77 op/s
Dec 13 04:18:07 compute-0 nova_compute[243704]: 2025-12-13 04:18:07.645 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Dec 13 04:18:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Dec 13 04:18:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Dec 13 04:18:08 compute-0 ovn_controller[145204]: 2025-12-13T04:18:08Z|00034|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.10
Dec 13 04:18:08 compute-0 ovn_controller[145204]: 2025-12-13T04:18:08Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d3:ad:da 10.100.0.10
Dec 13 04:18:08 compute-0 ceph-mon[75071]: pgmap v1332: 305 pgs: 305 active+clean; 246 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 59 KiB/s rd, 4.5 KiB/s wr, 77 op/s
Dec 13 04:18:08 compute-0 ceph-mon[75071]: osdmap e310: 3 total, 3 up, 3 in
Dec 13 04:18:08 compute-0 nova_compute[243704]: 2025-12-13 04:18:08.878 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1334: 305 pgs: 305 active+clean; 246 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 879 KiB/s rd, 28 KiB/s wr, 158 op/s
Dec 13 04:18:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3547577733' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2849746877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Dec 13 04:18:10 compute-0 ceph-mon[75071]: pgmap v1334: 305 pgs: 305 active+clean; 246 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 879 KiB/s rd, 28 KiB/s wr, 158 op/s
Dec 13 04:18:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3547577733' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2849746877' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Dec 13 04:18:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Dec 13 04:18:10 compute-0 podman[266064]: 2025-12-13 04:18:10.929007575 +0000 UTC m=+0.072532983 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:18:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 246 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 31 KiB/s wr, 120 op/s
Dec 13 04:18:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Dec 13 04:18:11 compute-0 ceph-mon[75071]: osdmap e311: 3 total, 3 up, 3 in
Dec 13 04:18:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Dec 13 04:18:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Dec 13 04:18:12 compute-0 ovn_controller[145204]: 2025-12-13T04:18:12Z|00036|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.10
Dec 13 04:18:12 compute-0 ovn_controller[145204]: 2025-12-13T04:18:12Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:d3:ad:da 10.100.0.10
Dec 13 04:18:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:12 compute-0 nova_compute[243704]: 2025-12-13 04:18:12.648 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Dec 13 04:18:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Dec 13 04:18:12 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Dec 13 04:18:12 compute-0 ceph-mon[75071]: pgmap v1336: 305 pgs: 305 active+clean; 246 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 31 KiB/s wr, 120 op/s
Dec 13 04:18:12 compute-0 ceph-mon[75071]: osdmap e312: 3 total, 3 up, 3 in
Dec 13 04:18:13 compute-0 ovn_controller[145204]: 2025-12-13T04:18:13Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:ad:da 10.100.0.10
Dec 13 04:18:13 compute-0 ovn_controller[145204]: 2025-12-13T04:18:13Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:ad:da 10.100.0.10
Dec 13 04:18:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 246 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 33 KiB/s wr, 125 op/s
Dec 13 04:18:13 compute-0 ceph-mon[75071]: osdmap e313: 3 total, 3 up, 3 in
Dec 13 04:18:13 compute-0 nova_compute[243704]: 2025-12-13 04:18:13.880 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:13 compute-0 nova_compute[243704]: 2025-12-13 04:18:13.883 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2882730477' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2322649341' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e313 do_prune osdmap full prune enabled
Dec 13 04:18:14 compute-0 ceph-mon[75071]: pgmap v1339: 305 pgs: 305 active+clean; 246 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 33 KiB/s wr, 125 op/s
Dec 13 04:18:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2882730477' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2322649341' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e314 e314: 3 total, 3 up, 3 in
Dec 13 04:18:14 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e314: 3 total, 3 up, 3 in
Dec 13 04:18:14 compute-0 nova_compute[243704]: 2025-12-13 04:18:14.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:14 compute-0 nova_compute[243704]: 2025-12-13 04:18:14.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:18:14 compute-0 nova_compute[243704]: 2025-12-13 04:18:14.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:18:15 compute-0 nova_compute[243704]: 2025-12-13 04:18:15.334 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:18:15 compute-0 nova_compute[243704]: 2025-12-13 04:18:15.334 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:18:15 compute-0 nova_compute[243704]: 2025-12-13 04:18:15.335 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:18:15 compute-0 nova_compute[243704]: 2025-12-13 04:18:15.335 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid a0bacfab-7abb-494c-b56e-2cc236181408 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:18:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 46 KiB/s wr, 113 op/s
Dec 13 04:18:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e314 do_prune osdmap full prune enabled
Dec 13 04:18:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e315 e315: 3 total, 3 up, 3 in
Dec 13 04:18:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e315: 3 total, 3 up, 3 in
Dec 13 04:18:15 compute-0 ceph-mon[75071]: osdmap e314: 3 total, 3 up, 3 in
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.619 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updating instance_info_cache with network_info: [{"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.651 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-a0bacfab-7abb-494c-b56e-2cc236181408" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.652 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.654 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.681 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.681 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.682 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.682 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:18:16 compute-0 nova_compute[243704]: 2025-12-13 04:18:16.682 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e315 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e315 do_prune osdmap full prune enabled
Dec 13 04:18:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e316 e316: 3 total, 3 up, 3 in
Dec 13 04:18:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e316: 3 total, 3 up, 3 in
Dec 13 04:18:16 compute-0 ceph-mon[75071]: pgmap v1341: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 46 KiB/s wr, 113 op/s
Dec 13 04:18:16 compute-0 ceph-mon[75071]: osdmap e315: 3 total, 3 up, 3 in
Dec 13 04:18:16 compute-0 ceph-mon[75071]: osdmap e316: 3 total, 3 up, 3 in
Dec 13 04:18:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3590995692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.230 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.312 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.312 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.317 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.317 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.318 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:18:17 compute-0 podman[266106]: 2025-12-13 04:18:17.324768983 +0000 UTC m=+0.048317737 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.484 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.486 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4061MB free_disk=59.94243029411882GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.486 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.486 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 47 KiB/s wr, 114 op/s
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.562 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance a0bacfab-7abb-494c-b56e-2cc236181408 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.562 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 5963e695-3cc7-4994-977e-b08fa7a682a1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.562 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.562 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.615 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:17 compute-0 nova_compute[243704]: 2025-12-13 04:18:17.650 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e316 do_prune osdmap full prune enabled
Dec 13 04:18:17 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3590995692' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e317 e317: 3 total, 3 up, 3 in
Dec 13 04:18:17 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e317: 3 total, 3 up, 3 in
Dec 13 04:18:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3625813709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3625813709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641886665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.233 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.239 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.259 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.293 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.294 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.517 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.518 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.518 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.518 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e317 do_prune osdmap full prune enabled
Dec 13 04:18:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e318 e318: 3 total, 3 up, 3 in
Dec 13 04:18:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e318: 3 total, 3 up, 3 in
Dec 13 04:18:18 compute-0 ceph-mon[75071]: pgmap v1344: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 47 KiB/s wr, 114 op/s
Dec 13 04:18:18 compute-0 nova_compute[243704]: 2025-12-13 04:18:18.882 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:18 compute-0 ceph-mon[75071]: osdmap e317: 3 total, 3 up, 3 in
Dec 13 04:18:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3625813709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3625813709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/641886665' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 18 KiB/s wr, 193 op/s
Dec 13 04:18:19 compute-0 nova_compute[243704]: 2025-12-13 04:18:19.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:19 compute-0 ceph-mon[75071]: osdmap e318: 3 total, 3 up, 3 in
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.072 243708 DEBUG oslo_concurrency.lockutils [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.073 243708 DEBUG oslo_concurrency.lockutils [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.085 243708 INFO nova.compute.manager [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Detaching volume 9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.206 243708 INFO nova.virt.block_device [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Attempting to driver detach volume 9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4 from mountpoint /dev/vdb
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.216 243708 DEBUG nova.virt.libvirt.driver [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Attempting to detach device vdb from instance a0bacfab-7abb-494c-b56e-2cc236181408 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.217 243708 DEBUG nova.virt.libvirt.guest [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4">
Dec 13 04:18:20 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   </source>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <serial>9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4</serial>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]: </disk>
Dec 13 04:18:20 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.224 243708 INFO nova.virt.libvirt.driver [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully detached device vdb from instance a0bacfab-7abb-494c-b56e-2cc236181408 from the persistent domain config.
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.225 243708 DEBUG nova.virt.libvirt.driver [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance a0bacfab-7abb-494c-b56e-2cc236181408 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.225 243708 DEBUG nova.virt.libvirt.guest [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4">
Dec 13 04:18:20 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   </source>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <serial>9e06fad2-e81f-4385-8fe7-4b76e4f5e7f4</serial>
Dec 13 04:18:20 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:18:20 compute-0 nova_compute[243704]: </disk>
Dec 13 04:18:20 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.346 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599500.3459914, a0bacfab-7abb-494c-b56e-2cc236181408 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.348 243708 DEBUG nova.virt.libvirt.driver [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance a0bacfab-7abb-494c-b56e-2cc236181408 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.351 243708 INFO nova.virt.libvirt.driver [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully detached device vdb from instance a0bacfab-7abb-494c-b56e-2cc236181408 from the live domain config.
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.532 243708 DEBUG nova.objects.instance [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'flavor' on Instance uuid a0bacfab-7abb-494c-b56e-2cc236181408 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.564 243708 DEBUG oslo_concurrency.lockutils [None req-a6d06e59-7783-49eb-95ce-b3a0ea1b9d31 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.491s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:20 compute-0 nova_compute[243704]: 2025-12-13 04:18:20.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:18:20 compute-0 ceph-mon[75071]: pgmap v1347: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 143 KiB/s rd, 18 KiB/s wr, 193 op/s
Dec 13 04:18:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4167330453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.408 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.409 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.409 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.409 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.410 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.411 243708 INFO nova.compute.manager [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Terminating instance
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.412 243708 DEBUG nova.compute.manager [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:18:21 compute-0 kernel: tape455bcda-3f (unregistering): left promiscuous mode
Dec 13 04:18:21 compute-0 NetworkManager[48899]: <info>  [1765599501.4566] device (tape455bcda-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:18:21 compute-0 ovn_controller[145204]: 2025-12-13T04:18:21Z|00171|binding|INFO|Releasing lport e455bcda-3fde-4820-991d-0f44c010bb03 from this chassis (sb_readonly=0)
Dec 13 04:18:21 compute-0 ovn_controller[145204]: 2025-12-13T04:18:21Z|00172|binding|INFO|Setting lport e455bcda-3fde-4820-991d-0f44c010bb03 down in Southbound
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.470 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 ovn_controller[145204]: 2025-12-13T04:18:21Z|00173|binding|INFO|Removing iface tape455bcda-3f ovn-installed in OVS
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.477 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:00:dd:ba 10.100.0.14'], port_security=['fa:16:3e:00:dd:ba 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a0bacfab-7abb-494c-b56e-2cc236181408', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f5e5c975dd8b4a088c217b330c95ba7b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '703be203-b6f5-4566-a488-3bb21d810094', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f8076cdc-415f-401f-a0fe-b3be303ae9cf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=e455bcda-3fde-4820-991d-0f44c010bb03) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.478 154842 INFO neutron.agent.ovn.metadata.agent [-] Port e455bcda-3fde-4820-991d-0f44c010bb03 in datapath bfdc82ee-37dc-4f9b-b711-c6c9f87b443a unbound from our chassis
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.480 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.481 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[cc65ff07-e0f1-4692-b4cf-da177504a8a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.481 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a namespace which is not needed anymore
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.493 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Deactivated successfully.
Dec 13 04:18:21 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d00000010.scope: Consumed 15.045s CPU time.
Dec 13 04:18:21 compute-0 systemd-machined[206767]: Machine qemu-16-instance-00000010 terminated.
Dec 13 04:18:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 12 KiB/s wr, 135 op/s
Dec 13 04:18:21 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [NOTICE]   (264921) : haproxy version is 2.8.14-c23fe91
Dec 13 04:18:21 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [NOTICE]   (264921) : path to executable is /usr/sbin/haproxy
Dec 13 04:18:21 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [WARNING]  (264921) : Exiting Master process...
Dec 13 04:18:21 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [WARNING]  (264921) : Exiting Master process...
Dec 13 04:18:21 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [ALERT]    (264921) : Current worker (264923) exited with code 143 (Terminated)
Dec 13 04:18:21 compute-0 neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a[264917]: [WARNING]  (264921) : All workers exited. Exiting... (0)
Dec 13 04:18:21 compute-0 systemd[1]: libpod-479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16.scope: Deactivated successfully.
Dec 13 04:18:21 compute-0 podman[266173]: 2025-12-13 04:18:21.606326748 +0000 UTC m=+0.041105913 container died 479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16-userdata-shm.mount: Deactivated successfully.
Dec 13 04:18:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd0bdf963fe28dff2050ce0eff2be92f2a19d9a21dd03fc629f932c3f988d3c9-merged.mount: Deactivated successfully.
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.635 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.639 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.647 243708 INFO nova.virt.libvirt.driver [-] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Instance destroyed successfully.
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.647 243708 DEBUG nova.objects.instance [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lazy-loading 'resources' on Instance uuid a0bacfab-7abb-494c-b56e-2cc236181408 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:18:21 compute-0 podman[266173]: 2025-12-13 04:18:21.652331952 +0000 UTC m=+0.087111117 container cleanup 479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:18:21 compute-0 systemd[1]: libpod-conmon-479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16.scope: Deactivated successfully.
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.668 243708 DEBUG nova.virt.libvirt.vif [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:17:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesBackupsTest-instance-1933348177',display_name='tempest-VolumesBackupsTest-instance-1933348177',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumesbackupstest-instance-1933348177',id=16,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGu7BT+OlkAXS/5E/R+6mxin4StaJf2AYSa4spXG7EaUPd9zPGdGfIZa8P9sKks4ofV1Bj7ayP2qcemd21rm9iUb4Gw5NQPAIiD+VTs+KWu3lqFLlObvGeCTydEwHUAP1g==',key_name='tempest-keypair-344651295',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:17:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f5e5c975dd8b4a088c217b330c95ba7b',ramdisk_id='',reservation_id='r-di5t03jv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-951676606',owner_user_name='tempest-VolumesBackupsTest-951676606-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:17:27Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='11e9a1a42b4b4d679693155d71445247',uuid=a0bacfab-7abb-494c-b56e-2cc236181408,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.669 243708 DEBUG nova.network.os_vif_util [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converting VIF {"id": "e455bcda-3fde-4820-991d-0f44c010bb03", "address": "fa:16:3e:00:dd:ba", "network": {"id": "bfdc82ee-37dc-4f9b-b711-c6c9f87b443a", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1774274488-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f5e5c975dd8b4a088c217b330c95ba7b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape455bcda-3f", "ovs_interfaceid": "e455bcda-3fde-4820-991d-0f44c010bb03", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.669 243708 DEBUG nova.network.os_vif_util [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:00:dd:ba,bridge_name='br-int',has_traffic_filtering=True,id=e455bcda-3fde-4820-991d-0f44c010bb03,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape455bcda-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.670 243708 DEBUG os_vif [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:dd:ba,bridge_name='br-int',has_traffic_filtering=True,id=e455bcda-3fde-4820-991d-0f44c010bb03,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape455bcda-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.672 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.672 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape455bcda-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.674 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.676 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.678 243708 INFO os_vif [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:00:dd:ba,bridge_name='br-int',has_traffic_filtering=True,id=e455bcda-3fde-4820-991d-0f44c010bb03,network=Network(bfdc82ee-37dc-4f9b-b711-c6c9f87b443a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape455bcda-3f')
Dec 13 04:18:21 compute-0 podman[266210]: 2025-12-13 04:18:21.718240585 +0000 UTC m=+0.039857390 container remove 479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec 13 04:18:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e318 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.723 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3d106bdb-04c5-4a0e-aa32-ceeaeb6e511c]: (4, ('Sat Dec 13 04:18:21 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a (479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16)\n479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16\nSat Dec 13 04:18:21 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a (479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16)\n479c7923914e5222e4f4c2f6afb95d8859080947bf82f699ba6505cda0451e16\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e318 do_prune osdmap full prune enabled
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.725 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2adebd2f-18cf-45ff-bb4e-c0bfd8c89275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.726 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbfdc82ee-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e319 e319: 3 total, 3 up, 3 in
Dec 13 04:18:21 compute-0 kernel: tapbfdc82ee-30: left promiscuous mode
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.732 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.733 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[201bbf2a-5285-4c2e-b81e-b6fc87229482]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e319: 3 total, 3 up, 3 in
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.748 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.753 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[186c7b8a-35ed-46fc-ad96-9f11fbbb7b0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.754 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[af265a8d-7b3f-4f1d-9ceb-ac969823437e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.769 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[40e35c44-8474-4c0b-a9ef-c4991f6de555]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418561, 'reachable_time': 37607, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266243, 'error': None, 'target': 'ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 systemd[1]: run-netns-ovnmeta\x2dbfdc82ee\x2d37dc\x2d4f9b\x2db711\x2dc6c9f87b443a.mount: Deactivated successfully.
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.774 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bfdc82ee-37dc-4f9b-b711-c6c9f87b443a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:18:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:21.774 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[a070c3c3-767c-4947-a11d-466cfb9b59cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4167330453' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:21 compute-0 ceph-mon[75071]: osdmap e319: 3 total, 3 up, 3 in
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.927 243708 INFO nova.virt.libvirt.driver [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Deleting instance files /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408_del
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.928 243708 INFO nova.virt.libvirt.driver [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Deletion of /var/lib/nova/instances/a0bacfab-7abb-494c-b56e-2cc236181408_del complete
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.977 243708 INFO nova.compute.manager [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Took 0.56 seconds to destroy the instance on the hypervisor.
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.980 243708 DEBUG oslo.service.loopingcall [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.980 243708 DEBUG nova.compute.manager [-] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:18:21 compute-0 nova_compute[243704]: 2025-12-13 04:18:21.980 243708 DEBUG nova.network.neutron [-] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:18:22 compute-0 nova_compute[243704]: 2025-12-13 04:18:22.652 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e319 do_prune osdmap full prune enabled
Dec 13 04:18:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e320 e320: 3 total, 3 up, 3 in
Dec 13 04:18:22 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e320: 3 total, 3 up, 3 in
Dec 13 04:18:22 compute-0 ceph-mon[75071]: pgmap v1348: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 12 KiB/s wr, 135 op/s
Dec 13 04:18:22 compute-0 ceph-mon[75071]: osdmap e320: 3 total, 3 up, 3 in
Dec 13 04:18:23 compute-0 nova_compute[243704]: 2025-12-13 04:18:23.132 243708 DEBUG nova.compute.manager [req-a0dd99f4-e21d-4cbe-900f-8d788062c74b req-8e1a0923-109b-4eae-ad67-ff11a5b08a9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-vif-unplugged-e455bcda-3fde-4820-991d-0f44c010bb03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:23 compute-0 nova_compute[243704]: 2025-12-13 04:18:23.132 243708 DEBUG oslo_concurrency.lockutils [req-a0dd99f4-e21d-4cbe-900f-8d788062c74b req-8e1a0923-109b-4eae-ad67-ff11a5b08a9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:23 compute-0 nova_compute[243704]: 2025-12-13 04:18:23.132 243708 DEBUG oslo_concurrency.lockutils [req-a0dd99f4-e21d-4cbe-900f-8d788062c74b req-8e1a0923-109b-4eae-ad67-ff11a5b08a9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:23 compute-0 nova_compute[243704]: 2025-12-13 04:18:23.132 243708 DEBUG oslo_concurrency.lockutils [req-a0dd99f4-e21d-4cbe-900f-8d788062c74b req-8e1a0923-109b-4eae-ad67-ff11a5b08a9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:23 compute-0 nova_compute[243704]: 2025-12-13 04:18:23.132 243708 DEBUG nova.compute.manager [req-a0dd99f4-e21d-4cbe-900f-8d788062c74b req-8e1a0923-109b-4eae-ad67-ff11a5b08a9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] No waiting events found dispatching network-vif-unplugged-e455bcda-3fde-4820-991d-0f44c010bb03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:18:23 compute-0 nova_compute[243704]: 2025-12-13 04:18:23.133 243708 DEBUG nova.compute.manager [req-a0dd99f4-e21d-4cbe-900f-8d788062c74b req-8e1a0923-109b-4eae-ad67-ff11a5b08a9c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-vif-unplugged-e455bcda-3fde-4820-991d-0f44c010bb03 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:18:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 12 KiB/s wr, 136 op/s
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.025 243708 DEBUG nova.network.neutron [-] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.038 243708 INFO nova.compute.manager [-] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Took 2.06 seconds to deallocate network for instance.
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.084 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.085 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.147 243708 DEBUG oslo_concurrency.processutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4189281042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.712 243708 DEBUG oslo_concurrency.processutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.719 243708 DEBUG nova.compute.provider_tree [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.741 243708 DEBUG nova.scheduler.client.report [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.764 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.788 243708 INFO nova.scheduler.client.report [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Deleted allocations for instance a0bacfab-7abb-494c-b56e-2cc236181408
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.846 243708 DEBUG oslo_concurrency.lockutils [None req-96ea5357-116c-4b54-998a-06b0a4ee1489 11e9a1a42b4b4d679693155d71445247 f5e5c975dd8b4a088c217b330c95ba7b - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.437s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:24 compute-0 nova_compute[243704]: 2025-12-13 04:18:24.871 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:18:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e320 do_prune osdmap full prune enabled
Dec 13 04:18:24 compute-0 ceph-mon[75071]: pgmap v1351: 305 pgs: 305 active+clean; 248 MiB data, 422 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 12 KiB/s wr, 136 op/s
Dec 13 04:18:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4189281042' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e321 e321: 3 total, 3 up, 3 in
Dec 13 04:18:24 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e321: 3 total, 3 up, 3 in
Dec 13 04:18:25 compute-0 nova_compute[243704]: 2025-12-13 04:18:25.191 243708 DEBUG nova.compute.manager [req-3663c5bc-b051-479c-bb25-2345da434cb3 req-8ee15f54-56c6-40dc-a448-de956124f5f2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:25 compute-0 nova_compute[243704]: 2025-12-13 04:18:25.191 243708 DEBUG oslo_concurrency.lockutils [req-3663c5bc-b051-479c-bb25-2345da434cb3 req-8ee15f54-56c6-40dc-a448-de956124f5f2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:25 compute-0 nova_compute[243704]: 2025-12-13 04:18:25.191 243708 DEBUG oslo_concurrency.lockutils [req-3663c5bc-b051-479c-bb25-2345da434cb3 req-8ee15f54-56c6-40dc-a448-de956124f5f2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:25 compute-0 nova_compute[243704]: 2025-12-13 04:18:25.192 243708 DEBUG oslo_concurrency.lockutils [req-3663c5bc-b051-479c-bb25-2345da434cb3 req-8ee15f54-56c6-40dc-a448-de956124f5f2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a0bacfab-7abb-494c-b56e-2cc236181408-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:25 compute-0 nova_compute[243704]: 2025-12-13 04:18:25.192 243708 DEBUG nova.compute.manager [req-3663c5bc-b051-479c-bb25-2345da434cb3 req-8ee15f54-56c6-40dc-a448-de956124f5f2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] No waiting events found dispatching network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:18:25 compute-0 nova_compute[243704]: 2025-12-13 04:18:25.192 243708 WARNING nova.compute.manager [req-3663c5bc-b051-479c-bb25-2345da434cb3 req-8ee15f54-56c6-40dc-a448-de956124f5f2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received unexpected event network-vif-plugged-e455bcda-3fde-4820-991d-0f44c010bb03 for instance with vm_state deleted and task_state None.
Dec 13 04:18:25 compute-0 nova_compute[243704]: 2025-12-13 04:18:25.192 243708 DEBUG nova.compute.manager [req-3663c5bc-b051-479c-bb25-2345da434cb3 req-8ee15f54-56c6-40dc-a448-de956124f5f2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Received event network-vif-deleted-e455bcda-3fde-4820-991d-0f44c010bb03 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1220570674' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1220570674' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.2 KiB/s wr, 121 op/s
Dec 13 04:18:25 compute-0 ceph-mon[75071]: osdmap e321: 3 total, 3 up, 3 in
Dec 13 04:18:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1220570674' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1220570674' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:26 compute-0 nova_compute[243704]: 2025-12-13 04:18:26.676 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e321 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e321 do_prune osdmap full prune enabled
Dec 13 04:18:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e322 e322: 3 total, 3 up, 3 in
Dec 13 04:18:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e322: 3 total, 3 up, 3 in
Dec 13 04:18:26 compute-0 ceph-mon[75071]: pgmap v1353: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 6.2 KiB/s wr, 121 op/s
Dec 13 04:18:26 compute-0 ceph-mon[75071]: osdmap e322: 3 total, 3 up, 3 in
Dec 13 04:18:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 6.4 KiB/s wr, 125 op/s
Dec 13 04:18:27 compute-0 nova_compute[243704]: 2025-12-13 04:18:27.654 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e322 do_prune osdmap full prune enabled
Dec 13 04:18:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e323 e323: 3 total, 3 up, 3 in
Dec 13 04:18:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e323: 3 total, 3 up, 3 in
Dec 13 04:18:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e323 do_prune osdmap full prune enabled
Dec 13 04:18:28 compute-0 ceph-mon[75071]: pgmap v1355: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 88 KiB/s rd, 6.4 KiB/s wr, 125 op/s
Dec 13 04:18:28 compute-0 ceph-mon[75071]: osdmap e323: 3 total, 3 up, 3 in
Dec 13 04:18:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e324 e324: 3 total, 3 up, 3 in
Dec 13 04:18:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e324: 3 total, 3 up, 3 in
Dec 13 04:18:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1358: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 41 KiB/s wr, 179 op/s
Dec 13 04:18:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e324 do_prune osdmap full prune enabled
Dec 13 04:18:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e325 e325: 3 total, 3 up, 3 in
Dec 13 04:18:29 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e325: 3 total, 3 up, 3 in
Dec 13 04:18:29 compute-0 ceph-mon[75071]: osdmap e324: 3 total, 3 up, 3 in
Dec 13 04:18:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/535440710' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/535440710' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:30 compute-0 ceph-mon[75071]: pgmap v1358: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 131 KiB/s rd, 41 KiB/s wr, 179 op/s
Dec 13 04:18:30 compute-0 ceph-mon[75071]: osdmap e325: 3 total, 3 up, 3 in
Dec 13 04:18:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/535440710' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/535440710' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 39 KiB/s wr, 172 op/s
Dec 13 04:18:31 compute-0 nova_compute[243704]: 2025-12-13 04:18:31.679 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e325 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e325 do_prune osdmap full prune enabled
Dec 13 04:18:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e326 e326: 3 total, 3 up, 3 in
Dec 13 04:18:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e326: 3 total, 3 up, 3 in
Dec 13 04:18:31 compute-0 podman[266267]: 2025-12-13 04:18:31.952903074 +0000 UTC m=+0.096710977 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:18:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3191382782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3191382782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:32 compute-0 nova_compute[243704]: 2025-12-13 04:18:32.657 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:32 compute-0 ceph-mon[75071]: pgmap v1360: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 126 KiB/s rd, 39 KiB/s wr, 172 op/s
Dec 13 04:18:32 compute-0 ceph-mon[75071]: osdmap e326: 3 total, 3 up, 3 in
Dec 13 04:18:32 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3191382782' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:32 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3191382782' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e326 do_prune osdmap full prune enabled
Dec 13 04:18:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e327 e327: 3 total, 3 up, 3 in
Dec 13 04:18:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e327: 3 total, 3 up, 3 in
Dec 13 04:18:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1363: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 18 KiB/s wr, 81 op/s
Dec 13 04:18:33 compute-0 ovn_controller[145204]: 2025-12-13T04:18:33Z|00174|binding|INFO|Releasing lport 89e12177-98ba-49d1-8f15-68c87072167e from this chassis (sb_readonly=0)
Dec 13 04:18:33 compute-0 nova_compute[243704]: 2025-12-13 04:18:33.922 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:33 compute-0 ceph-mon[75071]: osdmap e327: 3 total, 3 up, 3 in
Dec 13 04:18:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:35.093 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:35.093 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:35.094 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:35 compute-0 ceph-mon[75071]: pgmap v1363: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 18 KiB/s wr, 81 op/s
Dec 13 04:18:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1364: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 4.0 KiB/s wr, 124 op/s
Dec 13 04:18:35 compute-0 sshd-session[266294]: Received disconnect from 193.46.255.33 port 34958:11:  [preauth]
Dec 13 04:18:35 compute-0 sshd-session[266294]: Disconnected from authenticating user root 193.46.255.33 port 34958 [preauth]
Dec 13 04:18:36 compute-0 ceph-mon[75071]: pgmap v1364: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 4.0 KiB/s wr, 124 op/s
Dec 13 04:18:36 compute-0 nova_compute[243704]: 2025-12-13 04:18:36.645 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599501.6440518, a0bacfab-7abb-494c-b56e-2cc236181408 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:18:36 compute-0 nova_compute[243704]: 2025-12-13 04:18:36.646 243708 INFO nova.compute.manager [-] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] VM Stopped (Lifecycle Event)
Dec 13 04:18:36 compute-0 nova_compute[243704]: 2025-12-13 04:18:36.672 243708 DEBUG nova.compute.manager [None req-ec859c07-dab5-46a7-ba9f-2ea76bb025c7 - - - - - -] [instance: a0bacfab-7abb-494c-b56e-2cc236181408] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:18:36 compute-0 nova_compute[243704]: 2025-12-13 04:18:36.681 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e327 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e327 do_prune osdmap full prune enabled
Dec 13 04:18:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e328 e328: 3 total, 3 up, 3 in
Dec 13 04:18:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e328: 3 total, 3 up, 3 in
Dec 13 04:18:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1366: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 4.0 KiB/s wr, 124 op/s
Dec 13 04:18:37 compute-0 nova_compute[243704]: 2025-12-13 04:18:37.660 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:37 compute-0 ceph-mon[75071]: osdmap e328: 3 total, 3 up, 3 in
Dec 13 04:18:38 compute-0 ceph-mon[75071]: pgmap v1366: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 506 KiB/s rd, 4.0 KiB/s wr, 124 op/s
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.085 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.085 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.101 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.171 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.172 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.179 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.179 243708 INFO nova.compute.claims [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.320 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1367: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 404 KiB/s rd, 7.3 KiB/s wr, 117 op/s
Dec 13 04:18:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:18:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1400671279' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.904 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.911 243708 DEBUG nova.compute.provider_tree [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.925 243708 DEBUG nova.scheduler.client.report [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:18:39 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1400671279' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.945 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.773s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.946 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.993 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:18:39 compute-0 nova_compute[243704]: 2025-12-13 04:18:39.994 243708 DEBUG nova.network.neutron [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.025 243708 INFO nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.051 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.112 243708 INFO nova.virt.block_device [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Booting with volume 7d331c3e-8259-4a03-a8d4-36a08f54e707 at /dev/vda
Dec 13 04:18:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/628736765' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.345 243708 DEBUG os_brick.utils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.346 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.358 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.358 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[66eefb4c-884e-43a4-b0eb-131b2b103e67]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.360 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.370 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.371 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[79d1fa0b-9662-40b6-bc41-31812eeec387]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.372 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.383 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.384 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[570e1278-44c5-4d48-88d0-9efd4c13c8d4]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.385 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[a4ce41da-cc1e-4dbe-9318-cf60216b8950]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.386 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.409 243708 DEBUG nova.policy [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '9b8c4a2342e4420d8140b403edbcba5a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '27927978f9684df1a72cecb32505e93b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.416 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "nvme version" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.420 243708 DEBUG os_brick.initiator.connectors.lightos [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.420 243708 DEBUG os_brick.initiator.connectors.lightos [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.421 243708 DEBUG os_brick.initiator.connectors.lightos [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.422 243708 DEBUG os_brick.utils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:18:40 compute-0 nova_compute[243704]: 2025-12-13 04:18:40.422 243708 DEBUG nova.virt.block_device [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updating existing volume attachment record: 0335d22c-b78b-4688-9ecc-f9f5b2822f00 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:18:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:18:40
Dec 13 04:18:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:18:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:18:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'vms', 'default.rgw.meta', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups']
Dec 13 04:18:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:18:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e328 do_prune osdmap full prune enabled
Dec 13 04:18:40 compute-0 ceph-mon[75071]: pgmap v1367: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 404 KiB/s rd, 7.3 KiB/s wr, 117 op/s
Dec 13 04:18:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/628736765' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e329 e329: 3 total, 3 up, 3 in
Dec 13 04:18:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e329: 3 total, 3 up, 3 in
Dec 13 04:18:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3007803889' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1369: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 7.1 KiB/s wr, 115 op/s
Dec 13 04:18:41 compute-0 nova_compute[243704]: 2025-12-13 04:18:41.683 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e329 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:41 compute-0 podman[266325]: 2025-12-13 04:18:41.907463067 +0000 UTC m=+0.053690794 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:18:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e329 do_prune osdmap full prune enabled
Dec 13 04:18:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e330 e330: 3 total, 3 up, 3 in
Dec 13 04:18:41 compute-0 ceph-mon[75071]: osdmap e329: 3 total, 3 up, 3 in
Dec 13 04:18:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3007803889' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e330: 3 total, 3 up, 3 in
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.080 243708 DEBUG nova.network.neutron [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Successfully created port: d83adae9-5340-4b94-ba3b-9b4adc9ac632 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.093 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.095 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.095 243708 INFO nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Creating image(s)
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.095 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.096 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Ensure instance console log exists: /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.096 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.096 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.096 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.662 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3760498057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:18:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.902 243708 DEBUG nova.network.neutron [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Successfully updated port: d83adae9-5340-4b94-ba3b-9b4adc9ac632 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.928 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.928 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquired lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:18:42 compute-0 nova_compute[243704]: 2025-12-13 04:18:42.929 243708 DEBUG nova.network.neutron [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:18:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e330 do_prune osdmap full prune enabled
Dec 13 04:18:42 compute-0 ceph-mon[75071]: pgmap v1369: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 395 KiB/s rd, 7.1 KiB/s wr, 115 op/s
Dec 13 04:18:42 compute-0 ceph-mon[75071]: osdmap e330: 3 total, 3 up, 3 in
Dec 13 04:18:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3760498057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e331 e331: 3 total, 3 up, 3 in
Dec 13 04:18:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e331: 3 total, 3 up, 3 in
Dec 13 04:18:43 compute-0 nova_compute[243704]: 2025-12-13 04:18:43.032 243708 DEBUG nova.compute.manager [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-changed-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:43 compute-0 nova_compute[243704]: 2025-12-13 04:18:43.033 243708 DEBUG nova.compute.manager [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Refreshing instance network info cache due to event network-changed-d83adae9-5340-4b94-ba3b-9b4adc9ac632. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:18:43 compute-0 nova_compute[243704]: 2025-12-13 04:18:43.033 243708 DEBUG oslo_concurrency.lockutils [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:18:43 compute-0 nova_compute[243704]: 2025-12-13 04:18:43.103 243708 DEBUG nova.network.neutron [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:18:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1372: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Dec 13 04:18:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e331 do_prune osdmap full prune enabled
Dec 13 04:18:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e332 e332: 3 total, 3 up, 3 in
Dec 13 04:18:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e332: 3 total, 3 up, 3 in
Dec 13 04:18:44 compute-0 ceph-mon[75071]: osdmap e331: 3 total, 3 up, 3 in
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.573 243708 DEBUG nova.network.neutron [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updating instance_info_cache with network_info: [{"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.591 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Releasing lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.592 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Instance network_info: |[{"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.592 243708 DEBUG oslo_concurrency.lockutils [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.592 243708 DEBUG nova.network.neutron [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Refreshing network info cache for port d83adae9-5340-4b94-ba3b-9b4adc9ac632 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.599 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Start _get_guest_xml network_info=[{"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7d331c3e-8259-4a03-a8d4-36a08f54e707', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7d331c3e-8259-4a03-a8d4-36a08f54e707', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'afb36bc5-8bfe-44dc-8be5-f7a657debc98', 'attached_at': '', 'detached_at': '', 'volume_id': '7d331c3e-8259-4a03-a8d4-36a08f54e707', 'serial': '7d331c3e-8259-4a03-a8d4-36a08f54e707'}, 'disk_bus': 'virtio', 'attachment_id': '0335d22c-b78b-4688-9ecc-f9f5b2822f00', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.606 243708 WARNING nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.614 243708 DEBUG nova.virt.libvirt.host [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.614 243708 DEBUG nova.virt.libvirt.host [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.619 243708 DEBUG nova.virt.libvirt.host [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.619 243708 DEBUG nova.virt.libvirt.host [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.620 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.620 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.620 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.620 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.621 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.621 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.621 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.621 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.621 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.622 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.622 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.622 243708 DEBUG nova.virt.hardware [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.641 243708 DEBUG nova.storage.rbd_utils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image afb36bc5-8bfe-44dc-8be5-f7a657debc98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:18:44 compute-0 nova_compute[243704]: 2025-12-13 04:18:44.645 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:18:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743147439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:45 compute-0 ceph-mon[75071]: pgmap v1372: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.5 KiB/s wr, 28 op/s
Dec 13 04:18:45 compute-0 ceph-mon[75071]: osdmap e332: 3 total, 3 up, 3 in
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.274 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.309 243708 DEBUG nova.virt.libvirt.vif [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:18:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1364564926',display_name='tempest-TestVolumeBootPattern-server-1364564926',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1364564926',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-2k8ugr6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:18:40Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=afb36bc5-8bfe-44dc-8be5-f7a657debc98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.310 243708 DEBUG nova.network.os_vif_util [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.311 243708 DEBUG nova.network.os_vif_util [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:01:12,bridge_name='br-int',has_traffic_filtering=True,id=d83adae9-5340-4b94-ba3b-9b4adc9ac632,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83adae9-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.314 243708 DEBUG nova.objects.instance [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'pci_devices' on Instance uuid afb36bc5-8bfe-44dc-8be5-f7a657debc98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.329 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <uuid>afb36bc5-8bfe-44dc-8be5-f7a657debc98</uuid>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <name>instance-00000012</name>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <nova:name>tempest-TestVolumeBootPattern-server-1364564926</nova:name>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:18:44</nova:creationTime>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:user uuid="9b8c4a2342e4420d8140b403edbcba5a">tempest-TestVolumeBootPattern-236547311-project-member</nova:user>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:project uuid="27927978f9684df1a72cecb32505e93b">tempest-TestVolumeBootPattern-236547311</nova:project>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <nova:port uuid="d83adae9-5340-4b94-ba3b-9b4adc9ac632">
Dec 13 04:18:45 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <system>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <entry name="serial">afb36bc5-8bfe-44dc-8be5-f7a657debc98</entry>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <entry name="uuid">afb36bc5-8bfe-44dc-8be5-f7a657debc98</entry>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </system>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <os>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   </os>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <features>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   </features>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/afb36bc5-8bfe-44dc-8be5-f7a657debc98_disk.config">
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       </source>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-7d331c3e-8259-4a03-a8d4-36a08f54e707">
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       </source>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:18:45 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <serial>7d331c3e-8259-4a03-a8d4-36a08f54e707</serial>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:9d:01:12"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <target dev="tapd83adae9-53"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/console.log" append="off"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <video>
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </video>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:18:45 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:18:45 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:18:45 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:18:45 compute-0 nova_compute[243704]: </domain>
Dec 13 04:18:45 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.331 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Preparing to wait for external event network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.331 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.332 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.332 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.334 243708 DEBUG nova.virt.libvirt.vif [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:18:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1364564926',display_name='tempest-TestVolumeBootPattern-server-1364564926',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1364564926',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-2k8ugr6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:18:40Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=afb36bc5-8bfe-44dc-8be5-f7a657debc98,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.334 243708 DEBUG nova.network.os_vif_util [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.336 243708 DEBUG nova.network.os_vif_util [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9d:01:12,bridge_name='br-int',has_traffic_filtering=True,id=d83adae9-5340-4b94-ba3b-9b4adc9ac632,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83adae9-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.336 243708 DEBUG os_vif [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:01:12,bridge_name='br-int',has_traffic_filtering=True,id=d83adae9-5340-4b94-ba3b-9b4adc9ac632,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83adae9-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.337 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.338 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.339 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.345 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.345 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd83adae9-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.346 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd83adae9-53, col_values=(('external_ids', {'iface-id': 'd83adae9-5340-4b94-ba3b-9b4adc9ac632', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9d:01:12', 'vm-uuid': 'afb36bc5-8bfe-44dc-8be5-f7a657debc98'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.348 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:45 compute-0 NetworkManager[48899]: <info>  [1765599525.3501] manager: (tapd83adae9-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.353 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.355 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.357 243708 INFO os_vif [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9d:01:12,bridge_name='br-int',has_traffic_filtering=True,id=d83adae9-5340-4b94-ba3b-9b4adc9ac632,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83adae9-53')
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.409 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.409 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.410 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] No VIF found with MAC fa:16:3e:9d:01:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.410 243708 INFO nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Using config drive
Dec 13 04:18:45 compute-0 nova_compute[243704]: 2025-12-13 04:18:45.432 243708 DEBUG nova.storage.rbd_utils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image afb36bc5-8bfe-44dc-8be5-f7a657debc98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:18:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1374: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 6.3 KiB/s wr, 90 op/s
Dec 13 04:18:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2400160328' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2400160328' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e332 do_prune osdmap full prune enabled
Dec 13 04:18:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3743147439' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:18:46 compute-0 ceph-mon[75071]: pgmap v1374: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 64 KiB/s rd, 6.3 KiB/s wr, 90 op/s
Dec 13 04:18:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2400160328' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2400160328' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e333 e333: 3 total, 3 up, 3 in
Dec 13 04:18:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e333: 3 total, 3 up, 3 in
Dec 13 04:18:46 compute-0 nova_compute[243704]: 2025-12-13 04:18:46.386 243708 INFO nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Creating config drive at /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/disk.config
Dec 13 04:18:46 compute-0 nova_compute[243704]: 2025-12-13 04:18:46.392 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp30dnztrk execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:46 compute-0 nova_compute[243704]: 2025-12-13 04:18:46.524 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp30dnztrk" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:46 compute-0 nova_compute[243704]: 2025-12-13 04:18:46.552 243708 DEBUG nova.storage.rbd_utils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] rbd image afb36bc5-8bfe-44dc-8be5-f7a657debc98_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:18:46 compute-0 nova_compute[243704]: 2025-12-13 04:18:46.557 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/disk.config afb36bc5-8bfe-44dc-8be5-f7a657debc98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:18:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2282934621' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2282934621' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e333 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.323 243708 DEBUG nova.network.neutron [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updated VIF entry in instance network info cache for port d83adae9-5340-4b94-ba3b-9b4adc9ac632. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.325 243708 DEBUG nova.network.neutron [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updating instance_info_cache with network_info: [{"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.339 243708 DEBUG oslo_concurrency.lockutils [req-a41d9ce1-3298-4752-ad47-a64b8f7edf2f req-66a705af-ef84-40d0-b4cd-2137ada9c85a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:18:47 compute-0 ceph-mon[75071]: osdmap e333: 3 total, 3 up, 3 in
Dec 13 04:18:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2282934621' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2282934621' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.495 243708 DEBUG oslo_concurrency.processutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/disk.config afb36bc5-8bfe-44dc-8be5-f7a657debc98_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.938s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.496 243708 INFO nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Deleting local config drive /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98/disk.config because it was imported into RBD.
Dec 13 04:18:47 compute-0 kernel: tapd83adae9-53: entered promiscuous mode
Dec 13 04:18:47 compute-0 NetworkManager[48899]: <info>  [1765599527.5471] manager: (tapd83adae9-53): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Dec 13 04:18:47 compute-0 ovn_controller[145204]: 2025-12-13T04:18:47Z|00175|binding|INFO|Claiming lport d83adae9-5340-4b94-ba3b-9b4adc9ac632 for this chassis.
Dec 13 04:18:47 compute-0 ovn_controller[145204]: 2025-12-13T04:18:47Z|00176|binding|INFO|d83adae9-5340-4b94-ba3b-9b4adc9ac632: Claiming fa:16:3e:9d:01:12 10.100.0.13
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.546 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:47 compute-0 ovn_controller[145204]: 2025-12-13T04:18:47Z|00177|binding|INFO|Setting lport d83adae9-5340-4b94-ba3b-9b4adc9ac632 ovn-installed in OVS
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.566 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1376: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 5.2 KiB/s wr, 74 op/s
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.569 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:47 compute-0 systemd-machined[206767]: New machine qemu-18-instance-00000012.
Dec 13 04:18:47 compute-0 systemd[1]: Started Virtual Machine qemu-18-instance-00000012.
Dec 13 04:18:47 compute-0 systemd-udevd[266471]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:18:47 compute-0 NetworkManager[48899]: <info>  [1765599527.6337] device (tapd83adae9-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:18:47 compute-0 NetworkManager[48899]: <info>  [1765599527.6343] device (tapd83adae9-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:18:47 compute-0 podman[266455]: 2025-12-13 04:18:47.659483203 +0000 UTC m=+0.073925821 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.664 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.736 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:01:12 10.100.0.13'], port_security=['fa:16:3e:9d:01:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'afb36bc5-8bfe-44dc-8be5-f7a657debc98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'adaa204c-5288-4148-9761-e3b0718cf559', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=d83adae9-5340-4b94-ba3b-9b4adc9ac632) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:18:47 compute-0 ovn_controller[145204]: 2025-12-13T04:18:47Z|00178|binding|INFO|Setting lport d83adae9-5340-4b94-ba3b-9b4adc9ac632 up in Southbound
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.738 154842 INFO neutron.agent.ovn.metadata.agent [-] Port d83adae9-5340-4b94-ba3b-9b4adc9ac632 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 bound to our chassis
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.739 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.757 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b0157b13-e001-4579-9a40-5cd6ac9803c6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.786 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[9b0bf6c3-f4f1-4deb-b50a-8b8805441fbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.789 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c02526-529f-421f-ba97-211b0eac050c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.815 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b3c52107-cf72-47e4-9e26-3da28efa1225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.833 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[41eee9c3-40db-4154-b99c-b6ceb7c383d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421377, 'reachable_time': 42862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 266524, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.849 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2eaa8c7d-45b6-420c-8d6c-a2abdca8d61a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421388, 'tstamp': 421388}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266529, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421390, 'tstamp': 421390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 266529, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.852 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:47 compute-0 nova_compute[243704]: 2025-12-13 04:18:47.855 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.855 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.856 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.856 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:18:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:18:47.857 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.017 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599528.0165827, afb36bc5-8bfe-44dc-8be5-f7a657debc98 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.017 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] VM Started (Lifecycle Event)
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.050 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.054 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599528.0167537, afb36bc5-8bfe-44dc-8be5-f7a657debc98 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.055 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] VM Paused (Lifecycle Event)
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.071 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.076 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.112 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:18:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e333 do_prune osdmap full prune enabled
Dec 13 04:18:48 compute-0 ceph-mon[75071]: pgmap v1376: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 52 KiB/s rd, 5.2 KiB/s wr, 74 op/s
Dec 13 04:18:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e334 e334: 3 total, 3 up, 3 in
Dec 13 04:18:48 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e334: 3 total, 3 up, 3 in
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.790 243708 DEBUG nova.compute.manager [req-10c4c317-2d64-4bf7-90e8-577932bdf800 req-1edf6692-e7e8-467b-9b07-59f64137e97e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.791 243708 DEBUG oslo_concurrency.lockutils [req-10c4c317-2d64-4bf7-90e8-577932bdf800 req-1edf6692-e7e8-467b-9b07-59f64137e97e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.791 243708 DEBUG oslo_concurrency.lockutils [req-10c4c317-2d64-4bf7-90e8-577932bdf800 req-1edf6692-e7e8-467b-9b07-59f64137e97e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.792 243708 DEBUG oslo_concurrency.lockutils [req-10c4c317-2d64-4bf7-90e8-577932bdf800 req-1edf6692-e7e8-467b-9b07-59f64137e97e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.792 243708 DEBUG nova.compute.manager [req-10c4c317-2d64-4bf7-90e8-577932bdf800 req-1edf6692-e7e8-467b-9b07-59f64137e97e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Processing event network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.792 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.796 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599528.7965267, afb36bc5-8bfe-44dc-8be5-f7a657debc98 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.797 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] VM Resumed (Lifecycle Event)
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.799 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.803 243708 INFO nova.virt.libvirt.driver [-] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Instance spawned successfully.
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.803 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.824 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.832 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.836 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.836 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.837 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.837 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.837 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.838 243708 DEBUG nova.virt.libvirt.driver [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:18:48 compute-0 nova_compute[243704]: 2025-12-13 04:18:48.892 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:18:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1378: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 34 KiB/s wr, 221 op/s
Dec 13 04:18:49 compute-0 ceph-mon[75071]: osdmap e334: 3 total, 3 up, 3 in
Dec 13 04:18:49 compute-0 nova_compute[243704]: 2025-12-13 04:18:49.793 243708 INFO nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Took 7.70 seconds to spawn the instance on the hypervisor.
Dec 13 04:18:49 compute-0 nova_compute[243704]: 2025-12-13 04:18:49.794 243708 DEBUG nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:18:49 compute-0 nova_compute[243704]: 2025-12-13 04:18:49.870 243708 INFO nova.compute.manager [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Took 10.73 seconds to build instance.
Dec 13 04:18:49 compute-0 nova_compute[243704]: 2025-12-13 04:18:49.887 243708 DEBUG oslo_concurrency.lockutils [None req-12cb9130-0c61-47b5-9c33-b8da788c9f21 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912974671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:50 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2912974671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:50 compute-0 nova_compute[243704]: 2025-12-13 04:18:50.350 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:50 compute-0 ceph-mon[75071]: pgmap v1378: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 163 KiB/s rd, 34 KiB/s wr, 221 op/s
Dec 13 04:18:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2912974671' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:50 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2912974671' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:18:50 compute-0 nova_compute[243704]: 2025-12-13 04:18:50.988 243708 DEBUG nova.compute.manager [req-19456acf-9338-45e1-9ab5-2fed97be427c req-c6bb2d45-8b6e-4308-a93d-b707182ec47b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:50 compute-0 nova_compute[243704]: 2025-12-13 04:18:50.988 243708 DEBUG oslo_concurrency.lockutils [req-19456acf-9338-45e1-9ab5-2fed97be427c req-c6bb2d45-8b6e-4308-a93d-b707182ec47b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:18:50 compute-0 nova_compute[243704]: 2025-12-13 04:18:50.988 243708 DEBUG oslo_concurrency.lockutils [req-19456acf-9338-45e1-9ab5-2fed97be427c req-c6bb2d45-8b6e-4308-a93d-b707182ec47b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:18:50 compute-0 nova_compute[243704]: 2025-12-13 04:18:50.989 243708 DEBUG oslo_concurrency.lockutils [req-19456acf-9338-45e1-9ab5-2fed97be427c req-c6bb2d45-8b6e-4308-a93d-b707182ec47b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:18:50 compute-0 nova_compute[243704]: 2025-12-13 04:18:50.989 243708 DEBUG nova.compute.manager [req-19456acf-9338-45e1-9ab5-2fed97be427c req-c6bb2d45-8b6e-4308-a93d-b707182ec47b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] No waiting events found dispatching network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:18:50 compute-0 nova_compute[243704]: 2025-12-13 04:18:50.989 243708 WARNING nova.compute.manager [req-19456acf-9338-45e1-9ab5-2fed97be427c req-c6bb2d45-8b6e-4308-a93d-b707182ec47b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received unexpected event network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 for instance with vm_state active and task_state None.
Dec 13 04:18:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1379: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 25 KiB/s wr, 140 op/s
Dec 13 04:18:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e334 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e334 do_prune osdmap full prune enabled
Dec 13 04:18:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e335 e335: 3 total, 3 up, 3 in
Dec 13 04:18:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e335: 3 total, 3 up, 3 in
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.7731711115697065e-06 of space, bias 1.0, pg target 0.002031951333470912 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0011130584173543445 of space, bias 1.0, pg target 0.33391752520630336 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4020551525149996e-06 of space, bias 1.0, pg target 0.0007206165457544998 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665531729024828 of space, bias 1.0, pg target 0.19996595187074484 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.188149526419332e-06 of space, bias 4.0, pg target 0.0014257794317031982 quantized to 16 (current 16)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:18:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:18:52 compute-0 sudo[266536]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:18:52 compute-0 sudo[266536]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:52 compute-0 sudo[266536]: pam_unix(sudo:session): session closed for user root
Dec 13 04:18:52 compute-0 sudo[266561]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 04:18:52 compute-0 sudo[266561]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:52 compute-0 nova_compute[243704]: 2025-12-13 04:18:52.666 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1381: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 24 KiB/s wr, 124 op/s
Dec 13 04:18:54 compute-0 ceph-mon[75071]: pgmap v1379: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 25 KiB/s wr, 140 op/s
Dec 13 04:18:54 compute-0 ceph-mon[75071]: osdmap e335: 3 total, 3 up, 3 in
Dec 13 04:18:55 compute-0 nova_compute[243704]: 2025-12-13 04:18:55.352 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:55 compute-0 podman[266629]: 2025-12-13 04:18:55.397473759 +0000 UTC m=+2.351596612 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:18:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1382: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 22 KiB/s wr, 218 op/s
Dec 13 04:18:55 compute-0 podman[266650]: 2025-12-13 04:18:55.627172828 +0000 UTC m=+0.056923465 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:18:55 compute-0 podman[266629]: 2025-12-13 04:18:55.682320293 +0000 UTC m=+2.636443096 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Dec 13 04:18:55 compute-0 ceph-mon[75071]: pgmap v1381: 305 pgs: 2 active+clean+snaptrim, 9 active+clean+snaptrim_wait, 294 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 94 KiB/s rd, 24 KiB/s wr, 124 op/s
Dec 13 04:18:56 compute-0 sudo[266561]: pam_unix(sudo:session): session closed for user root
Dec 13 04:18:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:18:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:18:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:18:56 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:18:56 compute-0 sudo[266822]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:18:56 compute-0 sudo[266822]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:56 compute-0 sudo[266822]: pam_unix(sudo:session): session closed for user root
Dec 13 04:18:56 compute-0 sudo[266847]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:18:56 compute-0 sudo[266847]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e335 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:18:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e335 do_prune osdmap full prune enabled
Dec 13 04:18:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e336 e336: 3 total, 3 up, 3 in
Dec 13 04:18:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e336: 3 total, 3 up, 3 in
Dec 13 04:18:56 compute-0 ceph-mon[75071]: pgmap v1382: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 22 KiB/s wr, 218 op/s
Dec 13 04:18:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:18:57 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:18:57 compute-0 sudo[266847]: pam_unix(sudo:session): session closed for user root
Dec 13 04:18:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:18:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:18:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:18:57 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:18:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:18:57 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:18:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:18:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:18:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:18:57 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:18:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:18:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:18:57 compute-0 sudo[266903]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:18:57 compute-0 sudo[266903]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:57 compute-0 sudo[266903]: pam_unix(sudo:session): session closed for user root
Dec 13 04:18:57 compute-0 sudo[266928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:18:57 compute-0 sudo[266928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1384: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 383 B/s wr, 105 op/s
Dec 13 04:18:57 compute-0 podman[266965]: 2025-12-13 04:18:57.652301098 +0000 UTC m=+0.051797336 container create 927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:18:57 compute-0 nova_compute[243704]: 2025-12-13 04:18:57.669 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:18:57 compute-0 systemd[1]: Started libpod-conmon-927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b.scope.
Dec 13 04:18:57 compute-0 podman[266965]: 2025-12-13 04:18:57.625237494 +0000 UTC m=+0.024733832 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:18:57 compute-0 podman[266965]: 2025-12-13 04:18:57.746489543 +0000 UTC m=+0.145985811 container init 927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:18:57 compute-0 podman[266965]: 2025-12-13 04:18:57.755926388 +0000 UTC m=+0.155422626 container start 927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:18:57 compute-0 podman[266965]: 2025-12-13 04:18:57.759778272 +0000 UTC m=+0.159274540 container attach 927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:18:57 compute-0 eager_wiles[266981]: 167 167
Dec 13 04:18:57 compute-0 systemd[1]: libpod-927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b.scope: Deactivated successfully.
Dec 13 04:18:57 compute-0 podman[266965]: 2025-12-13 04:18:57.762913297 +0000 UTC m=+0.162409535 container died 927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:18:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-f335d1d6d70f41371619198bd8c89d304a122f508f288a8c14963d439864e631-merged.mount: Deactivated successfully.
Dec 13 04:18:57 compute-0 podman[266965]: 2025-12-13 04:18:57.80538179 +0000 UTC m=+0.204878038 container remove 927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_wiles, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Dec 13 04:18:57 compute-0 systemd[1]: libpod-conmon-927ff9b43ac878bc457d43687b7259fda1606bf2a615696f810209eec404762b.scope: Deactivated successfully.
Dec 13 04:18:57 compute-0 podman[267006]: 2025-12-13 04:18:57.987661912 +0000 UTC m=+0.050044748 container create fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:18:58 compute-0 ceph-mon[75071]: osdmap e336: 3 total, 3 up, 3 in
Dec 13 04:18:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:18:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:18:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:18:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:18:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:18:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:18:58 compute-0 systemd[1]: Started libpod-conmon-fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c.scope.
Dec 13 04:18:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:18:58 compute-0 podman[267006]: 2025-12-13 04:18:57.968963575 +0000 UTC m=+0.031346431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2364ed16f9921ec0dbac183a2526fc78ce01fa63a1d5753eb19be9b7190caae3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2364ed16f9921ec0dbac183a2526fc78ce01fa63a1d5753eb19be9b7190caae3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2364ed16f9921ec0dbac183a2526fc78ce01fa63a1d5753eb19be9b7190caae3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2364ed16f9921ec0dbac183a2526fc78ce01fa63a1d5753eb19be9b7190caae3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2364ed16f9921ec0dbac183a2526fc78ce01fa63a1d5753eb19be9b7190caae3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:58 compute-0 podman[267006]: 2025-12-13 04:18:58.081329722 +0000 UTC m=+0.143712588 container init fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:18:58 compute-0 podman[267006]: 2025-12-13 04:18:58.090742336 +0000 UTC m=+0.153125172 container start fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hellman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:18:58 compute-0 podman[267006]: 2025-12-13 04:18:58.094368015 +0000 UTC m=+0.156750871 container attach fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hellman, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:18:58 compute-0 boring_hellman[267023]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:18:58 compute-0 boring_hellman[267023]: --> All data devices are unavailable
Dec 13 04:18:58 compute-0 systemd[1]: libpod-fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c.scope: Deactivated successfully.
Dec 13 04:18:58 compute-0 conmon[267023]: conmon fcf34acf1d543e3c543a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c.scope/container/memory.events
Dec 13 04:18:58 compute-0 podman[267006]: 2025-12-13 04:18:58.598239318 +0000 UTC m=+0.660622154 container died fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:18:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-2364ed16f9921ec0dbac183a2526fc78ce01fa63a1d5753eb19be9b7190caae3-merged.mount: Deactivated successfully.
Dec 13 04:18:58 compute-0 podman[267006]: 2025-12-13 04:18:58.640573275 +0000 UTC m=+0.702956111 container remove fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_hellman, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:18:58 compute-0 systemd[1]: libpod-conmon-fcf34acf1d543e3c543a4f83bee596018a7663f7afbacfe0114890f074cccd8c.scope: Deactivated successfully.
Dec 13 04:18:58 compute-0 sudo[266928]: pam_unix(sudo:session): session closed for user root
Dec 13 04:18:58 compute-0 sudo[267055]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:18:58 compute-0 sudo[267055]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:58 compute-0 sudo[267055]: pam_unix(sudo:session): session closed for user root
Dec 13 04:18:58 compute-0 sudo[267080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:18:58 compute-0 sudo[267080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:18:59 compute-0 ceph-mon[75071]: pgmap v1384: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 383 B/s wr, 105 op/s
Dec 13 04:18:59 compute-0 podman[267118]: 2025-12-13 04:18:59.125841623 +0000 UTC m=+0.048557448 container create 4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:18:59 compute-0 systemd[1]: Started libpod-conmon-4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463.scope.
Dec 13 04:18:59 compute-0 podman[267118]: 2025-12-13 04:18:59.107816213 +0000 UTC m=+0.030532078 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:18:59 compute-0 podman[267118]: 2025-12-13 04:18:59.229127213 +0000 UTC m=+0.151843058 container init 4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:18:59 compute-0 podman[267118]: 2025-12-13 04:18:59.235992649 +0000 UTC m=+0.158708474 container start 4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:18:59 compute-0 podman[267118]: 2025-12-13 04:18:59.240002738 +0000 UTC m=+0.162718563 container attach 4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:18:59 compute-0 unruffled_dewdney[267134]: 167 167
Dec 13 04:18:59 compute-0 systemd[1]: libpod-4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463.scope: Deactivated successfully.
Dec 13 04:18:59 compute-0 conmon[267134]: conmon 4099218ea2e4d64c4a4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463.scope/container/memory.events
Dec 13 04:18:59 compute-0 podman[267118]: 2025-12-13 04:18:59.24229367 +0000 UTC m=+0.165009505 container died 4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce495eceb07331966ade2431b2524647f498d538238dad4acb5f903c83ded92b-merged.mount: Deactivated successfully.
Dec 13 04:18:59 compute-0 podman[267118]: 2025-12-13 04:18:59.284019591 +0000 UTC m=+0.206735416 container remove 4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=unruffled_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:18:59 compute-0 systemd[1]: libpod-conmon-4099218ea2e4d64c4a4b53e52db3885b19fac4ed31067f05671b3d4bbfb1d463.scope: Deactivated successfully.
Dec 13 04:18:59 compute-0 podman[267157]: 2025-12-13 04:18:59.486743669 +0000 UTC m=+0.047825928 container create 32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_albattani, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:18:59 compute-0 systemd[1]: Started libpod-conmon-32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa.scope.
Dec 13 04:18:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:18:59 compute-0 podman[267157]: 2025-12-13 04:18:59.470078757 +0000 UTC m=+0.031161036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5699728e4b6a6cd4d93ba3e931040328b7a80d494c61036681d049f63ca8f688/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5699728e4b6a6cd4d93ba3e931040328b7a80d494c61036681d049f63ca8f688/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5699728e4b6a6cd4d93ba3e931040328b7a80d494c61036681d049f63ca8f688/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5699728e4b6a6cd4d93ba3e931040328b7a80d494c61036681d049f63ca8f688/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:18:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1385: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.4 KiB/s wr, 108 op/s
Dec 13 04:18:59 compute-0 podman[267157]: 2025-12-13 04:18:59.574424786 +0000 UTC m=+0.135507065 container init 32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_albattani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:18:59 compute-0 podman[267157]: 2025-12-13 04:18:59.583965244 +0000 UTC m=+0.145047503 container start 32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_albattani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:18:59 compute-0 podman[267157]: 2025-12-13 04:18:59.587578843 +0000 UTC m=+0.148661122 container attach 32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_albattani, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:18:59 compute-0 nova_compute[243704]: 2025-12-13 04:18:59.853 243708 DEBUG nova.compute.manager [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-changed-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:18:59 compute-0 nova_compute[243704]: 2025-12-13 04:18:59.857 243708 DEBUG nova.compute.manager [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Refreshing instance network info cache due to event network-changed-d83adae9-5340-4b94-ba3b-9b4adc9ac632. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:18:59 compute-0 nova_compute[243704]: 2025-12-13 04:18:59.857 243708 DEBUG oslo_concurrency.lockutils [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:18:59 compute-0 nova_compute[243704]: 2025-12-13 04:18:59.858 243708 DEBUG oslo_concurrency.lockutils [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:18:59 compute-0 nova_compute[243704]: 2025-12-13 04:18:59.858 243708 DEBUG nova.network.neutron [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Refreshing network info cache for port d83adae9-5340-4b94-ba3b-9b4adc9ac632 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]: {
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:     "0": [
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:         {
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "devices": [
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "/dev/loop3"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             ],
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_name": "ceph_lv0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_size": "21470642176",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "name": "ceph_lv0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "tags": {
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cluster_name": "ceph",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.crush_device_class": "",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.encrypted": "0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.objectstore": "bluestore",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osd_id": "0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.type": "block",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.vdo": "0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.with_tpm": "0"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             },
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "type": "block",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "vg_name": "ceph_vg0"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:         }
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:     ],
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:     "1": [
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:         {
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "devices": [
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "/dev/loop4"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             ],
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_name": "ceph_lv1",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_size": "21470642176",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "name": "ceph_lv1",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "tags": {
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cluster_name": "ceph",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.crush_device_class": "",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.encrypted": "0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.objectstore": "bluestore",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osd_id": "1",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.type": "block",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.vdo": "0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.with_tpm": "0"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             },
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "type": "block",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "vg_name": "ceph_vg1"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:         }
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:     ],
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:     "2": [
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:         {
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "devices": [
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "/dev/loop5"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             ],
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_name": "ceph_lv2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_size": "21470642176",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "name": "ceph_lv2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "tags": {
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.cluster_name": "ceph",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.crush_device_class": "",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.encrypted": "0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.objectstore": "bluestore",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osd_id": "2",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.type": "block",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.vdo": "0",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:                 "ceph.with_tpm": "0"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             },
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "type": "block",
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:             "vg_name": "ceph_vg2"
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:         }
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]:     ]
Dec 13 04:18:59 compute-0 compassionate_albattani[267174]: }
Dec 13 04:18:59 compute-0 systemd[1]: libpod-32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa.scope: Deactivated successfully.
Dec 13 04:18:59 compute-0 podman[267157]: 2025-12-13 04:18:59.91428451 +0000 UTC m=+0.475366789 container died 32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:18:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-5699728e4b6a6cd4d93ba3e931040328b7a80d494c61036681d049f63ca8f688-merged.mount: Deactivated successfully.
Dec 13 04:18:59 compute-0 podman[267157]: 2025-12-13 04:18:59.948957621 +0000 UTC m=+0.510039880 container remove 32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_albattani, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:18:59 compute-0 systemd[1]: libpod-conmon-32ee415230d3f005d2d1bd777da5e9b24dc5950731a2a7937cc00eb729b128aa.scope: Deactivated successfully.
Dec 13 04:18:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:18:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122439078' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:18:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:18:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122439078' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e336 do_prune osdmap full prune enabled
Dec 13 04:19:00 compute-0 sudo[267080]: pam_unix(sudo:session): session closed for user root
Dec 13 04:19:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e337 e337: 3 total, 3 up, 3 in
Dec 13 04:19:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1122439078' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1122439078' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e337: 3 total, 3 up, 3 in
Dec 13 04:19:00 compute-0 sudo[267194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:19:00 compute-0 sudo[267194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:19:00 compute-0 sudo[267194]: pam_unix(sudo:session): session closed for user root
Dec 13 04:19:00 compute-0 sudo[267219]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:19:00 compute-0 sudo[267219]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:19:00 compute-0 nova_compute[243704]: 2025-12-13 04:19:00.354 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:00 compute-0 podman[267255]: 2025-12-13 04:19:00.422303926 +0000 UTC m=+0.039806351 container create 18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:19:00 compute-0 systemd[1]: Started libpod-conmon-18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009.scope.
Dec 13 04:19:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:19:00 compute-0 podman[267255]: 2025-12-13 04:19:00.405906361 +0000 UTC m=+0.023408806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:00 compute-0 podman[267255]: 2025-12-13 04:19:00.50875681 +0000 UTC m=+0.126259255 container init 18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:19:00 compute-0 podman[267255]: 2025-12-13 04:19:00.520333084 +0000 UTC m=+0.137835509 container start 18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_thompson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:19:00 compute-0 podman[267255]: 2025-12-13 04:19:00.523718565 +0000 UTC m=+0.141221040 container attach 18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:19:00 compute-0 hardcore_thompson[267271]: 167 167
Dec 13 04:19:00 compute-0 systemd[1]: libpod-18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009.scope: Deactivated successfully.
Dec 13 04:19:00 compute-0 podman[267276]: 2025-12-13 04:19:00.581371338 +0000 UTC m=+0.029781138 container died 18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_thompson, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:19:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-2cc3f16cbd208e9004d25893e1c9b80d4c4223d972cab5934ce469edef4fe975-merged.mount: Deactivated successfully.
Dec 13 04:19:00 compute-0 podman[267276]: 2025-12-13 04:19:00.618458424 +0000 UTC m=+0.066868174 container remove 18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hardcore_thompson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:19:00 compute-0 systemd[1]: libpod-conmon-18dee0433b4d74710d9cb85144cd05b21d655a6f857963abab009732c5f3a009.scope: Deactivated successfully.
Dec 13 04:19:00 compute-0 podman[267297]: 2025-12-13 04:19:00.818805746 +0000 UTC m=+0.045835253 container create 9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bartik, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:19:00 compute-0 systemd[1]: Started libpod-conmon-9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d.scope.
Dec 13 04:19:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343acbcdf8706bc7f5fc072d376ac02719be7ebd82e0381af5c386944d7c1815/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343acbcdf8706bc7f5fc072d376ac02719be7ebd82e0381af5c386944d7c1815/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343acbcdf8706bc7f5fc072d376ac02719be7ebd82e0381af5c386944d7c1815/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:00 compute-0 podman[267297]: 2025-12-13 04:19:00.800258814 +0000 UTC m=+0.027288341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:19:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/343acbcdf8706bc7f5fc072d376ac02719be7ebd82e0381af5c386944d7c1815/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:19:00 compute-0 podman[267297]: 2025-12-13 04:19:00.908077507 +0000 UTC m=+0.135107034 container init 9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bartik, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:19:00 compute-0 nova_compute[243704]: 2025-12-13 04:19:00.911 243708 DEBUG nova.network.neutron [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updated VIF entry in instance network info cache for port d83adae9-5340-4b94-ba3b-9b4adc9ac632. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:19:00 compute-0 nova_compute[243704]: 2025-12-13 04:19:00.912 243708 DEBUG nova.network.neutron [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updating instance_info_cache with network_info: [{"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:00 compute-0 podman[267297]: 2025-12-13 04:19:00.916773572 +0000 UTC m=+0.143803079 container start 9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:19:00 compute-0 podman[267297]: 2025-12-13 04:19:00.920949796 +0000 UTC m=+0.147979323 container attach 9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bartik, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:19:00 compute-0 nova_compute[243704]: 2025-12-13 04:19:00.936 243708 DEBUG oslo_concurrency.lockutils [req-9bac7a59-ce19-4a73-96a8-23947a0548b9 req-feab8a05-b2e7-4fa0-82ff-8ad7e63e64ff 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:19:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e337 do_prune osdmap full prune enabled
Dec 13 04:19:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e338 e338: 3 total, 3 up, 3 in
Dec 13 04:19:01 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e338: 3 total, 3 up, 3 in
Dec 13 04:19:01 compute-0 ceph-mon[75071]: pgmap v1385: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.4 KiB/s wr, 108 op/s
Dec 13 04:19:01 compute-0 ceph-mon[75071]: osdmap e337: 3 total, 3 up, 3 in
Dec 13 04:19:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1388: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Dec 13 04:19:01 compute-0 ovn_controller[145204]: 2025-12-13T04:19:01Z|00040|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.13
Dec 13 04:19:01 compute-0 ovn_controller[145204]: 2025-12-13T04:19:01Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:9d:01:12 10.100.0.13
Dec 13 04:19:01 compute-0 lvm[267391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:19:01 compute-0 lvm[267391]: VG ceph_vg0 finished
Dec 13 04:19:01 compute-0 lvm[267392]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:19:01 compute-0 lvm[267392]: VG ceph_vg1 finished
Dec 13 04:19:01 compute-0 lvm[267394]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:19:01 compute-0 lvm[267394]: VG ceph_vg2 finished
Dec 13 04:19:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e338 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:02 compute-0 ceph-mon[75071]: osdmap e338: 3 total, 3 up, 3 in
Dec 13 04:19:02 compute-0 boring_bartik[267313]: {}
Dec 13 04:19:02 compute-0 systemd[1]: libpod-9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d.scope: Deactivated successfully.
Dec 13 04:19:02 compute-0 podman[267297]: 2025-12-13 04:19:02.100488069 +0000 UTC m=+1.327517576 container died 9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:19:02 compute-0 systemd[1]: libpod-9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d.scope: Consumed 1.996s CPU time.
Dec 13 04:19:02 compute-0 podman[267396]: 2025-12-13 04:19:02.160846245 +0000 UTC m=+0.121957137 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:19:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-343acbcdf8706bc7f5fc072d376ac02719be7ebd82e0381af5c386944d7c1815-merged.mount: Deactivated successfully.
Dec 13 04:19:02 compute-0 podman[267297]: 2025-12-13 04:19:02.399688862 +0000 UTC m=+1.626718369 container remove 9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=boring_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:19:02 compute-0 systemd[1]: libpod-conmon-9c9cf787676bd834f30a33bbf788176e890e4ca0cef950b254f7f1c76cbf118d.scope: Deactivated successfully.
Dec 13 04:19:02 compute-0 sudo[267219]: pam_unix(sudo:session): session closed for user root
Dec 13 04:19:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:19:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:19:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:19:02 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:19:02 compute-0 sudo[267436]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:19:02 compute-0 sudo[267436]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:19:02 compute-0 sudo[267436]: pam_unix(sudo:session): session closed for user root
Dec 13 04:19:02 compute-0 nova_compute[243704]: 2025-12-13 04:19:02.671 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:03 compute-0 ceph-mon[75071]: pgmap v1388: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1.3 KiB/s wr, 3 op/s
Dec 13 04:19:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:19:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:19:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004686452' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3004686452' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:03 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:03.151 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:19:03 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:03.153 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:19:03 compute-0 nova_compute[243704]: 2025-12-13 04:19:03.153 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1389: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 568 KiB/s rd, 3.4 KiB/s wr, 36 op/s
Dec 13 04:19:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3004686452' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3004686452' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:05 compute-0 ceph-mon[75071]: pgmap v1389: 305 pgs: 305 active+clean; 169 MiB data, 376 MiB used, 60 GiB / 60 GiB avail; 568 KiB/s rd, 3.4 KiB/s wr, 36 op/s
Dec 13 04:19:05 compute-0 nova_compute[243704]: 2025-12-13 04:19:05.356 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1390: 305 pgs: 305 active+clean; 183 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 744 KiB/s wr, 143 op/s
Dec 13 04:19:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e338 do_prune osdmap full prune enabled
Dec 13 04:19:06 compute-0 ceph-mon[75071]: pgmap v1390: 305 pgs: 305 active+clean; 183 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 744 KiB/s wr, 143 op/s
Dec 13 04:19:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e339 e339: 3 total, 3 up, 3 in
Dec 13 04:19:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e339: 3 total, 3 up, 3 in
Dec 13 04:19:06 compute-0 ovn_controller[145204]: 2025-12-13T04:19:06Z|00042|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.10 does not match offer 10.100.0.13
Dec 13 04:19:06 compute-0 ovn_controller[145204]: 2025-12-13T04:19:06Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:9d:01:12 10.100.0.13
Dec 13 04:19:06 compute-0 ovn_controller[145204]: 2025-12-13T04:19:06Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9d:01:12 10.100.0.13
Dec 13 04:19:06 compute-0 ovn_controller[145204]: 2025-12-13T04:19:06Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9d:01:12 10.100.0.13
Dec 13 04:19:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e339 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e339 do_prune osdmap full prune enabled
Dec 13 04:19:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e340 e340: 3 total, 3 up, 3 in
Dec 13 04:19:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e340: 3 total, 3 up, 3 in
Dec 13 04:19:07 compute-0 ceph-mon[75071]: osdmap e339: 3 total, 3 up, 3 in
Dec 13 04:19:07 compute-0 ceph-mon[75071]: osdmap e340: 3 total, 3 up, 3 in
Dec 13 04:19:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1393: 305 pgs: 305 active+clean; 183 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 905 KiB/s wr, 171 op/s
Dec 13 04:19:07 compute-0 nova_compute[243704]: 2025-12-13 04:19:07.672 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:08.156 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:19:08 compute-0 ceph-mon[75071]: pgmap v1393: 305 pgs: 305 active+clean; 183 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 905 KiB/s wr, 171 op/s
Dec 13 04:19:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1394: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 873 KiB/s wr, 177 op/s
Dec 13 04:19:10 compute-0 nova_compute[243704]: 2025-12-13 04:19:10.358 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:10 compute-0 ceph-mon[75071]: pgmap v1394: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 873 KiB/s wr, 177 op/s
Dec 13 04:19:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e340 do_prune osdmap full prune enabled
Dec 13 04:19:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1395: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 871 KiB/s wr, 150 op/s
Dec 13 04:19:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e341 e341: 3 total, 3 up, 3 in
Dec 13 04:19:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e341: 3 total, 3 up, 3 in
Dec 13 04:19:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:12 compute-0 nova_compute[243704]: 2025-12-13 04:19:12.674 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:12 compute-0 ceph-mon[75071]: pgmap v1395: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.8 MiB/s rd, 871 KiB/s wr, 150 op/s
Dec 13 04:19:12 compute-0 ceph-mon[75071]: osdmap e341: 3 total, 3 up, 3 in
Dec 13 04:19:12 compute-0 podman[267461]: 2025-12-13 04:19:12.945093847 +0000 UTC m=+0.081597734 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Dec 13 04:19:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1397: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 135 KiB/s wr, 46 op/s
Dec 13 04:19:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2648759440' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2648759440' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:13 compute-0 nova_compute[243704]: 2025-12-13 04:19:13.886 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2648759440' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2648759440' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:14 compute-0 nova_compute[243704]: 2025-12-13 04:19:14.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:14 compute-0 nova_compute[243704]: 2025-12-13 04:19:14.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:19:14 compute-0 ceph-mon[75071]: pgmap v1397: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 135 KiB/s wr, 46 op/s
Dec 13 04:19:15 compute-0 nova_compute[243704]: 2025-12-13 04:19:15.347 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:19:15 compute-0 nova_compute[243704]: 2025-12-13 04:19:15.348 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:19:15 compute-0 nova_compute[243704]: 2025-12-13 04:19:15.348 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:19:15 compute-0 nova_compute[243704]: 2025-12-13 04:19:15.365 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1398: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 120 KiB/s wr, 78 op/s
Dec 13 04:19:16 compute-0 ceph-mon[75071]: pgmap v1398: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 120 KiB/s wr, 78 op/s
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.593 243708 DEBUG nova.compute.manager [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-changed-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.593 243708 DEBUG nova.compute.manager [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Refreshing instance network info cache due to event network-changed-d83adae9-5340-4b94-ba3b-9b4adc9ac632. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.593 243708 DEBUG oslo_concurrency.lockutils [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.593 243708 DEBUG oslo_concurrency.lockutils [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.594 243708 DEBUG nova.network.neutron [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Refreshing network info cache for port d83adae9-5340-4b94-ba3b-9b4adc9ac632 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.677 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.678 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.678 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.679 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.679 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.680 243708 INFO nova.compute.manager [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Terminating instance
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.682 243708 DEBUG nova.compute.manager [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:19:16 compute-0 kernel: tapd83adae9-53 (unregistering): left promiscuous mode
Dec 13 04:19:16 compute-0 NetworkManager[48899]: <info>  [1765599556.7415] device (tapd83adae9-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:19:16 compute-0 ovn_controller[145204]: 2025-12-13T04:19:16Z|00179|binding|INFO|Releasing lport d83adae9-5340-4b94-ba3b-9b4adc9ac632 from this chassis (sb_readonly=0)
Dec 13 04:19:16 compute-0 ovn_controller[145204]: 2025-12-13T04:19:16Z|00180|binding|INFO|Setting lport d83adae9-5340-4b94-ba3b-9b4adc9ac632 down in Southbound
Dec 13 04:19:16 compute-0 ovn_controller[145204]: 2025-12-13T04:19:16Z|00181|binding|INFO|Removing iface tapd83adae9-53 ovn-installed in OVS
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.752 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.754 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.760 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9d:01:12 10.100.0.13'], port_security=['fa:16:3e:9d:01:12 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'afb36bc5-8bfe-44dc-8be5-f7a657debc98', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'adaa204c-5288-4148-9761-e3b0718cf559', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=d83adae9-5340-4b94-ba3b-9b4adc9ac632) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.761 154842 INFO neutron.agent.ovn.metadata.agent [-] Port d83adae9-5340-4b94-ba3b-9b4adc9ac632 in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.763 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.782 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.789 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e968549f-dc43-4936-b533-60e83a51b8f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:16 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Deactivated successfully.
Dec 13 04:19:16 compute-0 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d00000012.scope: Consumed 14.698s CPU time.
Dec 13 04:19:16 compute-0 systemd-machined[206767]: Machine qemu-18-instance-00000012 terminated.
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.825 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[58676426-4697-46ba-b476-aa85c9c7a502]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.829 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[1a855ec3-babd-4755-bd6d-b39fa1623aea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.861 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[233747f7-7d95-4136-9445-ff076667c86f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.882 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3297f973-5fc4-49b2-9706-1ed49f31a3ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfc553cd2-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:53:ae:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 700, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 58], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421377, 'reachable_time': 42862, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267490, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.904 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[dc2c4df6-db78-4fd2-a08d-5eccc95e7b77]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421388, 'tstamp': 421388}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267491, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfc553cd2-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 421390, 'tstamp': 421390}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 267491, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.905 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.906 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.907 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.914 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.914 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc553cd2-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.914 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.915 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfc553cd2-50, col_values=(('external_ids', {'iface-id': '89e12177-98ba-49d1-8f15-68c87072167e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:19:16 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:16.915 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.920 243708 INFO nova.virt.libvirt.driver [-] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Instance destroyed successfully.
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.920 243708 DEBUG nova.objects.instance [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'resources' on Instance uuid afb36bc5-8bfe-44dc-8be5-f7a657debc98 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.938 243708 DEBUG nova.virt.libvirt.vif [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:18:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-1364564926',display_name='tempest-TestVolumeBootPattern-server-1364564926',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-1364564926',id=18,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:18:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-2k8ugr6x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:18:49Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=afb36bc5-8bfe-44dc-8be5-f7a657debc98,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.939 243708 DEBUG nova.network.os_vif_util [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.939 243708 DEBUG nova.network.os_vif_util [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9d:01:12,bridge_name='br-int',has_traffic_filtering=True,id=d83adae9-5340-4b94-ba3b-9b4adc9ac632,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83adae9-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.940 243708 DEBUG os_vif [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:01:12,bridge_name='br-int',has_traffic_filtering=True,id=d83adae9-5340-4b94-ba3b-9b4adc9ac632,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83adae9-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.941 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.942 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd83adae9-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.943 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.944 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.947 243708 INFO os_vif [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9d:01:12,bridge_name='br-int',has_traffic_filtering=True,id=d83adae9-5340-4b94-ba3b-9b4adc9ac632,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd83adae9-53')
Dec 13 04:19:16 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.973 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updating instance_info_cache with network_info: [{"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e341 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e341 do_prune osdmap full prune enabled
Dec 13 04:19:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e342 e342: 3 total, 3 up, 3 in
Dec 13 04:19:16 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e342: 3 total, 3 up, 3 in
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:16.999 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.000 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.003 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.037 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.038 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.038 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.038 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.039 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.067 243708 DEBUG nova.compute.manager [req-5a7ed229-5548-4c09-9344-6aa6bba7f161 req-d4de5037-7f09-457c-a4a3-d3266e8c9309 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-vif-unplugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.068 243708 DEBUG oslo_concurrency.lockutils [req-5a7ed229-5548-4c09-9344-6aa6bba7f161 req-d4de5037-7f09-457c-a4a3-d3266e8c9309 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.068 243708 DEBUG oslo_concurrency.lockutils [req-5a7ed229-5548-4c09-9344-6aa6bba7f161 req-d4de5037-7f09-457c-a4a3-d3266e8c9309 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.068 243708 DEBUG oslo_concurrency.lockutils [req-5a7ed229-5548-4c09-9344-6aa6bba7f161 req-d4de5037-7f09-457c-a4a3-d3266e8c9309 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.069 243708 DEBUG nova.compute.manager [req-5a7ed229-5548-4c09-9344-6aa6bba7f161 req-d4de5037-7f09-457c-a4a3-d3266e8c9309 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] No waiting events found dispatching network-vif-unplugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.069 243708 DEBUG nova.compute.manager [req-5a7ed229-5548-4c09-9344-6aa6bba7f161 req-d4de5037-7f09-457c-a4a3-d3266e8c9309 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-vif-unplugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.108 243708 INFO nova.virt.libvirt.driver [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Deleting instance files /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98_del
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.109 243708 INFO nova.virt.libvirt.driver [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Deletion of /var/lib/nova/instances/afb36bc5-8bfe-44dc-8be5-f7a657debc98_del complete
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.183 243708 INFO nova.compute.manager [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Took 0.50 seconds to destroy the instance on the hypervisor.
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.183 243708 DEBUG oslo.service.loopingcall [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.184 243708 DEBUG nova.compute.manager [-] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.184 243708 DEBUG nova.network.neutron [-] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:19:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3426238128' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3426238128' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3858501537' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.601 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.658 243708 DEBUG nova.network.neutron [-] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1400: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.0 KiB/s wr, 49 op/s
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.677 243708 INFO nova.compute.manager [-] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Took 0.49 seconds to deallocate network for instance.
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.677 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.686 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.687 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.720 243708 DEBUG nova.network.neutron [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updated VIF entry in instance network info cache for port d83adae9-5340-4b94-ba3b-9b4adc9ac632. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.721 243708 DEBUG nova.network.neutron [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updating instance_info_cache with network_info: [{"id": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "address": "fa:16:3e:9d:01:12", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd83adae9-53", "ovs_interfaceid": "d83adae9-5340-4b94-ba3b-9b4adc9ac632", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.770 243708 DEBUG oslo_concurrency.lockutils [req-b904a14e-8f2a-421c-806a-6c859f013cf6 req-3e6ef3eb-3d08-4916-8256-45a5a49b549a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-afb36bc5-8bfe-44dc-8be5-f7a657debc98" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.831 243708 INFO nova.compute.manager [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Took 0.15 seconds to detach 1 volumes for instance.
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.841 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.842 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4256MB free_disk=59.987875228747725GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.842 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.843 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.895 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:17 compute-0 podman[267545]: 2025-12-13 04:19:17.944103261 +0000 UTC m=+0.086362451 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0)
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.948 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 5963e695-3cc7-4994-977e-b08fa7a682a1 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.949 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance afb36bc5-8bfe-44dc-8be5-f7a657debc98 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.949 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.949 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.968 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing inventories for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.987 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating ProviderTree inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:19:17 compute-0 nova_compute[243704]: 2025-12-13 04:19:17.988 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:19:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e342 do_prune osdmap full prune enabled
Dec 13 04:19:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e343 e343: 3 total, 3 up, 3 in
Dec 13 04:19:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e343: 3 total, 3 up, 3 in
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.005 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing aggregate associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:19:18 compute-0 ceph-mon[75071]: osdmap e342: 3 total, 3 up, 3 in
Dec 13 04:19:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3426238128' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3426238128' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3858501537' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.030 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing trait associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.083 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022257588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.643 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.648 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.660 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.687 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.688 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.845s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.688 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:18 compute-0 nova_compute[243704]: 2025-12-13 04:19:18.743 243708 DEBUG oslo_concurrency.processutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:19 compute-0 ceph-mon[75071]: pgmap v1400: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 8.0 KiB/s wr, 49 op/s
Dec 13 04:19:19 compute-0 ceph-mon[75071]: osdmap e343: 3 total, 3 up, 3 in
Dec 13 04:19:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1022257588' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.116 243708 DEBUG nova.compute.manager [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.117 243708 DEBUG oslo_concurrency.lockutils [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.118 243708 DEBUG oslo_concurrency.lockutils [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.118 243708 DEBUG oslo_concurrency.lockutils [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.118 243708 DEBUG nova.compute.manager [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] No waiting events found dispatching network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.119 243708 WARNING nova.compute.manager [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received unexpected event network-vif-plugged-d83adae9-5340-4b94-ba3b-9b4adc9ac632 for instance with vm_state deleted and task_state None.
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.119 243708 DEBUG nova.compute.manager [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Received event network-vif-deleted-d83adae9-5340-4b94-ba3b-9b4adc9ac632 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.119 243708 INFO nova.compute.manager [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Neutron deleted interface d83adae9-5340-4b94-ba3b-9b4adc9ac632; detaching it from the instance and deleting it from the info cache
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.120 243708 DEBUG nova.network.neutron [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.138 243708 DEBUG nova.compute.manager [req-c121decf-8e62-41c2-898f-56a815fc6608 req-4f662756-2210-4545-81a1-a92052ec84c5 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Detach interface failed, port_id=d83adae9-5340-4b94-ba3b-9b4adc9ac632, reason: Instance afb36bc5-8bfe-44dc-8be5-f7a657debc98 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Dec 13 04:19:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019343923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4019343923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/233667130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.275 243708 DEBUG oslo_concurrency.processutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.280 243708 DEBUG nova.compute.provider_tree [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.295 243708 DEBUG nova.scheduler.client.report [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.327 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.357 243708 INFO nova.scheduler.client.report [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Deleted allocations for instance afb36bc5-8bfe-44dc-8be5-f7a657debc98
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.418 243708 DEBUG oslo_concurrency.lockutils [None req-0ce461b7-d553-49b6-a670-eceea9d663ea 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "afb36bc5-8bfe-44dc-8be5-f7a657debc98" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.562 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.563 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.564 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.564 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1402: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 165 op/s
Dec 13 04:19:19 compute-0 nova_compute[243704]: 2025-12-13 04:19:19.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e343 do_prune osdmap full prune enabled
Dec 13 04:19:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e344 e344: 3 total, 3 up, 3 in
Dec 13 04:19:20 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e344: 3 total, 3 up, 3 in
Dec 13 04:19:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4019343923' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4019343923' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/233667130' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664126323' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2664126323' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:21 compute-0 ceph-mon[75071]: pgmap v1402: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 426 KiB/s rd, 2.7 MiB/s wr, 165 op/s
Dec 13 04:19:21 compute-0 ceph-mon[75071]: osdmap e344: 3 total, 3 up, 3 in
Dec 13 04:19:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2664126323' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2664126323' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1404: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 517 KiB/s rd, 3.5 MiB/s wr, 153 op/s
Dec 13 04:19:21 compute-0 nova_compute[243704]: 2025-12-13 04:19:21.991 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e344 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e344 do_prune osdmap full prune enabled
Dec 13 04:19:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e345 e345: 3 total, 3 up, 3 in
Dec 13 04:19:22 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e345: 3 total, 3 up, 3 in
Dec 13 04:19:22 compute-0 nova_compute[243704]: 2025-12-13 04:19:22.678 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:22 compute-0 nova_compute[243704]: 2025-12-13 04:19:22.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:19:22 compute-0 nova_compute[243704]: 2025-12-13 04:19:22.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:19:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e345 do_prune osdmap full prune enabled
Dec 13 04:19:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1406: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 221 op/s
Dec 13 04:19:23 compute-0 ceph-mon[75071]: pgmap v1404: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 517 KiB/s rd, 3.5 MiB/s wr, 153 op/s
Dec 13 04:19:23 compute-0 ceph-mon[75071]: osdmap e345: 3 total, 3 up, 3 in
Dec 13 04:19:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e346 e346: 3 total, 3 up, 3 in
Dec 13 04:19:24 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e346: 3 total, 3 up, 3 in
Dec 13 04:19:24 compute-0 ceph-mon[75071]: pgmap v1406: 305 pgs: 305 active+clean; 187 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 221 op/s
Dec 13 04:19:24 compute-0 ceph-mon[75071]: osdmap e346: 3 total, 3 up, 3 in
Dec 13 04:19:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1408: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.0 KiB/s wr, 123 op/s
Dec 13 04:19:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e346 do_prune osdmap full prune enabled
Dec 13 04:19:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e347 e347: 3 total, 3 up, 3 in
Dec 13 04:19:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e347: 3 total, 3 up, 3 in
Dec 13 04:19:26 compute-0 nova_compute[243704]: 2025-12-13 04:19:26.993 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e347 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:27 compute-0 ceph-mon[75071]: pgmap v1408: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.0 KiB/s wr, 123 op/s
Dec 13 04:19:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1410: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.0 KiB/s wr, 123 op/s
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.680 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.708 243708 DEBUG nova.compute.manager [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-changed-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.709 243708 DEBUG nova.compute.manager [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Refreshing instance network info cache due to event network-changed-ad72d283-b1a5-4889-9e04-0297897b4cad. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.709 243708 DEBUG oslo_concurrency.lockutils [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.709 243708 DEBUG oslo_concurrency.lockutils [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.709 243708 DEBUG nova.network.neutron [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Refreshing network info cache for port ad72d283-b1a5-4889-9e04-0297897b4cad _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.827 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.827 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.828 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.828 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.828 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.829 243708 INFO nova.compute.manager [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Terminating instance
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.830 243708 DEBUG nova.compute.manager [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:19:27 compute-0 kernel: tapad72d283-b1 (unregistering): left promiscuous mode
Dec 13 04:19:27 compute-0 NetworkManager[48899]: <info>  [1765599567.9089] device (tapad72d283-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:19:27 compute-0 ovn_controller[145204]: 2025-12-13T04:19:27Z|00182|binding|INFO|Releasing lport ad72d283-b1a5-4889-9e04-0297897b4cad from this chassis (sb_readonly=0)
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.914 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:27 compute-0 ovn_controller[145204]: 2025-12-13T04:19:27Z|00183|binding|INFO|Setting lport ad72d283-b1a5-4889-9e04-0297897b4cad down in Southbound
Dec 13 04:19:27 compute-0 ovn_controller[145204]: 2025-12-13T04:19:27Z|00184|binding|INFO|Removing iface tapad72d283-b1 ovn-installed in OVS
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.916 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:27.920 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:ad:da 10.100.0.10'], port_security=['fa:16:3e:d3:ad:da 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '5963e695-3cc7-4994-977e-b08fa7a682a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '27927978f9684df1a72cecb32505e93b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'adaa204c-5288-4148-9761-e3b0718cf559', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c25212e-96dc-4c16-8225-64fcdcfdf066, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=ad72d283-b1a5-4889-9e04-0297897b4cad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:19:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:27.922 154842 INFO neutron.agent.ovn.metadata.agent [-] Port ad72d283-b1a5-4889-9e04-0297897b4cad in datapath fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 unbound from our chassis
Dec 13 04:19:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:27.923 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:19:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:27.924 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[35f152c0-5220-4b58-af70-aa19dfc9ce1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:27.925 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 namespace which is not needed anymore
Dec 13 04:19:27 compute-0 nova_compute[243704]: 2025-12-13 04:19:27.939 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:27 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Deactivated successfully.
Dec 13 04:19:27 compute-0 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d00000011.scope: Consumed 16.855s CPU time.
Dec 13 04:19:27 compute-0 systemd-machined[206767]: Machine qemu-17-instance-00000011 terminated.
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.066 243708 INFO nova.virt.libvirt.driver [-] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Instance destroyed successfully.
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.067 243708 DEBUG nova.objects.instance [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lazy-loading 'resources' on Instance uuid 5963e695-3cc7-4994-977e-b08fa7a682a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:19:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e347 do_prune osdmap full prune enabled
Dec 13 04:19:28 compute-0 ceph-mon[75071]: osdmap e347: 3 total, 3 up, 3 in
Dec 13 04:19:28 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[265995]: [NOTICE]   (265999) : haproxy version is 2.8.14-c23fe91
Dec 13 04:19:28 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[265995]: [NOTICE]   (265999) : path to executable is /usr/sbin/haproxy
Dec 13 04:19:28 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[265995]: [WARNING]  (265999) : Exiting Master process...
Dec 13 04:19:28 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[265995]: [ALERT]    (265999) : Current worker (266001) exited with code 143 (Terminated)
Dec 13 04:19:28 compute-0 neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0[265995]: [WARNING]  (265999) : All workers exited. Exiting... (0)
Dec 13 04:19:28 compute-0 systemd[1]: libpod-1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6.scope: Deactivated successfully.
Dec 13 04:19:28 compute-0 podman[267632]: 2025-12-13 04:19:28.105053178 +0000 UTC m=+0.087553174 container died 1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:19:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e348 e348: 3 total, 3 up, 3 in
Dec 13 04:19:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e348: 3 total, 3 up, 3 in
Dec 13 04:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-d5b17eccb33bcc7fe971c2cdc45e735b7a7c9727fc3cdfb4c2cf16585bc1f5ad-merged.mount: Deactivated successfully.
Dec 13 04:19:28 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6-userdata-shm.mount: Deactivated successfully.
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.230 243708 DEBUG nova.virt.libvirt.vif [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:17:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestVolumeBootPattern-server-2070020833',display_name='tempest-TestVolumeBootPattern-server-2070020833',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testvolumebootpattern-server-2070020833',id=17,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAl9IzBTZodkzRaTJ4ZSTFsBYfrosc/FZH39fAgFtwi0VhMq6gLPcwFTQD8+HXX1aQPbDOgdUqt6++Z2y2Q94vrV9RCyAc6f2Zk6Zd+8+jYrOTdLglT3wVhoPmFMj6cApQ==',key_name='tempest-TestVolumeBootPattern-476071678',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:17:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='27927978f9684df1a72cecb32505e93b',ramdisk_id='',reservation_id='r-xj9k696q',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestVolumeBootPattern-236547311',owner_user_name='tempest-TestVolumeBootPattern-236547311-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:17:56Z,user_data=None,user_id='9b8c4a2342e4420d8140b403edbcba5a',uuid=5963e695-3cc7-4994-977e-b08fa7a682a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": 
[{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.230 243708 DEBUG nova.network.os_vif_util [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converting VIF {"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.231 243708 DEBUG nova.network.os_vif_util [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d3:ad:da,bridge_name='br-int',has_traffic_filtering=True,id=ad72d283-b1a5-4889-9e04-0297897b4cad,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad72d283-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.232 243708 DEBUG os_vif [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:ad:da,bridge_name='br-int',has_traffic_filtering=True,id=ad72d283-b1a5-4889-9e04-0297897b4cad,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad72d283-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.233 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.234 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad72d283-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.235 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.238 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.242 243708 INFO os_vif [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d3:ad:da,bridge_name='br-int',has_traffic_filtering=True,id=ad72d283-b1a5-4889-9e04-0297897b4cad,network=Network(fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad72d283-b1')
Dec 13 04:19:28 compute-0 podman[267632]: 2025-12-13 04:19:28.304426055 +0000 UTC m=+0.286926041 container cleanup 1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:19:28 compute-0 systemd[1]: libpod-conmon-1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6.scope: Deactivated successfully.
Dec 13 04:19:28 compute-0 podman[267691]: 2025-12-13 04:19:28.560445357 +0000 UTC m=+0.229610877 container remove 1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.566 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[21e14797-7576-42e1-a309-aac281d332e5]: (4, ('Sat Dec 13 04:19:28 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6)\n1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6\nSat Dec 13 04:19:28 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 (1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6)\n1dc676dfeca5498357d1be06ef18dc201566501918d6f54126dfeff619fb46e6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.569 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[21e865de-ea21-4f72-b8b7-63024d3f3b27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.570 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc553cd2-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.571 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:28 compute-0 kernel: tapfc553cd2-50: left promiscuous mode
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.588 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.593 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ff223dd4-fd3f-4ad5-a828-33d64d0c9b51]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.622 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf6f537-db88-4337-84a1-b6037ee87c00]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.624 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0c9d17aa-7f6e-44c4-b6cb-98aa3f63be07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.647 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ae26b63c-39a3-49eb-a5f6-e2056fe20ee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 421370, 'reachable_time': 23805, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 267705, 'error': None, 'target': 'ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:28 compute-0 systemd[1]: run-netns-ovnmeta\x2dfc553cd2\x2d5dd5\x2d4d87\x2d97af\x2d4b4eeb4ca0b0.mount: Deactivated successfully.
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.651 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:19:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:28.652 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[e17139e0-8b6f-4e1f-8794-08d1e50ce3a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.750 243708 INFO nova.virt.libvirt.driver [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Deleting instance files /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1_del
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.751 243708 INFO nova.virt.libvirt.driver [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Deletion of /var/lib/nova/instances/5963e695-3cc7-4994-977e-b08fa7a682a1_del complete
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.838 243708 INFO nova.compute.manager [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Took 1.01 seconds to destroy the instance on the hypervisor.
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.838 243708 DEBUG oslo.service.loopingcall [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.839 243708 DEBUG nova.compute.manager [-] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:19:28 compute-0 nova_compute[243704]: 2025-12-13 04:19:28.839 243708 DEBUG nova.network.neutron [-] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:19:29 compute-0 ceph-mon[75071]: pgmap v1410: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 5.0 KiB/s wr, 123 op/s
Dec 13 04:19:29 compute-0 ceph-mon[75071]: osdmap e348: 3 total, 3 up, 3 in
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.592 243708 DEBUG nova.network.neutron [-] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3776251709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3776251709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.610 243708 INFO nova.compute.manager [-] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Took 0.77 seconds to deallocate network for instance.
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.675 243708 DEBUG nova.compute.manager [req-e170a3f5-b44a-4382-808d-daaeab027c90 req-9c4048e4-ae38-4419-8378-ed80b433c928 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-vif-deleted-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1412: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 6.2 KiB/s wr, 116 op/s
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.774 243708 DEBUG nova.network.neutron [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updated VIF entry in instance network info cache for port ad72d283-b1a5-4889-9e04-0297897b4cad. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.774 243708 DEBUG nova.network.neutron [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Updating instance_info_cache with network_info: [{"id": "ad72d283-b1a5-4889-9e04-0297897b4cad", "address": "fa:16:3e:d3:ad:da", "network": {"id": "fc553cd2-5dd5-4d87-97af-4b4eeb4ca0b0", "bridge": "br-int", "label": "tempest-TestVolumeBootPattern-380648252-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "27927978f9684df1a72cecb32505e93b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad72d283-b1", "ovs_interfaceid": "ad72d283-b1a5-4889-9e04-0297897b4cad", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.811 243708 DEBUG oslo_concurrency.lockutils [req-a6c3f740-8780-4231-a377-76d452fda475 req-f74be34e-7419-415c-a364-fcc3b4628648 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-5963e695-3cc7-4994-977e-b08fa7a682a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.823 243708 DEBUG nova.compute.manager [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-vif-unplugged-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.823 243708 DEBUG oslo_concurrency.lockutils [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.824 243708 DEBUG oslo_concurrency.lockutils [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.824 243708 DEBUG oslo_concurrency.lockutils [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.824 243708 DEBUG nova.compute.manager [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] No waiting events found dispatching network-vif-unplugged-ad72d283-b1a5-4889-9e04-0297897b4cad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.824 243708 DEBUG nova.compute.manager [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-vif-unplugged-ad72d283-b1a5-4889-9e04-0297897b4cad for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.824 243708 DEBUG nova.compute.manager [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received event network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.825 243708 DEBUG oslo_concurrency.lockutils [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.825 243708 DEBUG oslo_concurrency.lockutils [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.825 243708 DEBUG oslo_concurrency.lockutils [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.825 243708 DEBUG nova.compute.manager [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] No waiting events found dispatching network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.825 243708 WARNING nova.compute.manager [req-79a4da63-b951-4c55-87cc-24a7ea2d3495 req-5b520b20-cf22-4e0b-8d5e-b8567100b0ef 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Received unexpected event network-vif-plugged-ad72d283-b1a5-4889-9e04-0297897b4cad for instance with vm_state active and task_state deleting.
Dec 13 04:19:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1611552207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1611552207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:29 compute-0 nova_compute[243704]: 2025-12-13 04:19:29.955 243708 INFO nova.compute.manager [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Took 0.34 seconds to detach 1 volumes for instance.
Dec 13 04:19:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3776251709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3776251709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1611552207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1611552207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:31 compute-0 nova_compute[243704]: 2025-12-13 04:19:31.513 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:31 compute-0 nova_compute[243704]: 2025-12-13 04:19:31.514 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:31 compute-0 nova_compute[243704]: 2025-12-13 04:19:31.576 243708 DEBUG oslo_concurrency.processutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1413: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 53 op/s
Dec 13 04:19:31 compute-0 ceph-mon[75071]: pgmap v1412: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 6.2 KiB/s wr, 116 op/s
Dec 13 04:19:31 compute-0 nova_compute[243704]: 2025-12-13 04:19:31.919 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599556.9178803, afb36bc5-8bfe-44dc-8be5-f7a657debc98 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:19:31 compute-0 nova_compute[243704]: 2025-12-13 04:19:31.920 243708 INFO nova.compute.manager [-] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] VM Stopped (Lifecycle Event)
Dec 13 04:19:31 compute-0 nova_compute[243704]: 2025-12-13 04:19:31.941 243708 DEBUG nova.compute.manager [None req-7ac08271-ba65-4b71-bc33-eb7710c3f634 - - - - - -] [instance: afb36bc5-8bfe-44dc-8be5-f7a657debc98] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:19:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e348 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e348 do_prune osdmap full prune enabled
Dec 13 04:19:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e349 e349: 3 total, 3 up, 3 in
Dec 13 04:19:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e349: 3 total, 3 up, 3 in
Dec 13 04:19:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3320222712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:32 compute-0 nova_compute[243704]: 2025-12-13 04:19:32.184 243708 DEBUG oslo_concurrency.processutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:32 compute-0 nova_compute[243704]: 2025-12-13 04:19:32.190 243708 DEBUG nova.compute.provider_tree [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:19:32 compute-0 nova_compute[243704]: 2025-12-13 04:19:32.203 243708 DEBUG nova.scheduler.client.report [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:19:32 compute-0 nova_compute[243704]: 2025-12-13 04:19:32.266 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:32 compute-0 nova_compute[243704]: 2025-12-13 04:19:32.336 243708 INFO nova.scheduler.client.report [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Deleted allocations for instance 5963e695-3cc7-4994-977e-b08fa7a682a1
Dec 13 04:19:32 compute-0 nova_compute[243704]: 2025-12-13 04:19:32.539 243708 DEBUG oslo_concurrency.lockutils [None req-28f8c40e-aa55-4429-8d2a-5887f6eb119e 9b8c4a2342e4420d8140b403edbcba5a 27927978f9684df1a72cecb32505e93b - - default default] Lock "5963e695-3cc7-4994-977e-b08fa7a682a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:32 compute-0 nova_compute[243704]: 2025-12-13 04:19:32.682 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:32 compute-0 podman[267729]: 2025-12-13 04:19:32.954331845 +0000 UTC m=+0.096008484 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Dec 13 04:19:33 compute-0 ceph-mon[75071]: pgmap v1413: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 KiB/s wr, 53 op/s
Dec 13 04:19:33 compute-0 ceph-mon[75071]: osdmap e349: 3 total, 3 up, 3 in
Dec 13 04:19:33 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3320222712' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:33 compute-0 nova_compute[243704]: 2025-12-13 04:19:33.236 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1415: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.4 KiB/s wr, 76 op/s
Dec 13 04:19:34 compute-0 ceph-mon[75071]: pgmap v1415: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 56 KiB/s rd, 3.4 KiB/s wr, 76 op/s
Dec 13 04:19:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:35.094 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:35.095 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:19:35.095 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:35 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3622370790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:35 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3622370790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 4.0 KiB/s wr, 87 op/s
Dec 13 04:19:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e349 do_prune osdmap full prune enabled
Dec 13 04:19:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3622370790' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3622370790' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e350 e350: 3 total, 3 up, 3 in
Dec 13 04:19:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e350: 3 total, 3 up, 3 in
Dec 13 04:19:36 compute-0 ceph-mon[75071]: pgmap v1416: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 4.0 KiB/s wr, 87 op/s
Dec 13 04:19:36 compute-0 ceph-mon[75071]: osdmap e350: 3 total, 3 up, 3 in
Dec 13 04:19:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e350 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 42 op/s
Dec 13 04:19:37 compute-0 nova_compute[243704]: 2025-12-13 04:19:37.685 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e350 do_prune osdmap full prune enabled
Dec 13 04:19:38 compute-0 nova_compute[243704]: 2025-12-13 04:19:38.238 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e351 e351: 3 total, 3 up, 3 in
Dec 13 04:19:39 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 04:19:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e351: 3 total, 3 up, 3 in
Dec 13 04:19:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 88 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.8 KiB/s wr, 94 op/s
Dec 13 04:19:39 compute-0 ceph-mon[75071]: pgmap v1418: 305 pgs: 305 active+clean; 169 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 1.6 KiB/s wr, 42 op/s
Dec 13 04:19:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:19:40
Dec 13 04:19:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:19:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:19:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['volumes', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'images']
Dec 13 04:19:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:19:40 compute-0 ceph-mon[75071]: osdmap e351: 3 total, 3 up, 3 in
Dec 13 04:19:40 compute-0 ceph-mon[75071]: pgmap v1420: 305 pgs: 305 active+clean; 88 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 69 KiB/s rd, 3.8 KiB/s wr, 94 op/s
Dec 13 04:19:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1421: 305 pgs: 305 active+clean; 88 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.0 KiB/s wr, 68 op/s
Dec 13 04:19:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e351 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:42 compute-0 nova_compute[243704]: 2025-12-13 04:19:42.235 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:19:42 compute-0 nova_compute[243704]: 2025-12-13 04:19:42.401 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:42 compute-0 nova_compute[243704]: 2025-12-13 04:19:42.686 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3969243987' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3969243987' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:19:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:19:43 compute-0 ceph-mon[75071]: pgmap v1421: 305 pgs: 305 active+clean; 88 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 3.0 KiB/s wr, 68 op/s
Dec 13 04:19:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3969243987' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3969243987' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:43 compute-0 nova_compute[243704]: 2025-12-13 04:19:43.066 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599568.064643, 5963e695-3cc7-4994-977e-b08fa7a682a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:19:43 compute-0 nova_compute[243704]: 2025-12-13 04:19:43.066 243708 INFO nova.compute.manager [-] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] VM Stopped (Lifecycle Event)
Dec 13 04:19:43 compute-0 nova_compute[243704]: 2025-12-13 04:19:43.080 243708 DEBUG nova.compute.manager [None req-49243c64-790c-4de8-96b4-5ff32028a472 - - - - - -] [instance: 5963e695-3cc7-4994-977e-b08fa7a682a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:19:43 compute-0 nova_compute[243704]: 2025-12-13 04:19:43.241 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.4 KiB/s wr, 65 op/s
Dec 13 04:19:43 compute-0 podman[267758]: 2025-12-13 04:19:43.906888958 +0000 UTC m=+0.057073128 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:19:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e351 do_prune osdmap full prune enabled
Dec 13 04:19:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e352 e352: 3 total, 3 up, 3 in
Dec 13 04:19:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e352: 3 total, 3 up, 3 in
Dec 13 04:19:45 compute-0 ceph-mon[75071]: pgmap v1422: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 48 KiB/s rd, 2.4 KiB/s wr, 65 op/s
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.330 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "69956991-d6ec-4e9d-b09e-977f6e49d135" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.331 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "69956991-d6ec-4e9d-b09e-977f6e49d135" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.352 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.438 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.438 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.447 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.447 243708 INFO nova.compute.claims [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:19:45 compute-0 nova_compute[243704]: 2025-12-13 04:19:45.539 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3158326134' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3158326134' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.1 KiB/s wr, 71 op/s
Dec 13 04:19:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e352 do_prune osdmap full prune enabled
Dec 13 04:19:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e353 e353: 3 total, 3 up, 3 in
Dec 13 04:19:46 compute-0 ceph-mon[75071]: osdmap e352: 3 total, 3 up, 3 in
Dec 13 04:19:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3158326134' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3158326134' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:46 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e353: 3 total, 3 up, 3 in
Dec 13 04:19:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1661001222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.195 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.203 243708 DEBUG nova.compute.provider_tree [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.227 243708 DEBUG nova.scheduler.client.report [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.256 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.817s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.257 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.306 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.307 243708 DEBUG nova.network.neutron [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.325 243708 INFO nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.340 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.432 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.434 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.435 243708 INFO nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Creating image(s)
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.464 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.494 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.531 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.537 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.605 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.606 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.607 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.608 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.637 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.643 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 69956991-d6ec-4e9d-b09e-977f6e49d135_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.869 243708 DEBUG nova.network.neutron [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.870 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:19:46 compute-0 nova_compute[243704]: 2025-12-13 04:19:46.995 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 69956991-d6ec-4e9d-b09e-977f6e49d135_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e353 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e353 do_prune osdmap full prune enabled
Dec 13 04:19:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e354 e354: 3 total, 3 up, 3 in
Dec 13 04:19:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e354: 3 total, 3 up, 3 in
Dec 13 04:19:47 compute-0 ceph-mon[75071]: pgmap v1424: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 3.1 KiB/s wr, 71 op/s
Dec 13 04:19:47 compute-0 ceph-mon[75071]: osdmap e353: 3 total, 3 up, 3 in
Dec 13 04:19:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1661001222' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:47 compute-0 ceph-mon[75071]: osdmap e354: 3 total, 3 up, 3 in
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.076 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] resizing rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.146 243708 DEBUG nova.objects.instance [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lazy-loading 'migration_context' on Instance uuid 69956991-d6ec-4e9d-b09e-977f6e49d135 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.163 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.163 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Ensure instance console log exists: /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.164 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.164 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.165 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.166 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.172 243708 WARNING nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.175 243708 DEBUG nova.virt.libvirt.host [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.176 243708 DEBUG nova.virt.libvirt.host [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.180 243708 DEBUG nova.virt.libvirt.host [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.181 243708 DEBUG nova.virt.libvirt.host [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.182 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.183 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.183 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.184 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.184 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.184 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.185 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.185 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.185 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.186 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.186 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.186 243708 DEBUG nova.virt.hardware [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.190 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1553598009' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1553598009' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1427: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 31 op/s
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.690 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:19:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2837619057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.743 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.768 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:19:47 compute-0 nova_compute[243704]: 2025-12-13 04:19:47.773 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.245 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:19:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1756033810' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:19:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1553598009' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1553598009' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:48 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2837619057' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.391 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.618s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.393 243708 DEBUG nova.objects.instance [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lazy-loading 'pci_devices' on Instance uuid 69956991-d6ec-4e9d-b09e-977f6e49d135 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.406 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <uuid>69956991-d6ec-4e9d-b09e-977f6e49d135</uuid>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <name>instance-00000013</name>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesNegativeTest-instance-539585160</nova:name>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:19:47</nova:creationTime>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <nova:user uuid="01aae777e06a403fb24ec8dab8970567">tempest-VolumesNegativeTest-1377464268-project-member</nova:user>
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <nova:project uuid="984f468d8f354b088cdd4fbc8e736a65">tempest-VolumesNegativeTest-1377464268</nova:project>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <nova:ports/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <system>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <entry name="serial">69956991-d6ec-4e9d-b09e-977f6e49d135</entry>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <entry name="uuid">69956991-d6ec-4e9d-b09e-977f6e49d135</entry>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </system>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <os>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   </os>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <features>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   </features>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/69956991-d6ec-4e9d-b09e-977f6e49d135_disk">
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       </source>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/69956991-d6ec-4e9d-b09e-977f6e49d135_disk.config">
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       </source>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:19:48 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/console.log" append="off"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <video>
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </video>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:19:48 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:19:48 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:19:48 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:19:48 compute-0 nova_compute[243704]: </domain>
Dec 13 04:19:48 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.471 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.472 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.472 243708 INFO nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Using config drive
Dec 13 04:19:48 compute-0 nova_compute[243704]: 2025-12-13 04:19:48.501 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:19:48 compute-0 podman[268031]: 2025-12-13 04:19:48.520491504 +0000 UTC m=+0.072982510 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Dec 13 04:19:49 compute-0 nova_compute[243704]: 2025-12-13 04:19:49.541 243708 INFO nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Creating config drive at /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/disk.config
Dec 13 04:19:49 compute-0 nova_compute[243704]: 2025-12-13 04:19:49.550 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1vu5vrdj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1428: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 3.5 MiB/s wr, 121 op/s
Dec 13 04:19:49 compute-0 nova_compute[243704]: 2025-12-13 04:19:49.710 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1vu5vrdj" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:49 compute-0 nova_compute[243704]: 2025-12-13 04:19:49.795 243708 DEBUG nova.storage.rbd_utils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] rbd image 69956991-d6ec-4e9d-b09e-977f6e49d135_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:19:49 compute-0 nova_compute[243704]: 2025-12-13 04:19:49.801 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/disk.config 69956991-d6ec-4e9d-b09e-977f6e49d135_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:49 compute-0 ceph-mon[75071]: pgmap v1427: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.5 KiB/s wr, 31 op/s
Dec 13 04:19:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1756033810' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:19:51 compute-0 ceph-mon[75071]: pgmap v1428: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 3.5 MiB/s wr, 121 op/s
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.250 243708 DEBUG oslo_concurrency.processutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/disk.config 69956991-d6ec-4e9d-b09e-977f6e49d135_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.251 243708 INFO nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Deleting local config drive /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135/disk.config because it was imported into RBD.
Dec 13 04:19:51 compute-0 systemd-machined[206767]: New machine qemu-19-instance-00000013.
Dec 13 04:19:51 compute-0 systemd[1]: Started Virtual Machine qemu-19-instance-00000013.
Dec 13 04:19:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 3.2 MiB/s wr, 103 op/s
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.938 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.940 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.941 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599591.936871, 69956991-d6ec-4e9d-b09e-977f6e49d135 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.941 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] VM Resumed (Lifecycle Event)
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.950 243708 INFO nova.virt.libvirt.driver [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Instance spawned successfully.
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.950 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.967 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.974 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.977 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.977 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.978 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.978 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.979 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:19:51 compute-0 nova_compute[243704]: 2025-12-13 04:19:51.979 243708 DEBUG nova.virt.libvirt.driver [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.000 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.000 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599591.9388795, 69956991-d6ec-4e9d-b09e-977f6e49d135 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.001 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] VM Started (Lifecycle Event)
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.016 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.021 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:19:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e354 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e354 do_prune osdmap full prune enabled
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.039 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:19:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e355 e355: 3 total, 3 up, 3 in
Dec 13 04:19:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e355: 3 total, 3 up, 3 in
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.092 243708 INFO nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Took 5.66 seconds to spawn the instance on the hypervisor.
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.093 243708 DEBUG nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.171 243708 INFO nova.compute.manager [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Took 6.76 seconds to build instance.
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.190 243708 DEBUG oslo_concurrency.lockutils [None req-aa9b7f22-5d49-4472-be18-00a946afb298 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "69956991-d6ec-4e9d-b09e-977f6e49d135" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003479925102109189 of space, bias 1.0, pg target 0.10439775306327567 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.00035847827016884223 of space, bias 1.0, pg target 0.10754348105065267 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.0092397010317846e-06 of space, bias 1.0, pg target 0.0006027719103095354 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665470249727228 of space, bias 1.0, pg target 0.19996410749181684 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.1827157501163075e-06 of space, bias 4.0, pg target 0.0014192589001395688 quantized to 16 (current 16)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:19:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:19:52 compute-0 nova_compute[243704]: 2025-12-13 04:19:52.691 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.248 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:53 compute-0 ceph-mon[75071]: pgmap v1429: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 3.2 MiB/s wr, 103 op/s
Dec 13 04:19:53 compute-0 ceph-mon[75071]: osdmap e355: 3 total, 3 up, 3 in
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.492 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "69956991-d6ec-4e9d-b09e-977f6e49d135" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.492 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "69956991-d6ec-4e9d-b09e-977f6e49d135" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.493 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "69956991-d6ec-4e9d-b09e-977f6e49d135-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.493 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "69956991-d6ec-4e9d-b09e-977f6e49d135-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.493 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "69956991-d6ec-4e9d-b09e-977f6e49d135-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.495 243708 INFO nova.compute.manager [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Terminating instance
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.495 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "refresh_cache-69956991-d6ec-4e9d-b09e-977f6e49d135" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.496 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquired lock "refresh_cache-69956991-d6ec-4e9d-b09e-977f6e49d135" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.496 243708 DEBUG nova.network.neutron [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.639 243708 DEBUG nova.network.neutron [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:19:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 164 op/s
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.825 243708 DEBUG nova.network.neutron [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.838 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Releasing lock "refresh_cache-69956991-d6ec-4e9d-b09e-977f6e49d135" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:19:53 compute-0 nova_compute[243704]: 2025-12-13 04:19:53.839 243708 DEBUG nova.compute.manager [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:19:53 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Deactivated successfully.
Dec 13 04:19:53 compute-0 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000013.scope: Consumed 2.483s CPU time.
Dec 13 04:19:53 compute-0 systemd-machined[206767]: Machine qemu-19-instance-00000013 terminated.
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.060 243708 INFO nova.virt.libvirt.driver [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Instance destroyed successfully.
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.061 243708 DEBUG nova.objects.instance [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lazy-loading 'resources' on Instance uuid 69956991-d6ec-4e9d-b09e-977f6e49d135 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:19:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e355 do_prune osdmap full prune enabled
Dec 13 04:19:54 compute-0 ceph-mon[75071]: pgmap v1431: 305 pgs: 305 active+clean; 134 MiB data, 369 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.8 MiB/s wr, 164 op/s
Dec 13 04:19:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e356 e356: 3 total, 3 up, 3 in
Dec 13 04:19:54 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e356: 3 total, 3 up, 3 in
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.384 243708 INFO nova.virt.libvirt.driver [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Deleting instance files /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135_del
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.385 243708 INFO nova.virt.libvirt.driver [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Deletion of /var/lib/nova/instances/69956991-d6ec-4e9d-b09e-977f6e49d135_del complete
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.434 243708 INFO nova.compute.manager [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Took 0.59 seconds to destroy the instance on the hypervisor.
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.434 243708 DEBUG oslo.service.loopingcall [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.434 243708 DEBUG nova.compute.manager [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.435 243708 DEBUG nova.network.neutron [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.579 243708 DEBUG nova.network.neutron [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.588 243708 DEBUG nova.network.neutron [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.601 243708 INFO nova.compute.manager [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Took 0.17 seconds to deallocate network for instance.
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.639 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.640 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:19:54 compute-0 nova_compute[243704]: 2025-12-13 04:19:54.688 243708 DEBUG oslo_concurrency.processutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:19:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:19:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3372702169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:55 compute-0 nova_compute[243704]: 2025-12-13 04:19:55.247 243708 DEBUG oslo_concurrency.processutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:19:55 compute-0 nova_compute[243704]: 2025-12-13 04:19:55.254 243708 DEBUG nova.compute.provider_tree [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:19:55 compute-0 ceph-mon[75071]: osdmap e356: 3 total, 3 up, 3 in
Dec 13 04:19:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3372702169' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:19:55 compute-0 nova_compute[243704]: 2025-12-13 04:19:55.302 243708 DEBUG nova.scheduler.client.report [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:19:55 compute-0 nova_compute[243704]: 2025-12-13 04:19:55.320 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:55 compute-0 nova_compute[243704]: 2025-12-13 04:19:55.341 243708 INFO nova.scheduler.client.report [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Deleted allocations for instance 69956991-d6ec-4e9d-b09e-977f6e49d135
Dec 13 04:19:55 compute-0 nova_compute[243704]: 2025-12-13 04:19:55.389 243708 DEBUG oslo_concurrency.lockutils [None req-5d56c55c-8582-4d2f-8840-689514a1cca2 01aae777e06a403fb24ec8dab8970567 984f468d8f354b088cdd4fbc8e736a65 - - default default] Lock "69956991-d6ec-4e9d-b09e-977f6e49d135" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:19:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1433: 305 pgs: 305 active+clean; 125 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Dec 13 04:19:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e356 do_prune osdmap full prune enabled
Dec 13 04:19:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e357 e357: 3 total, 3 up, 3 in
Dec 13 04:19:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e357: 3 total, 3 up, 3 in
Dec 13 04:19:56 compute-0 ceph-mon[75071]: pgmap v1433: 305 pgs: 305 active+clean; 125 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 221 op/s
Dec 13 04:19:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2109517563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2109517563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e357 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:19:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e357 do_prune osdmap full prune enabled
Dec 13 04:19:57 compute-0 ceph-mon[75071]: osdmap e357: 3 total, 3 up, 3 in
Dec 13 04:19:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2109517563' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2109517563' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 125 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 181 op/s
Dec 13 04:19:57 compute-0 nova_compute[243704]: 2025-12-13 04:19:57.694 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:58 compute-0 nova_compute[243704]: 2025-12-13 04:19:58.251 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:19:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e358 e358: 3 total, 3 up, 3 in
Dec 13 04:19:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e358: 3 total, 3 up, 3 in
Dec 13 04:19:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3558410754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3558410754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:58 compute-0 ceph-mon[75071]: pgmap v1435: 305 pgs: 305 active+clean; 125 MiB data, 368 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 26 KiB/s wr, 181 op/s
Dec 13 04:19:58 compute-0 ceph-mon[75071]: osdmap e358: 3 total, 3 up, 3 in
Dec 13 04:19:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3558410754' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3558410754' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:19:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1931275454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:19:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1931275454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1931275454' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:19:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1931275454' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:19:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 265 op/s
Dec 13 04:20:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3104594122' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3104594122' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e358 do_prune osdmap full prune enabled
Dec 13 04:20:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e359 e359: 3 total, 3 up, 3 in
Dec 13 04:20:00 compute-0 ceph-mon[75071]: pgmap v1437: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 14 KiB/s wr, 265 op/s
Dec 13 04:20:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3104594122' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3104594122' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e359: 3 total, 3 up, 3 in
Dec 13 04:20:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 12 KiB/s wr, 176 op/s
Dec 13 04:20:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048672389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4048672389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e359 do_prune osdmap full prune enabled
Dec 13 04:20:02 compute-0 sudo[268210]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:20:02 compute-0 sudo[268210]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:02 compute-0 sudo[268210]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:02 compute-0 nova_compute[243704]: 2025-12-13 04:20:02.696 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:02 compute-0 sudo[268235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:20:02 compute-0 sudo[268235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:02 compute-0 ceph-mon[75071]: osdmap e359: 3 total, 3 up, 3 in
Dec 13 04:20:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4048672389' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4048672389' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:03 compute-0 nova_compute[243704]: 2025-12-13 04:20:03.254 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:03 compute-0 sudo[268235]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:20:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:20:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:20:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e360 e360: 3 total, 3 up, 3 in
Dec 13 04:20:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e360: 3 total, 3 up, 3 in
Dec 13 04:20:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:20:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:20:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:20:03 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:20:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 15 KiB/s wr, 276 op/s
Dec 13 04:20:03 compute-0 sudo[268291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:20:03 compute-0 sudo[268291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:03 compute-0 sudo[268291]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:03 compute-0 sudo[268321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:20:03 compute-0 sudo[268321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:03 compute-0 podman[268315]: 2025-12-13 04:20:03.90179779 +0000 UTC m=+0.161800239 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Dec 13 04:20:03 compute-0 ceph-mon[75071]: pgmap v1439: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 124 KiB/s rd, 12 KiB/s wr, 176 op/s
Dec 13 04:20:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: osdmap e360: 3 total, 3 up, 3 in
Dec 13 04:20:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:20:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:20:03 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:20:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/281682291' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/281682291' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:04 compute-0 podman[268380]: 2025-12-13 04:20:04.091318228 +0000 UTC m=+0.027649800 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e360 do_prune osdmap full prune enabled
Dec 13 04:20:05 compute-0 podman[268380]: 2025-12-13 04:20:05.219305983 +0000 UTC m=+1.155637465 container create cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mendeleev, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:20:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e361 e361: 3 total, 3 up, 3 in
Dec 13 04:20:05 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e361: 3 total, 3 up, 3 in
Dec 13 04:20:05 compute-0 ceph-mon[75071]: pgmap v1441: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 198 KiB/s rd, 15 KiB/s wr, 276 op/s
Dec 13 04:20:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/281682291' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/281682291' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:05 compute-0 systemd[1]: Started libpod-conmon-cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24.scope.
Dec 13 04:20:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:20:05 compute-0 podman[268380]: 2025-12-13 04:20:05.640621317 +0000 UTC m=+1.576952839 container init cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:20:05 compute-0 podman[268380]: 2025-12-13 04:20:05.651583694 +0000 UTC m=+1.587915206 container start cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mendeleev, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:20:05 compute-0 intelligent_mendeleev[268396]: 167 167
Dec 13 04:20:05 compute-0 systemd[1]: libpod-cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24.scope: Deactivated successfully.
Dec 13 04:20:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 8.2 KiB/s wr, 163 op/s
Dec 13 04:20:05 compute-0 podman[268380]: 2025-12-13 04:20:05.704843438 +0000 UTC m=+1.641175000 container attach cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:20:05 compute-0 podman[268380]: 2025-12-13 04:20:05.706499423 +0000 UTC m=+1.642830915 container died cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mendeleev, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:20:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e5ccb1dec84d23931ca7e03475e8db210d3c22938d20c6a2492b22c88502a0-merged.mount: Deactivated successfully.
Dec 13 04:20:05 compute-0 podman[268380]: 2025-12-13 04:20:05.836249951 +0000 UTC m=+1.772581423 container remove cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mendeleev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:20:05 compute-0 systemd[1]: libpod-conmon-cfdf030003e4a91588aaf7241b9879999433d4914ce45d9cf0f0141065686f24.scope: Deactivated successfully.
Dec 13 04:20:06 compute-0 podman[268421]: 2025-12-13 04:20:06.031516316 +0000 UTC m=+0.026201701 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:06 compute-0 podman[268421]: 2025-12-13 04:20:06.214921238 +0000 UTC m=+0.209606623 container create f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shaw, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:06 compute-0 systemd[1]: Started libpod-conmon-f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a.scope.
Dec 13 04:20:06 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5d554f4ccfe2f3159a14a2301c69f9b0633717441d78b190cad9539a471c12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5d554f4ccfe2f3159a14a2301c69f9b0633717441d78b190cad9539a471c12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5d554f4ccfe2f3159a14a2301c69f9b0633717441d78b190cad9539a471c12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5d554f4ccfe2f3159a14a2301c69f9b0633717441d78b190cad9539a471c12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e5d554f4ccfe2f3159a14a2301c69f9b0633717441d78b190cad9539a471c12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:06 compute-0 podman[268421]: 2025-12-13 04:20:06.325514488 +0000 UTC m=+0.320199873 container init f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shaw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:20:06 compute-0 podman[268421]: 2025-12-13 04:20:06.335528118 +0000 UTC m=+0.330213483 container start f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:06 compute-0 podman[268421]: 2025-12-13 04:20:06.338417628 +0000 UTC m=+0.333102993 container attach f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shaw, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:20:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e361 do_prune osdmap full prune enabled
Dec 13 04:20:06 compute-0 ceph-mon[75071]: osdmap e361: 3 total, 3 up, 3 in
Dec 13 04:20:06 compute-0 ceph-mon[75071]: pgmap v1443: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 118 KiB/s rd, 8.2 KiB/s wr, 163 op/s
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.539083) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599606539191, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2761, "num_deletes": 524, "total_data_size": 3633267, "memory_usage": 3690608, "flush_reason": "Manual Compaction"}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Dec 13 04:20:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e362 e362: 3 total, 3 up, 3 in
Dec 13 04:20:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e362: 3 total, 3 up, 3 in
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599606569293, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3534172, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26487, "largest_seqno": 29247, "table_properties": {"data_size": 3521673, "index_size": 7841, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 30224, "raw_average_key_size": 21, "raw_value_size": 3494500, "raw_average_value_size": 2454, "num_data_blocks": 337, "num_entries": 1424, "num_filter_entries": 1424, "num_deletions": 524, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599448, "oldest_key_time": 1765599448, "file_creation_time": 1765599606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 30315 microseconds, and 11131 cpu microseconds.
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.569390) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3534172 bytes OK
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.569451) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.572506) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.572528) EVENT_LOG_v1 {"time_micros": 1765599606572521, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.572561) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3620244, prev total WAL file size 3620285, number of live WAL files 2.
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.574863) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3451KB)], [59(10MB)]
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599606575102, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 14697695, "oldest_snapshot_seqno": -1}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5907 keys, 9692520 bytes, temperature: kUnknown
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599606692505, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 9692520, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9648598, "index_size": 28068, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14789, "raw_key_size": 147405, "raw_average_key_size": 24, "raw_value_size": 9537897, "raw_average_value_size": 1614, "num_data_blocks": 1136, "num_entries": 5907, "num_filter_entries": 5907, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.692849) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 9692520 bytes
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.696006) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 125.0 rd, 82.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.6 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(6.9) write-amplify(2.7) OK, records in: 6950, records dropped: 1043 output_compression: NoCompression
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.696117) EVENT_LOG_v1 {"time_micros": 1765599606696078, "job": 32, "event": "compaction_finished", "compaction_time_micros": 117537, "compaction_time_cpu_micros": 53000, "output_level": 6, "num_output_files": 1, "total_output_size": 9692520, "num_input_records": 6950, "num_output_records": 5907, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599606697293, "job": 32, "event": "table_file_deletion", "file_number": 61}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599606699900, "job": 32, "event": "table_file_deletion", "file_number": 59}
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.574351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.699972) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.699979) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.699981) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.699984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:20:06 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:20:06.699986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:20:06 compute-0 goofy_shaw[268437]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:20:06 compute-0 goofy_shaw[268437]: --> All data devices are unavailable
Dec 13 04:20:06 compute-0 systemd[1]: libpod-f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a.scope: Deactivated successfully.
Dec 13 04:20:06 compute-0 podman[268457]: 2025-12-13 04:20:06.884983478 +0000 UTC m=+0.027827146 container died f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-4e5d554f4ccfe2f3159a14a2301c69f9b0633717441d78b190cad9539a471c12-merged.mount: Deactivated successfully.
Dec 13 04:20:06 compute-0 podman[268457]: 2025-12-13 04:20:06.95294611 +0000 UTC m=+0.095789748 container remove f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:20:06 compute-0 systemd[1]: libpod-conmon-f399f5154b42b5817732cfde395df6ed1f9020df693048db44241179aa27238a.scope: Deactivated successfully.
Dec 13 04:20:07 compute-0 sudo[268321]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:07 compute-0 sudo[268472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:20:07 compute-0 sudo[268472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:07 compute-0 sudo[268472]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:07 compute-0 sudo[268497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:20:07 compute-0 sudo[268497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e362 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e362 do_prune osdmap full prune enabled
Dec 13 04:20:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e363 e363: 3 total, 3 up, 3 in
Dec 13 04:20:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e363: 3 total, 3 up, 3 in
Dec 13 04:20:07 compute-0 podman[268534]: 2025-12-13 04:20:07.449851723 +0000 UTC m=+0.044401745 container create d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Dec 13 04:20:07 compute-0 systemd[1]: Started libpod-conmon-d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705.scope.
Dec 13 04:20:07 compute-0 podman[268534]: 2025-12-13 04:20:07.432445611 +0000 UTC m=+0.026995643 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:20:07 compute-0 ceph-mon[75071]: osdmap e362: 3 total, 3 up, 3 in
Dec 13 04:20:07 compute-0 ceph-mon[75071]: osdmap e363: 3 total, 3 up, 3 in
Dec 13 04:20:07 compute-0 podman[268534]: 2025-12-13 04:20:07.553490823 +0000 UTC m=+0.148040885 container init d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:20:07 compute-0 podman[268534]: 2025-12-13 04:20:07.563013192 +0000 UTC m=+0.157563214 container start d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:20:07 compute-0 podman[268534]: 2025-12-13 04:20:07.567441612 +0000 UTC m=+0.161991714 container attach d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:20:07 compute-0 vibrant_snyder[268550]: 167 167
Dec 13 04:20:07 compute-0 systemd[1]: libpod-d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705.scope: Deactivated successfully.
Dec 13 04:20:07 compute-0 podman[268534]: 2025-12-13 04:20:07.570748551 +0000 UTC m=+0.165298573 container died d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:20:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-d3b15079c9000d49d1a5a3a2faef078d9ff7492d9e62ddfd83908f8ab3f59f58-merged.mount: Deactivated successfully.
Dec 13 04:20:07 compute-0 podman[268534]: 2025-12-13 04:20:07.610485889 +0000 UTC m=+0.205035911 container remove d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_snyder, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:20:07 compute-0 systemd[1]: libpod-conmon-d26ac49e363b66fd37dd95a27b77dc58efcdd8bdde0ca632a5d8e4f681906705.scope: Deactivated successfully.
Dec 13 04:20:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 12 KiB/s wr, 238 op/s
Dec 13 04:20:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949577823' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3949577823' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:07 compute-0 nova_compute[243704]: 2025-12-13 04:20:07.699 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:07 compute-0 podman[268573]: 2025-12-13 04:20:07.796542274 +0000 UTC m=+0.047273703 container create 0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:20:07 compute-0 systemd[1]: Started libpod-conmon-0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2.scope.
Dec 13 04:20:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699866006' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1699866006' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:07 compute-0 podman[268573]: 2025-12-13 04:20:07.778948827 +0000 UTC m=+0.029680276 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:07 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf60da5882f091288d83d0d017f11293f57bfb3fd3735d419aae6592deaa711e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf60da5882f091288d83d0d017f11293f57bfb3fd3735d419aae6592deaa711e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf60da5882f091288d83d0d017f11293f57bfb3fd3735d419aae6592deaa711e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf60da5882f091288d83d0d017f11293f57bfb3fd3735d419aae6592deaa711e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:07 compute-0 podman[268573]: 2025-12-13 04:20:07.894532531 +0000 UTC m=+0.145264000 container init 0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:20:07 compute-0 podman[268573]: 2025-12-13 04:20:07.903122373 +0000 UTC m=+0.153853812 container start 0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:20:07 compute-0 podman[268573]: 2025-12-13 04:20:07.906909486 +0000 UTC m=+0.157640925 container attach 0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:20:08 compute-0 sharp_meitner[268589]: {
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:     "0": [
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:         {
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "devices": [
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "/dev/loop3"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             ],
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_name": "ceph_lv0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_size": "21470642176",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "name": "ceph_lv0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "tags": {
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cluster_name": "ceph",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.crush_device_class": "",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.encrypted": "0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.objectstore": "bluestore",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osd_id": "0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.type": "block",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.vdo": "0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.with_tpm": "0"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             },
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "type": "block",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "vg_name": "ceph_vg0"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:         }
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:     ],
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:     "1": [
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:         {
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "devices": [
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "/dev/loop4"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             ],
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_name": "ceph_lv1",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_size": "21470642176",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "name": "ceph_lv1",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "tags": {
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cluster_name": "ceph",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.crush_device_class": "",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.encrypted": "0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.objectstore": "bluestore",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osd_id": "1",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.type": "block",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.vdo": "0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.with_tpm": "0"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             },
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "type": "block",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "vg_name": "ceph_vg1"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:         }
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:     ],
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:     "2": [
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:         {
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "devices": [
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "/dev/loop5"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             ],
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_name": "ceph_lv2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_size": "21470642176",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "name": "ceph_lv2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "tags": {
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.cluster_name": "ceph",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.crush_device_class": "",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.encrypted": "0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.objectstore": "bluestore",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osd_id": "2",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.type": "block",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.vdo": "0",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:                 "ceph.with_tpm": "0"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             },
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "type": "block",
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:             "vg_name": "ceph_vg2"
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:         }
Dec 13 04:20:08 compute-0 sharp_meitner[268589]:     ]
Dec 13 04:20:08 compute-0 sharp_meitner[268589]: }
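The JSON block above is the output of `ceph-volume lvm list --format json` run inside a cephadm-launched container: a map of OSD id to the logical volumes (and their LVM tags) backing that OSD. A minimal sketch of extracting the OSD-id-to-device mapping from such output, assuming the JSON has already been captured to a string (the sample below is abbreviated to the fields actually used):

```python
import json

# Abbreviated sample shaped like the `ceph-volume lvm list --format json`
# output above: top-level keys are OSD ids, each mapping to a list of
# logical-volume records.
raw = """
{
    "0": [{"devices": ["/dev/loop3"],
           "lv_path": "/dev/ceph_vg0/ceph_lv0",
           "tags": {"ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b"}}],
    "1": [{"devices": ["/dev/loop4"],
           "lv_path": "/dev/ceph_vg1/ceph_lv1",
           "tags": {"ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122"}}]
}
"""

def osd_devices(listing: str) -> dict:
    """Map each OSD id to the sorted list of block devices backing it."""
    data = json.loads(listing)
    return {osd_id: sorted(dev for lv in lvs for dev in lv["devices"])
            for osd_id, lvs in data.items()}

print(osd_devices(raw))  # {'0': ['/dev/loop3'], '1': ['/dev/loop4']}
```

The helper name `osd_devices` is hypothetical; the field names (`devices`, `lv_path`, `tags`) match those visible in the logged output.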
Dec 13 04:20:08 compute-0 systemd[1]: libpod-0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2.scope: Deactivated successfully.
Dec 13 04:20:08 compute-0 podman[268573]: 2025-12-13 04:20:08.195195253 +0000 UTC m=+0.445926682 container died 0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:20:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-bf60da5882f091288d83d0d017f11293f57bfb3fd3735d419aae6592deaa711e-merged.mount: Deactivated successfully.
Dec 13 04:20:08 compute-0 podman[268573]: 2025-12-13 04:20:08.233499812 +0000 UTC m=+0.484231241 container remove 0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_meitner, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:20:08 compute-0 systemd[1]: libpod-conmon-0f5fdb21a47f9f4858b897726c2715814ec170adb01c8f9dd7db47cd22bdddd2.scope: Deactivated successfully.
Dec 13 04:20:08 compute-0 nova_compute[243704]: 2025-12-13 04:20:08.256 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:08 compute-0 sudo[268497]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:08 compute-0 sudo[268608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:20:08 compute-0 sudo[268608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:08 compute-0 sudo[268608]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:08 compute-0 sudo[268633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:20:08 compute-0 sudo[268633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:08 compute-0 ceph-mon[75071]: pgmap v1446: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 172 KiB/s rd, 12 KiB/s wr, 238 op/s
Dec 13 04:20:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3949577823' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3949577823' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1699866006' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1699866006' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:08 compute-0 podman[268670]: 2025-12-13 04:20:08.650887459 +0000 UTC m=+0.019829039 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:08 compute-0 podman[268670]: 2025-12-13 04:20:08.830014176 +0000 UTC m=+0.198955716 container create 7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:20:08 compute-0 systemd[1]: Started libpod-conmon-7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2.scope.
Dec 13 04:20:08 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:20:09 compute-0 podman[268670]: 2025-12-13 04:20:09.054470461 +0000 UTC m=+0.423412061 container init 7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:20:09 compute-0 nova_compute[243704]: 2025-12-13 04:20:09.058 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599594.0574317, 69956991-d6ec-4e9d-b09e-977f6e49d135 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:20:09 compute-0 nova_compute[243704]: 2025-12-13 04:20:09.059 243708 INFO nova.compute.manager [-] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] VM Stopped (Lifecycle Event)
Dec 13 04:20:09 compute-0 podman[268670]: 2025-12-13 04:20:09.066168518 +0000 UTC m=+0.435110068 container start 7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:20:09 compute-0 zealous_cerf[268687]: 167 167
Dec 13 04:20:09 compute-0 systemd[1]: libpod-7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2.scope: Deactivated successfully.
Dec 13 04:20:09 compute-0 nova_compute[243704]: 2025-12-13 04:20:09.078 243708 DEBUG nova.compute.manager [None req-f7efa439-d112-45e7-b1d5-3bd7f15001ab - - - - - -] [instance: 69956991-d6ec-4e9d-b09e-977f6e49d135] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:20:09 compute-0 podman[268670]: 2025-12-13 04:20:09.227508764 +0000 UTC m=+0.596450414 container attach 7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:20:09 compute-0 podman[268670]: 2025-12-13 04:20:09.228630924 +0000 UTC m=+0.597572514 container died 7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:20:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-c04c24655ccd148abed7e07464486c1cf0a08a0e83e46b1c14faa1cb732ae96d-merged.mount: Deactivated successfully.
Dec 13 04:20:09 compute-0 podman[268670]: 2025-12-13 04:20:09.319762905 +0000 UTC m=+0.688704455 container remove 7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:20:09 compute-0 systemd[1]: libpod-conmon-7b0d95f24d990466d7ec6eda8e8e7aa1a8dc91fc948ca4b147c7c0dbdb3479d2.scope: Deactivated successfully.
Dec 13 04:20:09 compute-0 podman[268712]: 2025-12-13 04:20:09.47008635 +0000 UTC m=+0.039349398 container create 4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:20:09 compute-0 systemd[1]: Started libpod-conmon-4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79.scope.
Dec 13 04:20:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e8b88be0201fa9b4da9882b4843bed8ca40205acb7b456415310b528026574/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e8b88be0201fa9b4da9882b4843bed8ca40205acb7b456415310b528026574/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e8b88be0201fa9b4da9882b4843bed8ca40205acb7b456415310b528026574/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5e8b88be0201fa9b4da9882b4843bed8ca40205acb7b456415310b528026574/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:20:09 compute-0 podman[268712]: 2025-12-13 04:20:09.455953917 +0000 UTC m=+0.025216985 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:20:09 compute-0 podman[268712]: 2025-12-13 04:20:09.58879262 +0000 UTC m=+0.158055708 container init 4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:20:09 compute-0 podman[268712]: 2025-12-13 04:20:09.598635356 +0000 UTC m=+0.167898404 container start 4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:20:09 compute-0 podman[268712]: 2025-12-13 04:20:09.604768632 +0000 UTC m=+0.174031720 container attach 4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:20:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 10 KiB/s wr, 201 op/s
Dec 13 04:20:10 compute-0 lvm[268807]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:20:10 compute-0 lvm[268808]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:20:10 compute-0 lvm[268807]: VG ceph_vg0 finished
Dec 13 04:20:10 compute-0 lvm[268808]: VG ceph_vg1 finished
Dec 13 04:20:10 compute-0 lvm[268810]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:20:10 compute-0 lvm[268810]: VG ceph_vg2 finished
Dec 13 04:20:10 compute-0 cool_buck[268729]: {}
Dec 13 04:20:10 compute-0 systemd[1]: libpod-4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79.scope: Deactivated successfully.
Dec 13 04:20:10 compute-0 podman[268712]: 2025-12-13 04:20:10.416993375 +0000 UTC m=+0.986256433 container died 4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:20:10 compute-0 systemd[1]: libpod-4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79.scope: Consumed 1.424s CPU time.
Dec 13 04:20:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5e8b88be0201fa9b4da9882b4843bed8ca40205acb7b456415310b528026574-merged.mount: Deactivated successfully.
Dec 13 04:20:10 compute-0 podman[268712]: 2025-12-13 04:20:10.625278653 +0000 UTC m=+1.194541701 container remove 4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_buck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:20:10 compute-0 systemd[1]: libpod-conmon-4fdd5d44ffb7c05b4b25e2cda0e8891bfa751cabb20fec02665a7fe25a470b79.scope: Deactivated successfully.
Dec 13 04:20:10 compute-0 sudo[268633]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:20:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:20:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:20:10 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:20:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640528258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1640528258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:10 compute-0 sudo[268827]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:20:10 compute-0 sudo[268827]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:20:10 compute-0 sudo[268827]: pam_unix(sudo:session): session closed for user root
Dec 13 04:20:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e363 do_prune osdmap full prune enabled
Dec 13 04:20:10 compute-0 ceph-mon[75071]: pgmap v1447: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 148 KiB/s rd, 10 KiB/s wr, 201 op/s
Dec 13 04:20:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:20:10 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:20:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1640528258' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1640528258' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e364 e364: 3 total, 3 up, 3 in
Dec 13 04:20:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e364: 3 total, 3 up, 3 in
Dec 13 04:20:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 4.5 KiB/s wr, 137 op/s
Dec 13 04:20:11 compute-0 ceph-mon[75071]: osdmap e364: 3 total, 3 up, 3 in
Dec 13 04:20:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e364 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e364 do_prune osdmap full prune enabled
Dec 13 04:20:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e365 e365: 3 total, 3 up, 3 in
Dec 13 04:20:12 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e365: 3 total, 3 up, 3 in
Dec 13 04:20:12 compute-0 nova_compute[243704]: 2025-12-13 04:20:12.700 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:13 compute-0 nova_compute[243704]: 2025-12-13 04:20:13.259 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:13 compute-0 ceph-mon[75071]: pgmap v1449: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 4.5 KiB/s wr, 137 op/s
Dec 13 04:20:13 compute-0 ceph-mon[75071]: osdmap e365: 3 total, 3 up, 3 in
Dec 13 04:20:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 4.4 KiB/s wr, 136 op/s
Dec 13 04:20:14 compute-0 ceph-mon[75071]: pgmap v1451: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 103 KiB/s rd, 4.4 KiB/s wr, 136 op/s
Dec 13 04:20:14 compute-0 podman[268852]: 2025-12-13 04:20:14.987945977 +0000 UTC m=+0.113106757 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 04:20:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 5.5 KiB/s wr, 148 op/s
Dec 13 04:20:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593817390' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/593817390' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:15 compute-0 nova_compute[243704]: 2025-12-13 04:20:15.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:15 compute-0 nova_compute[243704]: 2025-12-13 04:20:15.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:15 compute-0 nova_compute[243704]: 2025-12-13 04:20:15.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:20:15 compute-0 nova_compute[243704]: 2025-12-13 04:20:15.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:20:15 compute-0 nova_compute[243704]: 2025-12-13 04:20:15.894 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:20:16 compute-0 ovn_controller[145204]: 2025-12-13T04:20:16Z|00185|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 13 04:20:16 compute-0 nova_compute[243704]: 2025-12-13 04:20:16.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:16 compute-0 nova_compute[243704]: 2025-12-13 04:20:16.904 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:20:16 compute-0 nova_compute[243704]: 2025-12-13 04:20:16.905 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:20:16 compute-0 nova_compute[243704]: 2025-12-13 04:20:16.905 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:20:16 compute-0 nova_compute[243704]: 2025-12-13 04:20:16.906 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:20:16 compute-0 nova_compute[243704]: 2025-12-13 04:20:16.906 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:20:17 compute-0 ceph-mon[75071]: pgmap v1452: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 5.5 KiB/s wr, 148 op/s
Dec 13 04:20:17 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/593817390' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:17 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/593817390' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e365 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e365 do_prune osdmap full prune enabled
Dec 13 04:20:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e366 e366: 3 total, 3 up, 3 in
Dec 13 04:20:17 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e366: 3 total, 3 up, 3 in
Dec 13 04:20:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:20:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1386071718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.547 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.641s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.701 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.730 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.731 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4431MB free_disk=59.98817210458219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.731 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.731 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.799 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.800 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.822 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:20:17 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:20:17.968 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:20:17 compute-0 nova_compute[243704]: 2025-12-13 04:20:17.969 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:17 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:20:17.969 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:20:18 compute-0 nova_compute[243704]: 2025-12-13 04:20:18.261 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:20:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882217903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:20:18 compute-0 nova_compute[243704]: 2025-12-13 04:20:18.345 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:20:18 compute-0 nova_compute[243704]: 2025-12-13 04:20:18.350 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:20:18 compute-0 nova_compute[243704]: 2025-12-13 04:20:18.366 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:20:18 compute-0 nova_compute[243704]: 2025-12-13 04:20:18.501 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:20:18 compute-0 nova_compute[243704]: 2025-12-13 04:20:18.501 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:20:18 compute-0 ceph-mon[75071]: osdmap e366: 3 total, 3 up, 3 in
Dec 13 04:20:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1386071718' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:20:18 compute-0 ceph-mon[75071]: pgmap v1454: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.5 KiB/s wr, 53 op/s
Dec 13 04:20:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1882217903' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:20:18 compute-0 podman[268918]: 2025-12-13 04:20:18.925430191 +0000 UTC m=+0.073678949 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec 13 04:20:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790817240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1790817240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:19 compute-0 nova_compute[243704]: 2025-12-13 04:20:19.501 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:19 compute-0 nova_compute[243704]: 2025-12-13 04:20:19.502 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:19 compute-0 nova_compute[243704]: 2025-12-13 04:20:19.502 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:19 compute-0 nova_compute[243704]: 2025-12-13 04:20:19.503 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 5.9 KiB/s wr, 91 op/s
Dec 13 04:20:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1790817240' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1790817240' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:19 compute-0 nova_compute[243704]: 2025-12-13 04:20:19.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1898699550' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1898699550' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:20 compute-0 ceph-mon[75071]: pgmap v1455: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 66 KiB/s rd, 5.9 KiB/s wr, 91 op/s
Dec 13 04:20:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1898699550' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1898699550' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634767969' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/634767969' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 5.1 KiB/s wr, 79 op/s
Dec 13 04:20:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/634767969' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/634767969' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:22 compute-0 nova_compute[243704]: 2025-12-13 04:20:22.734 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:22 compute-0 ceph-mon[75071]: pgmap v1456: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 57 KiB/s rd, 5.1 KiB/s wr, 79 op/s
Dec 13 04:20:22 compute-0 nova_compute[243704]: 2025-12-13 04:20:22.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:22 compute-0 nova_compute[243704]: 2025-12-13 04:20:22.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:20:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1557368522' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1557368522' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/73353492' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/73353492' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:23 compute-0 nova_compute[243704]: 2025-12-13 04:20:23.263 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Dec 13 04:20:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1557368522' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1557368522' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/73353492' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/73353492' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:24 compute-0 nova_compute[243704]: 2025-12-13 04:20:24.874 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:20:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/330958733' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/330958733' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:25 compute-0 ceph-mon[75071]: pgmap v1457: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 4.9 KiB/s wr, 103 op/s
Dec 13 04:20:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/330958733' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/330958733' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3929583128' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3929583128' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.6 KiB/s wr, 117 op/s
Dec 13 04:20:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:20:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4123794636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:20:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4123794636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3929583128' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3929583128' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:26 compute-0 ceph-mon[75071]: pgmap v1458: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 4.6 KiB/s wr, 117 op/s
Dec 13 04:20:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4123794636' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:20:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4123794636' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:20:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e366 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.5 KiB/s wr, 114 op/s
Dec 13 04:20:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e366 do_prune osdmap full prune enabled
Dec 13 04:20:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e367 e367: 3 total, 3 up, 3 in
Dec 13 04:20:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e367: 3 total, 3 up, 3 in
Dec 13 04:20:27 compute-0 nova_compute[243704]: 2025-12-13 04:20:27.735 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:20:27.971 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:20:28 compute-0 nova_compute[243704]: 2025-12-13 04:20:28.264 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:29 compute-0 ceph-mon[75071]: pgmap v1459: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 4.5 KiB/s wr, 114 op/s
Dec 13 04:20:29 compute-0 ceph-mon[75071]: osdmap e367: 3 total, 3 up, 3 in
Dec 13 04:20:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 3.7 KiB/s wr, 126 op/s
Dec 13 04:20:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:20:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1512992498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:31 compute-0 ceph-mon[75071]: pgmap v1461: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 3.7 KiB/s wr, 126 op/s
Dec 13 04:20:31 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1512992498' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 3.7 KiB/s wr, 126 op/s
Dec 13 04:20:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:32 compute-0 ceph-mon[75071]: pgmap v1462: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 92 KiB/s rd, 3.7 KiB/s wr, 126 op/s
Dec 13 04:20:32 compute-0 nova_compute[243704]: 2025-12-13 04:20:32.737 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:33 compute-0 nova_compute[243704]: 2025-12-13 04:20:33.266 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 148 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 6.0 MiB/s wr, 106 op/s
Dec 13 04:20:34 compute-0 ceph-mon[75071]: pgmap v1463: 305 pgs: 305 active+clean; 148 MiB data, 404 MiB used, 60 GiB / 60 GiB avail; 76 KiB/s rd, 6.0 MiB/s wr, 106 op/s
Dec 13 04:20:34 compute-0 podman[268940]: 2025-12-13 04:20:34.982844829 +0000 UTC m=+0.114175167 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec 13 04:20:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:20:35.095 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:20:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:20:35.096 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:20:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:20:35.096 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:20:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 236 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 15 MiB/s wr, 112 op/s
Dec 13 04:20:36 compute-0 ceph-mon[75071]: pgmap v1464: 305 pgs: 305 active+clean; 236 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 15 MiB/s wr, 112 op/s
Dec 13 04:20:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 236 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 15 MiB/s wr, 112 op/s
Dec 13 04:20:37 compute-0 nova_compute[243704]: 2025-12-13 04:20:37.739 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:38 compute-0 nova_compute[243704]: 2025-12-13 04:20:38.268 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:38 compute-0 ceph-mon[75071]: pgmap v1465: 305 pgs: 305 active+clean; 236 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 75 KiB/s rd, 15 MiB/s wr, 112 op/s
Dec 13 04:20:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 364 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 23 MiB/s wr, 115 op/s
Dec 13 04:20:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:20:40
Dec 13 04:20:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:20:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:20:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'backups', 'volumes', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', 'images']
Dec 13 04:20:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:20:41 compute-0 ceph-mon[75071]: pgmap v1466: 305 pgs: 305 active+clean; 364 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 23 MiB/s wr, 115 op/s
Dec 13 04:20:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 364 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 23 MiB/s wr, 76 op/s
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:20:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e367 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:42 compute-0 nova_compute[243704]: 2025-12-13 04:20:42.741 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:20:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:20:43 compute-0 ceph-mon[75071]: pgmap v1467: 305 pgs: 305 active+clean; 364 MiB data, 620 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 23 MiB/s wr, 76 op/s
Dec 13 04:20:43 compute-0 nova_compute[243704]: 2025-12-13 04:20:43.271 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 520 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 36 MiB/s wr, 80 op/s
Dec 13 04:20:45 compute-0 ceph-mon[75071]: pgmap v1468: 305 pgs: 305 active+clean; 520 MiB data, 744 MiB used, 59 GiB / 60 GiB avail; 47 KiB/s rd, 36 MiB/s wr, 80 op/s
Dec 13 04:20:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 840 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 58 MiB/s wr, 157 op/s
Dec 13 04:20:45 compute-0 podman[268967]: 2025-12-13 04:20:45.923317464 +0000 UTC m=+0.065549438 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:20:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:20:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3215999554' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:46 compute-0 ceph-mon[75071]: pgmap v1469: 305 pgs: 305 active+clean; 840 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 58 MiB/s wr, 157 op/s
Dec 13 04:20:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3215999554' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e367 do_prune osdmap full prune enabled
Dec 13 04:20:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e368 e368: 3 total, 3 up, 3 in
Dec 13 04:20:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e368: 3 total, 3 up, 3 in
Dec 13 04:20:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 840 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 60 MiB/s wr, 136 op/s
Dec 13 04:20:47 compute-0 nova_compute[243704]: 2025-12-13 04:20:47.743 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:48 compute-0 nova_compute[243704]: 2025-12-13 04:20:48.273 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 75 MiB/s wr, 209 op/s
Dec 13 04:20:49 compute-0 podman[268986]: 2025-12-13 04:20:49.97265679 +0000 UTC m=+0.117281041 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true)
Dec 13 04:20:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e368 do_prune osdmap full prune enabled
Dec 13 04:20:50 compute-0 ceph-mon[75071]: osdmap e368: 3 total, 3 up, 3 in
Dec 13 04:20:50 compute-0 ceph-mon[75071]: pgmap v1471: 305 pgs: 305 active+clean; 840 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 60 MiB/s wr, 136 op/s
Dec 13 04:20:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e369 e369: 3 total, 3 up, 3 in
Dec 13 04:20:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e369: 3 total, 3 up, 3 in
Dec 13 04:20:51 compute-0 ceph-mon[75071]: pgmap v1472: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 125 KiB/s rd, 75 MiB/s wr, 209 op/s
Dec 13 04:20:51 compute-0 ceph-mon[75071]: osdmap e369: 3 total, 3 up, 3 in
Dec 13 04:20:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 156 KiB/s rd, 74 MiB/s wr, 256 op/s
Dec 13 04:20:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:20:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1098257563' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e369 do_prune osdmap full prune enabled
Dec 13 04:20:52 compute-0 ceph-mon[75071]: pgmap v1474: 305 pgs: 305 active+clean; 1.1 GiB data, 1.3 GiB used, 59 GiB / 60 GiB avail; 156 KiB/s rd, 74 MiB/s wr, 256 op/s
Dec 13 04:20:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1098257563' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:20:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e370 e370: 3 total, 3 up, 3 in
Dec 13 04:20:52 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e370: 3 total, 3 up, 3 in
Dec 13 04:20:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 1.8039361072283686e-06 of space, bias 1.0, pg target 0.0005411808321685106 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0003593781035246231 of space, bias 1.0, pg target 0.10781343105738693 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.016672231291408084 of space, bias 1.0, pg target 5.001669387422425 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666055234558936 of space, bias 1.0, pg target 0.1966486294194886 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.136823627968478e-06 of space, bias 4.0, pg target 0.001341451881002804 quantized to 16 (current 16)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011255555284235201 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012381110812658724 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:20:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015007407045646937 quantized to 32 (current 32)
Dec 13 04:20:52 compute-0 nova_compute[243704]: 2025-12-13 04:20:52.745 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:52 compute-0 ovn_controller[145204]: 2025-12-13T04:20:52Z|00186|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 13 04:20:53 compute-0 nova_compute[243704]: 2025-12-13 04:20:53.276 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:53 compute-0 ceph-mon[75071]: osdmap e370: 3 total, 3 up, 3 in
Dec 13 04:20:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 820 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 51 MiB/s wr, 200 op/s
Dec 13 04:20:54 compute-0 ceph-mon[75071]: pgmap v1476: 305 pgs: 305 active+clean; 820 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 123 KiB/s rd, 51 MiB/s wr, 200 op/s
Dec 13 04:20:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 228 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 51 MiB/s wr, 242 op/s
Dec 13 04:20:56 compute-0 ceph-mon[75071]: pgmap v1477: 305 pgs: 305 active+clean; 228 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 136 KiB/s rd, 51 MiB/s wr, 242 op/s
Dec 13 04:20:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e370 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:20:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e370 do_prune osdmap full prune enabled
Dec 13 04:20:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e371 e371: 3 total, 3 up, 3 in
Dec 13 04:20:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e371: 3 total, 3 up, 3 in
Dec 13 04:20:57 compute-0 nova_compute[243704]: 2025-12-13 04:20:57.747 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 228 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 19 MiB/s wr, 125 op/s
Dec 13 04:20:58 compute-0 nova_compute[243704]: 2025-12-13 04:20:58.278 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:20:58 compute-0 ceph-mon[75071]: osdmap e371: 3 total, 3 up, 3 in
Dec 13 04:20:58 compute-0 ceph-mon[75071]: pgmap v1479: 305 pgs: 305 active+clean; 228 MiB data, 492 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 19 MiB/s wr, 125 op/s
Dec 13 04:20:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 808 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 90 MiB/s wr, 296 op/s
Dec 13 04:21:00 compute-0 ceph-mon[75071]: pgmap v1480: 305 pgs: 305 active+clean; 808 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 90 MiB/s wr, 296 op/s
Dec 13 04:21:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 808 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 MiB/s wr, 249 op/s
Dec 13 04:21:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e371 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:02 compute-0 nova_compute[243704]: 2025-12-13 04:21:02.750 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e371 do_prune osdmap full prune enabled
Dec 13 04:21:02 compute-0 ceph-mon[75071]: pgmap v1481: 305 pgs: 305 active+clean; 808 MiB data, 972 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 76 MiB/s wr, 249 op/s
Dec 13 04:21:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e372 e372: 3 total, 3 up, 3 in
Dec 13 04:21:02 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e372: 3 total, 3 up, 3 in
Dec 13 04:21:03 compute-0 nova_compute[243704]: 2025-12-13 04:21:03.280 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 965 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 90 MiB/s wr, 227 op/s
Dec 13 04:21:03 compute-0 ceph-mon[75071]: osdmap e372: 3 total, 3 up, 3 in
Dec 13 04:21:04 compute-0 ceph-mon[75071]: pgmap v1483: 305 pgs: 305 active+clean; 965 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 90 MiB/s wr, 227 op/s
Dec 13 04:21:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 109 MiB/s wr, 324 op/s
Dec 13 04:21:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2679230408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:06 compute-0 podman[269009]: 2025-12-13 04:21:06.130877988 +0000 UTC m=+0.133429148 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:21:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e372 do_prune osdmap full prune enabled
Dec 13 04:21:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e373 e373: 3 total, 3 up, 3 in
Dec 13 04:21:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e373: 3 total, 3 up, 3 in
Dec 13 04:21:07 compute-0 ceph-mon[75071]: pgmap v1484: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 109 MiB/s wr, 324 op/s
Dec 13 04:21:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2679230408' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e373 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e373 do_prune osdmap full prune enabled
Dec 13 04:21:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e374 e374: 3 total, 3 up, 3 in
Dec 13 04:21:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e374: 3 total, 3 up, 3 in
Dec 13 04:21:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 54 MiB/s wr, 211 op/s
Dec 13 04:21:07 compute-0 nova_compute[243704]: 2025-12-13 04:21:07.752 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:08 compute-0 ceph-mon[75071]: osdmap e373: 3 total, 3 up, 3 in
Dec 13 04:21:08 compute-0 ceph-mon[75071]: osdmap e374: 3 total, 3 up, 3 in
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.065 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.065 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.077 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.155 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.156 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.167 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.168 243708 INFO nova.compute.claims [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.262 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.291 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/330900982' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.837 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.575s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.845 243708 DEBUG nova.compute.provider_tree [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.861 243708 DEBUG nova.scheduler.client.report [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.882 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.726s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.883 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.925 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.926 243708 DEBUG nova.network.neutron [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.946 243708 INFO nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:21:08 compute-0 nova_compute[243704]: 2025-12-13 04:21:08.964 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:21:09 compute-0 nova_compute[243704]: 2025-12-13 04:21:09.435 243708 DEBUG nova.policy [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95b4d334bdca4149b6fe3499375d46e6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75b261e8b1c44ab8b079f57244a812c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:21:09 compute-0 nova_compute[243704]: 2025-12-13 04:21:09.539 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:21:09 compute-0 nova_compute[243704]: 2025-12-13 04:21:09.541 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:21:09 compute-0 nova_compute[243704]: 2025-12-13 04:21:09.541 243708 INFO nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Creating image(s)
Dec 13 04:21:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 48 MiB/s wr, 225 op/s
Dec 13 04:21:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3156708454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:10 compute-0 ceph-mon[75071]: pgmap v1487: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 129 KiB/s rd, 54 MiB/s wr, 211 op/s
Dec 13 04:21:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/330900982' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.630 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.655 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.684 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.688 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.715 243708 DEBUG nova.network.neutron [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Successfully created port: 39966274-17ef-4b21-91cd-f57096630a08 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.756 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.757 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.757 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.758 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.781 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:10 compute-0 nova_compute[243704]: 2025-12-13 04:21:10.786 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:10 compute-0 sudo[269133]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:21:10 compute-0 sudo[269133]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:10 compute-0 sudo[269133]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:10 compute-0 sudo[269177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:21:10 compute-0 sudo[269177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.083 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.157 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] resizing rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.248 243708 DEBUG nova.objects.instance [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 82d113ec-d32a-4dd6-b8f4-bab622ea377f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.265 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.266 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Ensure instance console log exists: /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.266 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.267 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.267 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:11 compute-0 sudo[269177]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:21:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:21:11 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:21:11 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:21:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:21:11 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:21:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e374 do_prune osdmap full prune enabled
Dec 13 04:21:11 compute-0 ceph-mon[75071]: pgmap v1488: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 141 KiB/s rd, 48 MiB/s wr, 225 op/s
Dec 13 04:21:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3156708454' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:21:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:21:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e375 e375: 3 total, 3 up, 3 in
Dec 13 04:21:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e375: 3 total, 3 up, 3 in
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.639 243708 DEBUG nova.network.neutron [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Successfully updated port: 39966274-17ef-4b21-91cd-f57096630a08 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:21:11 compute-0 sudo[269306]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:21:11 compute-0 sudo[269306]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:11 compute-0 sudo[269306]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.656 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.656 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquired lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.656 243708 DEBUG nova.network.neutron [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:21:11 compute-0 sudo[269331]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:21:11 compute-0 sudo[269331]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.722 243708 DEBUG nova.compute.manager [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-changed-39966274-17ef-4b21-91cd-f57096630a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.722 243708 DEBUG nova.compute.manager [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Refreshing instance network info cache due to event network-changed-39966274-17ef-4b21-91cd-f57096630a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.722 243708 DEBUG oslo_concurrency.lockutils [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 46 KiB/s wr, 44 op/s
Dec 13 04:21:11 compute-0 nova_compute[243704]: 2025-12-13 04:21:11.793 243708 DEBUG nova.network.neutron [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:21:12 compute-0 podman[269369]: 2025-12-13 04:21:12.028002145 +0000 UTC m=+0.047838588 container create 0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:21:12 compute-0 systemd[1]: Started libpod-conmon-0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154.scope.
Dec 13 04:21:12 compute-0 podman[269369]: 2025-12-13 04:21:12.00898823 +0000 UTC m=+0.028824673 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:12 compute-0 podman[269369]: 2025-12-13 04:21:12.128595643 +0000 UTC m=+0.148432086 container init 0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_morse, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Dec 13 04:21:12 compute-0 podman[269369]: 2025-12-13 04:21:12.137279688 +0000 UTC m=+0.157116131 container start 0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:21:12 compute-0 podman[269369]: 2025-12-13 04:21:12.141545224 +0000 UTC m=+0.161381667 container attach 0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_morse, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:21:12 compute-0 modest_morse[269385]: 167 167
Dec 13 04:21:12 compute-0 systemd[1]: libpod-0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154.scope: Deactivated successfully.
Dec 13 04:21:12 compute-0 podman[269369]: 2025-12-13 04:21:12.14731664 +0000 UTC m=+0.167153083 container died 0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_morse, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecadaf5358c168df0020df7f3526d4d8cea83c2613a48a436cf51c3e1d5cc12f-merged.mount: Deactivated successfully.
Dec 13 04:21:12 compute-0 podman[269369]: 2025-12-13 04:21:12.197584663 +0000 UTC m=+0.217421106 container remove 0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_morse, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:12 compute-0 systemd[1]: libpod-conmon-0dbfec1622395d121e95921c2ce6f337abdd628b7fa15cc4ec72af891940c154.scope: Deactivated successfully.
Dec 13 04:21:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:12 compute-0 podman[269408]: 2025-12-13 04:21:12.371418207 +0000 UTC m=+0.046897573 container create 9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_chatterjee, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:21:12 compute-0 systemd[1]: Started libpod-conmon-9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59.scope.
Dec 13 04:21:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e097ecd18b021eb474870e4632a24d8f41f9094e51191c437c08d3d9b0111/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e097ecd18b021eb474870e4632a24d8f41f9094e51191c437c08d3d9b0111/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e097ecd18b021eb474870e4632a24d8f41f9094e51191c437c08d3d9b0111/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e097ecd18b021eb474870e4632a24d8f41f9094e51191c437c08d3d9b0111/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf2e097ecd18b021eb474870e4632a24d8f41f9094e51191c437c08d3d9b0111/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e375 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:12 compute-0 podman[269408]: 2025-12-13 04:21:12.353466271 +0000 UTC m=+0.028945657 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:12 compute-0 podman[269408]: 2025-12-13 04:21:12.451207661 +0000 UTC m=+0.126687047 container init 9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:21:12 compute-0 podman[269408]: 2025-12-13 04:21:12.461584001 +0000 UTC m=+0.137063367 container start 9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:21:12 compute-0 podman[269408]: 2025-12-13 04:21:12.465394755 +0000 UTC m=+0.140874201 container attach 9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.518 243708 DEBUG nova.network.neutron [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updating instance_info_cache with network_info: [{"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.536 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Releasing lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.537 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Instance network_info: |[{"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.537 243708 DEBUG oslo_concurrency.lockutils [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.537 243708 DEBUG nova.network.neutron [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Refreshing network info cache for port 39966274-17ef-4b21-91cd-f57096630a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.540 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Start _get_guest_xml network_info=[{"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.545 243708 WARNING nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.552 243708 DEBUG nova.virt.libvirt.host [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.553 243708 DEBUG nova.virt.libvirt.host [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.556 243708 DEBUG nova.virt.libvirt.host [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.557 243708 DEBUG nova.virt.libvirt.host [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.557 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.558 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.558 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.558 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.558 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.559 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.559 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.559 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.559 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.559 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.560 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.560 243708 DEBUG nova.virt.hardware [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.563 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:12 compute-0 nova_compute[243704]: 2025-12-13 04:21:12.754 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:12 compute-0 vigorous_chatterjee[269424]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:21:12 compute-0 vigorous_chatterjee[269424]: --> All data devices are unavailable
Dec 13 04:21:13 compute-0 systemd[1]: libpod-9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59.scope: Deactivated successfully.
Dec 13 04:21:13 compute-0 podman[269408]: 2025-12-13 04:21:13.008641665 +0000 UTC m=+0.684121061 container died 9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_chatterjee, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:21:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e375 do_prune osdmap full prune enabled
Dec 13 04:21:13 compute-0 nova_compute[243704]: 2025-12-13 04:21:13.294 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:13 compute-0 ceph-mon[75071]: osdmap e375: 3 total, 3 up, 3 in
Dec 13 04:21:13 compute-0 ceph-mon[75071]: pgmap v1490: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 46 KiB/s wr, 44 op/s
Dec 13 04:21:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e376 e376: 3 total, 3 up, 3 in
Dec 13 04:21:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e376: 3 total, 3 up, 3 in
Dec 13 04:21:13 compute-0 nova_compute[243704]: 2025-12-13 04:21:13.566 243708 DEBUG nova.network.neutron [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updated VIF entry in instance network info cache for port 39966274-17ef-4b21-91cd-f57096630a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:21:13 compute-0 nova_compute[243704]: 2025-12-13 04:21:13.566 243708 DEBUG nova.network.neutron [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updating instance_info_cache with network_info: [{"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:13 compute-0 nova_compute[243704]: 2025-12-13 04:21:13.583 243708 DEBUG oslo_concurrency.lockutils [req-485a3d2c-895a-4901-98dd-33d7ad46ed0e req-b40173ba-00c4-4eb0-a878-3ab60f5b36d7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-cf2e097ecd18b021eb474870e4632a24d8f41f9094e51191c437c08d3d9b0111-merged.mount: Deactivated successfully.
Dec 13 04:21:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3773335065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:13 compute-0 nova_compute[243704]: 2025-12-13 04:21:13.622 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/213444164' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:13 compute-0 podman[269408]: 2025-12-13 04:21:13.634530586 +0000 UTC m=+1.310009962 container remove 9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:21:13 compute-0 nova_compute[243704]: 2025-12-13 04:21:13.653 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:13 compute-0 nova_compute[243704]: 2025-12-13 04:21:13.658 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:13 compute-0 sudo[269331]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:13 compute-0 systemd[1]: libpod-conmon-9a10d38212222f522c5be57c7495525a6bc2e3807e700d6b1b2391c22ecd7e59.scope: Deactivated successfully.
Dec 13 04:21:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.4 MiB/s wr, 72 op/s
Dec 13 04:21:13 compute-0 sudo[269497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:21:13 compute-0 sudo[269497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:13 compute-0 sudo[269497]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:13 compute-0 sudo[269522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:21:13 compute-0 sudo[269522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:14 compute-0 podman[269579]: 2025-12-13 04:21:14.137320328 +0000 UTC m=+0.062896696 container create 8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:21:14 compute-0 systemd[1]: Started libpod-conmon-8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7.scope.
Dec 13 04:21:14 compute-0 podman[269579]: 2025-12-13 04:21:14.107983093 +0000 UTC m=+0.033559521 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/454402031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:14 compute-0 podman[269579]: 2025-12-13 04:21:14.229701793 +0000 UTC m=+0.155278191 container init 8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:21:14 compute-0 podman[269579]: 2025-12-13 04:21:14.244773062 +0000 UTC m=+0.170349450 container start 8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_neumann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:21:14 compute-0 podman[269579]: 2025-12-13 04:21:14.249243433 +0000 UTC m=+0.174819821 container attach 8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:21:14 compute-0 romantic_neumann[269595]: 167 167
Dec 13 04:21:14 compute-0 systemd[1]: libpod-8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7.scope: Deactivated successfully.
Dec 13 04:21:14 compute-0 podman[269579]: 2025-12-13 04:21:14.252607095 +0000 UTC m=+0.178183453 container died 8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_neumann, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.259 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.601s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.266 243708 DEBUG nova.virt.libvirt.vif [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:21:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1073473030',display_name='tempest-VolumesSnapshotTestJSON-instance-1073473030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1073473030',id=20,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKPnTXWA/nRRsLHWrwXAHms3VRz6l/lbjc4hB16QPBAiHUqhG7ID6+zyLAzbkNvKYrpOjixr8f39czXdR92AR1H4axBtfRdy5Zuwva9dLrUra+4xXkGSoq6ZlYmYAsOcrQ==',key_name='tempest-keypair-1808277459',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75b261e8b1c44ab8b079f57244a812c7',ramdisk_id='',reservation_id='r-f8p6ns36',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-524347860',owner_user_name='tempest-VolumesSnapshotTestJSON-524347860-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:21:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95b4d334bdca4149b6fe3499375d46e6',uuid=82d113ec-d32a-4dd6-b8f4-bab622ea377f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.268 243708 DEBUG nova.network.os_vif_util [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converting VIF {"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.270 243708 DEBUG nova.network.os_vif_util [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:bc:47,bridge_name='br-int',has_traffic_filtering=True,id=39966274-17ef-4b21-91cd-f57096630a08,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39966274-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.273 243708 DEBUG nova.objects.instance [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 82d113ec-d32a-4dd6-b8f4-bab622ea377f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-9a7480c9937920c940fd562935bcdad93d0b0ab8ada73774d265d18aff4fda23-merged.mount: Deactivated successfully.
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.291 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <uuid>82d113ec-d32a-4dd6-b8f4-bab622ea377f</uuid>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <name>instance-00000014</name>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1073473030</nova:name>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:21:12</nova:creationTime>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:user uuid="95b4d334bdca4149b6fe3499375d46e6">tempest-VolumesSnapshotTestJSON-524347860-project-member</nova:user>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:project uuid="75b261e8b1c44ab8b079f57244a812c7">tempest-VolumesSnapshotTestJSON-524347860</nova:project>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <nova:port uuid="39966274-17ef-4b21-91cd-f57096630a08">
Dec 13 04:21:14 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <system>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <entry name="serial">82d113ec-d32a-4dd6-b8f4-bab622ea377f</entry>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <entry name="uuid">82d113ec-d32a-4dd6-b8f4-bab622ea377f</entry>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </system>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <os>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   </os>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <features>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   </features>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk">
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       </source>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk.config">
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       </source>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:21:14 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:be:bc:47"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <target dev="tap39966274-17"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/console.log" append="off"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <video>
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </video>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:21:14 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:21:14 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:21:14 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:21:14 compute-0 nova_compute[243704]: </domain>
Dec 13 04:21:14 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.294 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Preparing to wait for external event network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.294 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.295 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.295 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.296 243708 DEBUG nova.virt.libvirt.vif [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:21:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1073473030',display_name='tempest-VolumesSnapshotTestJSON-instance-1073473030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1073473030',id=20,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKPnTXWA/nRRsLHWrwXAHms3VRz6l/lbjc4hB16QPBAiHUqhG7ID6+zyLAzbkNvKYrpOjixr8f39czXdR92AR1H4axBtfRdy5Zuwva9dLrUra+4xXkGSoq6ZlYmYAsOcrQ==',key_name='tempest-keypair-1808277459',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75b261e8b1c44ab8b079f57244a812c7',ramdisk_id='',reservation_id='r-f8p6ns36',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-524347860',owner_user_name='tempest-VolumesSnapshotTestJSON-524347860-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:21:08Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95b4d334bdca4149b6fe3499375d46e6',uuid=82d113ec-d32a-4dd6-b8f4-bab622ea377f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.296 243708 DEBUG nova.network.os_vif_util [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converting VIF {"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.296 243708 DEBUG nova.network.os_vif_util [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:be:bc:47,bridge_name='br-int',has_traffic_filtering=True,id=39966274-17ef-4b21-91cd-f57096630a08,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39966274-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.297 243708 DEBUG os_vif [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:bc:47,bridge_name='br-int',has_traffic_filtering=True,id=39966274-17ef-4b21-91cd-f57096630a08,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39966274-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.297 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.298 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.298 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.303 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.303 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap39966274-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.304 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap39966274-17, col_values=(('external_ids', {'iface-id': '39966274-17ef-4b21-91cd-f57096630a08', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:be:bc:47', 'vm-uuid': '82d113ec-d32a-4dd6-b8f4-bab622ea377f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:14 compute-0 podman[269579]: 2025-12-13 04:21:14.304979594 +0000 UTC m=+0.230555932 container remove 8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_neumann, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Dec 13 04:21:14 compute-0 NetworkManager[48899]: <info>  [1765599674.3079] manager: (tap39966274-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/102)
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.309 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.317 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.318 243708 INFO os_vif [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:be:bc:47,bridge_name='br-int',has_traffic_filtering=True,id=39966274-17ef-4b21-91cd-f57096630a08,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39966274-17')
Dec 13 04:21:14 compute-0 systemd[1]: libpod-conmon-8e2fee19e84915e4006b0d52e052e3e783a11f543139f16180747a2b55a1fde7.scope: Deactivated successfully.
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.367 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.367 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.367 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No VIF found with MAC fa:16:3e:be:bc:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.368 243708 INFO nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Using config drive
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.388 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:14 compute-0 podman[269641]: 2025-12-13 04:21:14.482698813 +0000 UTC m=+0.044744854 container create 66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wozniak, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:21:14 compute-0 systemd[1]: Started libpod-conmon-66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4.scope.
Dec 13 04:21:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e376 do_prune osdmap full prune enabled
Dec 13 04:21:14 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2f913de0f66aa6d472bccc5a1dbad759c8d6180fb1d467efbdcc23d5c71fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2f913de0f66aa6d472bccc5a1dbad759c8d6180fb1d467efbdcc23d5c71fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2f913de0f66aa6d472bccc5a1dbad759c8d6180fb1d467efbdcc23d5c71fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ca2f913de0f66aa6d472bccc5a1dbad759c8d6180fb1d467efbdcc23d5c71fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e377 e377: 3 total, 3 up, 3 in
Dec 13 04:21:14 compute-0 podman[269641]: 2025-12-13 04:21:14.46674067 +0000 UTC m=+0.028786721 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:14 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e377: 3 total, 3 up, 3 in
Dec 13 04:21:14 compute-0 ceph-mon[75071]: osdmap e376: 3 total, 3 up, 3 in
Dec 13 04:21:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3773335065' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/213444164' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:14 compute-0 ceph-mon[75071]: pgmap v1492: 305 pgs: 305 active+clean; 1.1 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 1.4 MiB/s wr, 72 op/s
Dec 13 04:21:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/454402031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:14 compute-0 podman[269641]: 2025-12-13 04:21:14.579416885 +0000 UTC m=+0.141462996 container init 66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wozniak, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:21:14 compute-0 podman[269641]: 2025-12-13 04:21:14.591511214 +0000 UTC m=+0.153557245 container start 66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wozniak, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:21:14 compute-0 podman[269641]: 2025-12-13 04:21:14.5943249 +0000 UTC m=+0.156371051 container attach 66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]: {
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:     "0": [
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:         {
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "devices": [
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "/dev/loop3"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             ],
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_name": "ceph_lv0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_size": "21470642176",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "name": "ceph_lv0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "tags": {
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cluster_name": "ceph",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.crush_device_class": "",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.encrypted": "0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.objectstore": "bluestore",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osd_id": "0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.type": "block",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.vdo": "0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.with_tpm": "0"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             },
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "type": "block",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "vg_name": "ceph_vg0"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:         }
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:     ],
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:     "1": [
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:         {
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "devices": [
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "/dev/loop4"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             ],
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_name": "ceph_lv1",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_size": "21470642176",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "name": "ceph_lv1",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "tags": {
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cluster_name": "ceph",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.crush_device_class": "",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.encrypted": "0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.objectstore": "bluestore",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osd_id": "1",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.type": "block",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.vdo": "0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.with_tpm": "0"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             },
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "type": "block",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "vg_name": "ceph_vg1"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:         }
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:     ],
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:     "2": [
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:         {
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "devices": [
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "/dev/loop5"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             ],
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_name": "ceph_lv2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_size": "21470642176",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "name": "ceph_lv2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "tags": {
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.cluster_name": "ceph",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.crush_device_class": "",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.encrypted": "0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.objectstore": "bluestore",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osd_id": "2",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.type": "block",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.vdo": "0",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:                 "ceph.with_tpm": "0"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             },
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "type": "block",
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:             "vg_name": "ceph_vg2"
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:         }
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]:     ]
Dec 13 04:21:14 compute-0 fervent_wozniak[269657]: }
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.904 243708 INFO nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Creating config drive at /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/disk.config
Dec 13 04:21:14 compute-0 nova_compute[243704]: 2025-12-13 04:21:14.914 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7szes6f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:14 compute-0 systemd[1]: libpod-66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4.scope: Deactivated successfully.
Dec 13 04:21:14 compute-0 podman[269641]: 2025-12-13 04:21:14.939697105 +0000 UTC m=+0.501743146 container died 66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wozniak, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:21:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ca2f913de0f66aa6d472bccc5a1dbad759c8d6180fb1d467efbdcc23d5c71fb-merged.mount: Deactivated successfully.
Dec 13 04:21:14 compute-0 podman[269641]: 2025-12-13 04:21:14.993058781 +0000 UTC m=+0.555104822 container remove 66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:21:15 compute-0 systemd[1]: libpod-conmon-66e7217c6c03c8b3dbde5358fbfe445f12926bd212ef2d40ce69b5cce8ae5ed4.scope: Deactivated successfully.
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.064 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe7szes6f" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:15 compute-0 sudo[269522]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.094 243708 DEBUG nova.storage.rbd_utils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.102 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/disk.config 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:15 compute-0 sudo[269697]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:21:15 compute-0 sudo[269697]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:15 compute-0 sudo[269697]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:15 compute-0 sudo[269734]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:21:15 compute-0 sudo[269734]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.271 243708 DEBUG oslo_concurrency.processutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/disk.config 82d113ec-d32a-4dd6-b8f4-bab622ea377f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.273 243708 INFO nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Deleting local config drive /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f/disk.config because it was imported into RBD.
Dec 13 04:21:15 compute-0 kernel: tap39966274-17: entered promiscuous mode
Dec 13 04:21:15 compute-0 NetworkManager[48899]: <info>  [1765599675.3517] manager: (tap39966274-17): new Tun device (/org/freedesktop/NetworkManager/Devices/103)
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.351 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:15 compute-0 ovn_controller[145204]: 2025-12-13T04:21:15Z|00187|binding|INFO|Claiming lport 39966274-17ef-4b21-91cd-f57096630a08 for this chassis.
Dec 13 04:21:15 compute-0 ovn_controller[145204]: 2025-12-13T04:21:15Z|00188|binding|INFO|39966274-17ef-4b21-91cd-f57096630a08: Claiming fa:16:3e:be:bc:47 10.100.0.5
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.367 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:bc:47 10.100.0.5'], port_security=['fa:16:3e:be:bc:47 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '82d113ec-d32a-4dd6-b8f4-bab622ea377f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75b261e8b1c44ab8b079f57244a812c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f05bcc9b-944b-48d9-ae53-ba48ad133a97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d2e886-04ee-44a8-8e42-fd2f33ff96d6, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=39966274-17ef-4b21-91cd-f57096630a08) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.369 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 39966274-17ef-4b21-91cd-f57096630a08 in datapath 0f93b436-b78f-4a08-8363-5ff70f1f85b9 bound to our chassis
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.371 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f93b436-b78f-4a08-8363-5ff70f1f85b9
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.387 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7c0a9df5-ebf7-40b2-9d39-396d492a8bf0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.389 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f93b436-b1 in ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.392 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f93b436-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.392 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[90ff5caf-e2b7-43dd-84da-da2fde215db7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.393 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3811b53e-0bbb-4d10-bf3b-51d150324692]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 systemd-udevd[269782]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.415 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[446fb82c-a97d-46b4-b24c-469185ff3ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 systemd-machined[206767]: New machine qemu-20-instance-00000014.
Dec 13 04:21:15 compute-0 NetworkManager[48899]: <info>  [1765599675.4274] device (tap39966274-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:21:15 compute-0 NetworkManager[48899]: <info>  [1765599675.4284] device (tap39966274-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.434 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:15 compute-0 systemd[1]: Started Virtual Machine qemu-20-instance-00000014.
Dec 13 04:21:15 compute-0 ovn_controller[145204]: 2025-12-13T04:21:15Z|00189|binding|INFO|Setting lport 39966274-17ef-4b21-91cd-f57096630a08 ovn-installed in OVS
Dec 13 04:21:15 compute-0 ovn_controller[145204]: 2025-12-13T04:21:15Z|00190|binding|INFO|Setting lport 39966274-17ef-4b21-91cd-f57096630a08 up in Southbound
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.442 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.450 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d2548635-accc-4cca-a487-ec9942af69ce]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.495 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[01dab56f-47e5-4cc8-ab7d-9f054e795855]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.503 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc35cef-0cd1-4584-9d1a-3df591870923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 NetworkManager[48899]: <info>  [1765599675.5054] manager: (tap0f93b436-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/104)
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.545 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b9578ea8-45db-4595-bdbe-50acf87d6841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.551 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[07dea6d9-e9ac-4dde-8f27-b7f2da93e801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ceph-mon[75071]: osdmap e377: 3 total, 3 up, 3 in
Dec 13 04:21:15 compute-0 NetworkManager[48899]: <info>  [1765599675.5806] device (tap0f93b436-b0): carrier: link connected
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.587 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[e22f2385-1fff-4376-a03f-bc2469b85564]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 podman[269820]: 2025-12-13 04:21:15.589792141 +0000 UTC m=+0.045284778 container create aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.609 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c3ff97e9-ac17-4c2a-aebc-c3b6ffd45db0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f93b436-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:a1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441425, 'reachable_time': 33338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269838, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.630 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3cc693-b00e-4bc7-8e5b-0a90e2b08d91]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:a1e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441425, 'tstamp': 441425}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 269844, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 systemd[1]: Started libpod-conmon-aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972.scope.
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.651 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[58f6b483-85f2-4e99-ae75-1a0afdf48388]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f93b436-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:a1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 64], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441425, 'reachable_time': 33338, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 269845, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 podman[269820]: 2025-12-13 04:21:15.571850935 +0000 UTC m=+0.027343602 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:15 compute-0 podman[269820]: 2025-12-13 04:21:15.688618261 +0000 UTC m=+0.144110928 container init aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldstine, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.691 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bae550db-cbcd-4c0f-865e-7bafcdf80d0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 podman[269820]: 2025-12-13 04:21:15.702468206 +0000 UTC m=+0.157960843 container start aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:21:15 compute-0 podman[269820]: 2025-12-13 04:21:15.706667461 +0000 UTC m=+0.162160098 container attach aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldstine, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:21:15 compute-0 wizardly_goldstine[269846]: 167 167
Dec 13 04:21:15 compute-0 systemd[1]: libpod-aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972.scope: Deactivated successfully.
Dec 13 04:21:15 compute-0 conmon[269846]: conmon aea5ee2e928cdce65949 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972.scope/container/memory.events
Dec 13 04:21:15 compute-0 podman[269820]: 2025-12-13 04:21:15.712588171 +0000 UTC m=+0.168080808 container died aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldstine, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-708ab05fab6465a643bb86736cb8c733d57f4bb17965a9553c586eaca58c04bc-merged.mount: Deactivated successfully.
Dec 13 04:21:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 3.5 MiB/s wr, 126 op/s
Dec 13 04:21:15 compute-0 podman[269820]: 2025-12-13 04:21:15.763383388 +0000 UTC m=+0.218876025 container remove aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:21:15 compute-0 systemd[1]: libpod-conmon-aea5ee2e928cdce6594939ff20fff8f4ff706fcb039c852a30362f6c2efdd972.scope: Deactivated successfully.
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.783 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[deb913ee-6337-4e31-ae11-0b5021e566f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.784 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f93b436-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.785 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.785 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f93b436-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.787 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:15 compute-0 kernel: tap0f93b436-b0: entered promiscuous mode
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.789 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.790 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f93b436-b0, col_values=(('external_ids', {'iface-id': '33b3b6f8-467a-4e08-8d35-798a9ec0adcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:15 compute-0 NetworkManager[48899]: <info>  [1765599675.7916] manager: (tap0f93b436-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.791 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:15 compute-0 ovn_controller[145204]: 2025-12-13T04:21:15Z|00191|binding|INFO|Releasing lport 33b3b6f8-467a-4e08-8d35-798a9ec0adcc from this chassis (sb_readonly=0)
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.810 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.811 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f93b436-b78f-4a08-8363-5ff70f1f85b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f93b436-b78f-4a08-8363-5ff70f1f85b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.812 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[52dd1f7e-fa02-4642-a452-338f019cccfa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.813 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-0f93b436-b78f-4a08-8363-5ff70f1f85b9
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/0f93b436-b78f-4a08-8363-5ff70f1f85b9.pid.haproxy
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 0f93b436-b78f-4a08-8363-5ff70f1f85b9
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:21:15 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:15.814 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'env', 'PROCESS_TAG=haproxy-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f93b436-b78f-4a08-8363-5ff70f1f85b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.884 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.889 243708 DEBUG nova.compute.manager [req-971dc5e3-d959-4303-a56e-73d5d80fd1fe req-9b90d0aa-1af7-4c50-9b76-d5f8613f819b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.889 243708 DEBUG oslo_concurrency.lockutils [req-971dc5e3-d959-4303-a56e-73d5d80fd1fe req-9b90d0aa-1af7-4c50-9b76-d5f8613f819b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.889 243708 DEBUG oslo_concurrency.lockutils [req-971dc5e3-d959-4303-a56e-73d5d80fd1fe req-9b90d0aa-1af7-4c50-9b76-d5f8613f819b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.890 243708 DEBUG oslo_concurrency.lockutils [req-971dc5e3-d959-4303-a56e-73d5d80fd1fe req-9b90d0aa-1af7-4c50-9b76-d5f8613f819b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:15 compute-0 nova_compute[243704]: 2025-12-13 04:21:15.890 243708 DEBUG nova.compute.manager [req-971dc5e3-d959-4303-a56e-73d5d80fd1fe req-9b90d0aa-1af7-4c50-9b76-d5f8613f819b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Processing event network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:21:15 compute-0 podman[269878]: 2025-12-13 04:21:15.97514921 +0000 UTC m=+0.050332225 container create bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:21:16 compute-0 systemd[1]: Started libpod-conmon-bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e.scope.
Dec 13 04:21:16 compute-0 podman[269878]: 2025-12-13 04:21:15.953668908 +0000 UTC m=+0.028851943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:21:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87602f61a52d7905fd716fa8e5a88afb6fc4f1090c26db616e0ed097f2507038/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87602f61a52d7905fd716fa8e5a88afb6fc4f1090c26db616e0ed097f2507038/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87602f61a52d7905fd716fa8e5a88afb6fc4f1090c26db616e0ed097f2507038/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87602f61a52d7905fd716fa8e5a88afb6fc4f1090c26db616e0ed097f2507038/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:16 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508471646' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:16 compute-0 podman[269878]: 2025-12-13 04:21:16.094528857 +0000 UTC m=+0.169711892 container init bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_tu, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:21:16 compute-0 podman[269878]: 2025-12-13 04:21:16.104008634 +0000 UTC m=+0.179191649 container start bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_tu, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:21:16 compute-0 podman[269892]: 2025-12-13 04:21:16.107956631 +0000 UTC m=+0.090089753 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec 13 04:21:16 compute-0 podman[269878]: 2025-12-13 04:21:16.107139379 +0000 UTC m=+0.182322434 container attach bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_tu, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.253 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599676.2526815, 82d113ec-d32a-4dd6-b8f4-bab622ea377f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.254 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] VM Started (Lifecycle Event)
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.257 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.264 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.269 243708 INFO nova.virt.libvirt.driver [-] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Instance spawned successfully.
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.269 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.273 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.278 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.290 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.291 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.291 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.292 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.292 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.293 243708 DEBUG nova.virt.libvirt.driver [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.297 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.298 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599676.2530649, 82d113ec-d32a-4dd6-b8f4-bab622ea377f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.298 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] VM Paused (Lifecycle Event)
Dec 13 04:21:16 compute-0 podman[269981]: 2025-12-13 04:21:16.30890983 +0000 UTC m=+0.078197081 container create ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.328 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.333 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599676.2661934, 82d113ec-d32a-4dd6-b8f4-bab622ea377f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.334 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] VM Resumed (Lifecycle Event)
Dec 13 04:21:16 compute-0 systemd[1]: Started libpod-conmon-ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8.scope.
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.353 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.358 243708 INFO nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Took 6.82 seconds to spawn the instance on the hypervisor.
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.359 243708 DEBUG nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.360 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:21:16 compute-0 podman[269981]: 2025-12-13 04:21:16.270659223 +0000 UTC m=+0.039946504 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.389 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:21:16 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57072a8edc9069627fcb9b66ed786cc5dc0e26f0b7d7714fdb792578bd97b939/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.454 243708 INFO nova.compute.manager [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Took 8.34 seconds to build instance.
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.470 243708 DEBUG oslo_concurrency.lockutils [None req-f63ed3e5-aafb-448d-8980-909edf93e57f 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:16 compute-0 podman[269981]: 2025-12-13 04:21:16.470750878 +0000 UTC m=+0.240038129 container init ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:21:16 compute-0 podman[269981]: 2025-12-13 04:21:16.480914414 +0000 UTC m=+0.250201665 container start ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:21:16 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[270006]: [NOTICE]   (270012) : New worker (270017) forked
Dec 13 04:21:16 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[270006]: [NOTICE]   (270012) : Loading success.
Dec 13 04:21:16 compute-0 ceph-mon[75071]: pgmap v1494: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 3.5 MiB/s wr, 126 op/s
Dec 13 04:21:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/508471646' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.894 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.894 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.895 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.895 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:21:16 compute-0 nova_compute[243704]: 2025-12-13 04:21:16.895 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:16 compute-0 lvm[270082]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:21:16 compute-0 lvm[270082]: VG ceph_vg0 finished
Dec 13 04:21:16 compute-0 lvm[270084]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:21:16 compute-0 lvm[270084]: VG ceph_vg1 finished
Dec 13 04:21:16 compute-0 lvm[270085]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:21:16 compute-0 lvm[270085]: VG ceph_vg2 finished
Dec 13 04:21:17 compute-0 lvm[270086]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:21:17 compute-0 lvm[270086]: VG ceph_vg1 finished
Dec 13 04:21:17 compute-0 sweet_tu[269922]: {}
Dec 13 04:21:17 compute-0 systemd[1]: libpod-bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e.scope: Deactivated successfully.
Dec 13 04:21:17 compute-0 systemd[1]: libpod-bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e.scope: Consumed 1.602s CPU time.
Dec 13 04:21:17 compute-0 podman[270108]: 2025-12-13 04:21:17.161491457 +0000 UTC m=+0.032636926 container died bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_tu, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:21:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-87602f61a52d7905fd716fa8e5a88afb6fc4f1090c26db616e0ed097f2507038-merged.mount: Deactivated successfully.
Dec 13 04:21:17 compute-0 podman[270108]: 2025-12-13 04:21:17.27446333 +0000 UTC m=+0.145608769 container remove bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:21:17 compute-0 systemd[1]: libpod-conmon-bbc6c4ff2bb1e05e1d9d26e358d23a80ea72acbbf97c14d85669cb30d08de41e.scope: Deactivated successfully.
Dec 13 04:21:17 compute-0 sudo[269734]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:21:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:21:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:21:17 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:21:17 compute-0 sudo[270121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:21:17 compute-0 sudo[270121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:21:17 compute-0 sudo[270121]: pam_unix(sudo:session): session closed for user root
Dec 13 04:21:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434568559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.581 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.686s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.669 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.670 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:21:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 3.5 MiB/s wr, 123 op/s
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.758 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.898 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.900 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4286MB free_disk=59.96742162667215GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.900 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.900 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.963 243708 DEBUG nova.compute.manager [req-3cfbfd74-feaa-437f-a55f-1de7c140dcde req-9c8d34b4-d8f5-4dfb-8020-de22bb449c73 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.964 243708 DEBUG oslo_concurrency.lockutils [req-3cfbfd74-feaa-437f-a55f-1de7c140dcde req-9c8d34b4-d8f5-4dfb-8020-de22bb449c73 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.964 243708 DEBUG oslo_concurrency.lockutils [req-3cfbfd74-feaa-437f-a55f-1de7c140dcde req-9c8d34b4-d8f5-4dfb-8020-de22bb449c73 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.964 243708 DEBUG oslo_concurrency.lockutils [req-3cfbfd74-feaa-437f-a55f-1de7c140dcde req-9c8d34b4-d8f5-4dfb-8020-de22bb449c73 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.964 243708 DEBUG nova.compute.manager [req-3cfbfd74-feaa-437f-a55f-1de7c140dcde req-9c8d34b4-d8f5-4dfb-8020-de22bb449c73 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] No waiting events found dispatching network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.964 243708 WARNING nova.compute.manager [req-3cfbfd74-feaa-437f-a55f-1de7c140dcde req-9c8d34b4-d8f5-4dfb-8020-de22bb449c73 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received unexpected event network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 for instance with vm_state active and task_state None.
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.987 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.988 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:21:17 compute-0 nova_compute[243704]: 2025-12-13 04:21:17.988 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:21:18 compute-0 nova_compute[243704]: 2025-12-13 04:21:18.026 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:21:18 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:21:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2434568559' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:18 compute-0 ceph-mon[75071]: pgmap v1495: 305 pgs: 305 active+clean; 1.2 GiB data, 1.4 GiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 3.5 MiB/s wr, 123 op/s
Dec 13 04:21:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1290638165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:18 compute-0 nova_compute[243704]: 2025-12-13 04:21:18.682 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:18 compute-0 nova_compute[243704]: 2025-12-13 04:21:18.688 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:21:18 compute-0 nova_compute[243704]: 2025-12-13 04:21:18.707 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:21:18 compute-0 nova_compute[243704]: 2025-12-13 04:21:18.741 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:21:18 compute-0 nova_compute[243704]: 2025-12-13 04:21:18.741 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.308 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1290638165' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.742 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.742 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.743 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:21:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 45 MiB/s wr, 336 op/s
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.938 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.939 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.944 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.944 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.944 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.945 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 82d113ec-d32a-4dd6-b8f4-bab622ea377f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:19 compute-0 nova_compute[243704]: 2025-12-13 04:21:19.971 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.067 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.067 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.073 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.074 243708 INFO nova.compute.claims [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.190 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.554 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:20 compute-0 NetworkManager[48899]: <info>  [1765599680.5553] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Dec 13 04:21:20 compute-0 NetworkManager[48899]: <info>  [1765599680.5567] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/107)
Dec 13 04:21:20 compute-0 ceph-mon[75071]: pgmap v1496: 305 pgs: 305 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 45 MiB/s wr, 336 op/s
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.759 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:20 compute-0 ovn_controller[145204]: 2025-12-13T04:21:20Z|00192|binding|INFO|Releasing lport 33b3b6f8-467a-4e08-8d35-798a9ec0adcc from this chassis (sb_readonly=0)
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.782 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4029194890' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.854 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.664s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.865 243708 DEBUG nova.compute.provider_tree [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.882 243708 DEBUG nova.scheduler.client.report [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.902 243708 DEBUG nova.compute.manager [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-changed-39966274-17ef-4b21-91cd-f57096630a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.903 243708 DEBUG nova.compute.manager [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Refreshing instance network info cache due to event network-changed-39966274-17ef-4b21-91cd-f57096630a08. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.905 243708 DEBUG oslo_concurrency.lockutils [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.908 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.909 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:21:20 compute-0 podman[270193]: 2025-12-13 04:21:20.939987899 +0000 UTC m=+0.080584356 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.972 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:21:20 compute-0 nova_compute[243704]: 2025-12-13 04:21:20.972 243708 DEBUG nova.network.neutron [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.043 243708 INFO nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.063 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.113 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updating instance_info_cache with network_info: [{"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.150 243708 DEBUG nova.policy [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '439e16bdacdd484cbdfe5b2ff762e327', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ad8ea73576b4cf9aad3a876effca617', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.155 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.155 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.156 243708 DEBUG oslo_concurrency.lockutils [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.156 243708 DEBUG nova.network.neutron [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Refreshing network info cache for port 39966274-17ef-4b21-91cd-f57096630a08 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.158 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.158 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.158 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.159 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.159 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.184 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.185 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.186 243708 INFO nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Creating image(s)
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.223 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.253 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.281 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.285 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.413 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.414 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.415 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.416 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.442 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:21 compute-0 nova_compute[243704]: 2025-12-13 04:21:21.447 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 44 MiB/s wr, 327 op/s
Dec 13 04:21:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4029194890' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:22 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:22.480 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:21:22 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:22.481 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:21:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:22 compute-0 nova_compute[243704]: 2025-12-13 04:21:22.485 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:22 compute-0 nova_compute[243704]: 2025-12-13 04:21:22.632 243708 DEBUG nova.network.neutron [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Successfully created port: 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:21:22 compute-0 nova_compute[243704]: 2025-12-13 04:21:22.792 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:22 compute-0 nova_compute[243704]: 2025-12-13 04:21:22.822 243708 DEBUG nova.network.neutron [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updated VIF entry in instance network info cache for port 39966274-17ef-4b21-91cd-f57096630a08. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:21:22 compute-0 nova_compute[243704]: 2025-12-13 04:21:22.823 243708 DEBUG nova.network.neutron [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updating instance_info_cache with network_info: [{"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:22 compute-0 nova_compute[243704]: 2025-12-13 04:21:22.844 243708 DEBUG oslo_concurrency.lockutils [req-985cc154-8489-4d37-b32f-480edba0566c req-b0bc0635-348b-4e6f-beb8-18a01f9eed12 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-82d113ec-d32a-4dd6-b8f4-bab622ea377f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.168 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.721s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.241 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] resizing rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:21:23 compute-0 ceph-mon[75071]: pgmap v1497: 305 pgs: 305 active+clean; 1.5 GiB data, 1.7 GiB used, 58 GiB / 60 GiB avail; 2.9 MiB/s rd, 44 MiB/s wr, 327 op/s
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.746 243708 DEBUG nova.network.neutron [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Successfully updated port: 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:21:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 51 MiB/s wr, 266 op/s
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.771 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.772 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquired lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.773 243708 DEBUG nova.network.neutron [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.861 243708 DEBUG nova.compute.manager [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-changed-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.862 243708 DEBUG nova.compute.manager [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Refreshing instance network info cache due to event network-changed-7ab6a504-5168-444a-8e2d-d3cfb84bbe35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.862 243708 DEBUG oslo_concurrency.lockutils [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.869 243708 DEBUG nova.objects.instance [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'migration_context' on Instance uuid 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.880 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.881 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Ensure instance console log exists: /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.882 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.882 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:23 compute-0 nova_compute[243704]: 2025-12-13 04:21:23.882 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:24 compute-0 nova_compute[243704]: 2025-12-13 04:21:24.347 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:24 compute-0 nova_compute[243704]: 2025-12-13 04:21:24.350 243708 DEBUG nova.network.neutron [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:21:24 compute-0 nova_compute[243704]: 2025-12-13 04:21:24.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:21:24 compute-0 nova_compute[243704]: 2025-12-13 04:21:24.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:21:24 compute-0 ceph-mon[75071]: pgmap v1498: 305 pgs: 305 active+clean; 1.7 GiB data, 1.9 GiB used, 58 GiB / 60 GiB avail; 2.4 MiB/s rd, 51 MiB/s wr, 266 op/s
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.152 243708 DEBUG nova.network.neutron [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updating instance_info_cache with network_info: [{"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.173 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Releasing lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.173 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Instance network_info: |[{"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.173 243708 DEBUG oslo_concurrency.lockutils [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.173 243708 DEBUG nova.network.neutron [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Refreshing network info cache for port 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.176 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Start _get_guest_xml network_info=[{"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.182 243708 WARNING nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.190 243708 DEBUG nova.virt.libvirt.host [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.191 243708 DEBUG nova.virt.libvirt.host [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.203 243708 DEBUG nova.virt.libvirt.host [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.204 243708 DEBUG nova.virt.libvirt.host [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.205 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.205 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.205 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.206 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.206 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.206 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.206 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.206 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.207 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.207 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.207 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.207 243708 DEBUG nova.virt.hardware [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.210 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3422360097' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 2.2 MiB/s rd, 65 MiB/s wr, 271 op/s
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.775 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.802 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:25 compute-0 nova_compute[243704]: 2025-12-13 04:21:25.807 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3422360097' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.222 243708 DEBUG nova.network.neutron [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updated VIF entry in instance network info cache for port 7ab6a504-5168-444a-8e2d-d3cfb84bbe35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.224 243708 DEBUG nova.network.neutron [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updating instance_info_cache with network_info: [{"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.237 243708 DEBUG oslo_concurrency.lockutils [req-6374502f-54b5-4276-9fe6-144c4b77f18d req-94c1e142-0a87-4f9a-a861-373789ddd187 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/99861446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.371 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.374 243708 DEBUG nova.virt.libvirt.vif [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:21:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1946557945',display_name='tempest-TestEncryptedCinderVolumes-server-1946557945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1946557945',id=21,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDO7SwNyWikMosTS35n5vqyRQITWepB5C2NwwuFSUchyBEe9nlIjiUO8yORLAN0grWozQJ2L9NBxakLxbVlRFLObQy0bBXmx0nBvUiDPIPhHiffZWEm7lZhQW+gG+qScFw==',key_name='tempest-keypair-2106823029',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-e090j3iu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:21:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=76079c07-6caa-4f82-8285-1ce2d2f6c0a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.377 243708 DEBUG nova.network.os_vif_util [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.378 243708 DEBUG nova.network.os_vif_util [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:11:05,bridge_name='br-int',has_traffic_filtering=True,id=7ab6a504-5168-444a-8e2d-d3cfb84bbe35,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ab6a504-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.380 243708 DEBUG nova.objects.instance [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'pci_devices' on Instance uuid 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.393 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <uuid>76079c07-6caa-4f82-8285-1ce2d2f6c0a8</uuid>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <name>instance-00000015</name>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1946557945</nova:name>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:21:25</nova:creationTime>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:user uuid="439e16bdacdd484cbdfe5b2ff762e327">tempest-TestEncryptedCinderVolumes-1691115809-project-member</nova:user>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:project uuid="3ad8ea73576b4cf9aad3a876effca617">tempest-TestEncryptedCinderVolumes-1691115809</nova:project>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <nova:port uuid="7ab6a504-5168-444a-8e2d-d3cfb84bbe35">
Dec 13 04:21:26 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <system>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <entry name="serial">76079c07-6caa-4f82-8285-1ce2d2f6c0a8</entry>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <entry name="uuid">76079c07-6caa-4f82-8285-1ce2d2f6c0a8</entry>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </system>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <os>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   </os>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <features>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   </features>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk">
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       </source>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk.config">
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       </source>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:21:26 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:d1:11:05"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <target dev="tap7ab6a504-51"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/console.log" append="off"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <video>
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </video>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:21:26 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:21:26 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:21:26 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:21:26 compute-0 nova_compute[243704]: </domain>
Dec 13 04:21:26 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.401 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Preparing to wait for external event network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.402 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.402 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.402 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.403 243708 DEBUG nova.virt.libvirt.vif [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:21:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1946557945',display_name='tempest-TestEncryptedCinderVolumes-server-1946557945',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1946557945',id=21,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDO7SwNyWikMosTS35n5vqyRQITWepB5C2NwwuFSUchyBEe9nlIjiUO8yORLAN0grWozQJ2L9NBxakLxbVlRFLObQy0bBXmx0nBvUiDPIPhHiffZWEm7lZhQW+gG+qScFw==',key_name='tempest-keypair-2106823029',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-e090j3iu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:21:21Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=76079c07-6caa-4f82-8285-1ce2d2f6c0a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.403 243708 DEBUG nova.network.os_vif_util [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.404 243708 DEBUG nova.network.os_vif_util [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d1:11:05,bridge_name='br-int',has_traffic_filtering=True,id=7ab6a504-5168-444a-8e2d-d3cfb84bbe35,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ab6a504-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.404 243708 DEBUG os_vif [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:11:05,bridge_name='br-int',has_traffic_filtering=True,id=7ab6a504-5168-444a-8e2d-d3cfb84bbe35,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ab6a504-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.410 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.411 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.412 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.415 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.416 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ab6a504-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.417 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7ab6a504-51, col_values=(('external_ids', {'iface-id': '7ab6a504-5168-444a-8e2d-d3cfb84bbe35', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d1:11:05', 'vm-uuid': '76079c07-6caa-4f82-8285-1ce2d2f6c0a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.419 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.421 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:21:26 compute-0 NetworkManager[48899]: <info>  [1765599686.4237] manager: (tap7ab6a504-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/108)
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.430 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.431 243708 INFO os_vif [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d1:11:05,bridge_name='br-int',has_traffic_filtering=True,id=7ab6a504-5168-444a-8e2d-d3cfb84bbe35,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ab6a504-51')
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.483 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.484 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.484 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No VIF found with MAC fa:16:3e:d1:11:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.485 243708 INFO nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Using config drive
Dec 13 04:21:26 compute-0 nova_compute[243704]: 2025-12-13 04:21:26.509 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:27 compute-0 ceph-mon[75071]: pgmap v1499: 305 pgs: 305 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 2.2 MiB/s rd, 65 MiB/s wr, 271 op/s
Dec 13 04:21:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/99861446' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:27 compute-0 nova_compute[243704]: 2025-12-13 04:21:27.422 243708 INFO nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Creating config drive at /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/disk.config
Dec 13 04:21:27 compute-0 nova_compute[243704]: 2025-12-13 04:21:27.429 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq5w5ab0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e377 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:27 compute-0 nova_compute[243704]: 2025-12-13 04:21:27.562 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbq5w5ab0" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:27 compute-0 nova_compute[243704]: 2025-12-13 04:21:27.592 243708 DEBUG nova.storage.rbd_utils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:27 compute-0 nova_compute[243704]: 2025-12-13 04:21:27.612 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/disk.config 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 MiB/s wr, 253 op/s
Dec 13 04:21:27 compute-0 nova_compute[243704]: 2025-12-13 04:21:27.797 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:28.484 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.0 MiB/s rd, 84 MiB/s wr, 304 op/s
Dec 13 04:21:29 compute-0 ceph-mon[75071]: pgmap v1500: 305 pgs: 305 active+clean; 1.9 GiB data, 2.1 GiB used, 58 GiB / 60 GiB avail; 2.0 MiB/s rd, 61 MiB/s wr, 253 op/s
Dec 13 04:21:30 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.352 243708 DEBUG oslo_concurrency.processutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/disk.config 76079c07-6caa-4f82-8285-1ce2d2f6c0a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.740s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.352 243708 INFO nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Deleting local config drive /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8/disk.config because it was imported into RBD.
Dec 13 04:21:30 compute-0 kernel: tap7ab6a504-51: entered promiscuous mode
Dec 13 04:21:30 compute-0 NetworkManager[48899]: <info>  [1765599690.4090] manager: (tap7ab6a504-51): new Tun device (/org/freedesktop/NetworkManager/Devices/109)
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.412 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:30 compute-0 ovn_controller[145204]: 2025-12-13T04:21:30Z|00193|binding|INFO|Claiming lport 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 for this chassis.
Dec 13 04:21:30 compute-0 ovn_controller[145204]: 2025-12-13T04:21:30Z|00194|binding|INFO|7ab6a504-5168-444a-8e2d-d3cfb84bbe35: Claiming fa:16:3e:d1:11:05 10.100.0.14
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.420 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:11:05 10.100.0.14'], port_security=['fa:16:3e:d1:11:05 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '76079c07-6caa-4f82-8285-1ce2d2f6c0a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ad8ea73576b4cf9aad3a876effca617', 'neutron:revision_number': '2', 'neutron:security_group_ids': '839929bc-ac81-4da1-84c1-1de9fc403e53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3450aaa3-6969-42ec-bd5e-da6d6d1d73eb, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=7ab6a504-5168-444a-8e2d-d3cfb84bbe35) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.422 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 in datapath 87c0a2c3-5f67-431b-9b32-a688ddc2bc06 bound to our chassis
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.424 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:21:30 compute-0 ovn_controller[145204]: 2025-12-13T04:21:30Z|00195|binding|INFO|Setting lport 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 ovn-installed in OVS
Dec 13 04:21:30 compute-0 ovn_controller[145204]: 2025-12-13T04:21:30Z|00196|binding|INFO|Setting lport 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 up in Southbound
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.435 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.437 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.444 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[36b14d03-f4b7-41f0-8b7b-4ef5e71117d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.445 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87c0a2c3-51 in ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.448 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87c0a2c3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.448 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[be6c9b52-a4f9-4d07-944e-6ff28be68d42]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.449 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6cc01b62-43bf-481f-84cb-ad9ab8d9960b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 systemd-udevd[270518]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:21:30 compute-0 systemd-machined[206767]: New machine qemu-21-instance-00000015.
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.471 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[957fa5af-a97a-4df9-ac29-519aeacba224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 NetworkManager[48899]: <info>  [1765599690.4786] device (tap7ab6a504-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:21:30 compute-0 NetworkManager[48899]: <info>  [1765599690.4797] device (tap7ab6a504-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:21:30 compute-0 systemd[1]: Started Virtual Machine qemu-21-instance-00000015.
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.498 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[dca041c7-c3f3-46c2-b756-e6d01610f2b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.539 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[44acead2-9848-4774-9091-fbd9e32a44a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 NetworkManager[48899]: <info>  [1765599690.5486] manager: (tap87c0a2c3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/110)
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.548 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fabd9ef5-f9cf-4b5a-856c-5e0643d9364a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.584 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b3d6b806-f65e-4853-8011-e7636873af2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.649 243708 DEBUG nova.compute.manager [req-be27859e-2f27-4972-9a1b-5f3c0a40869a req-100c34e2-8dcb-4072-9e2c-fb1b59ca2d8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.650 243708 DEBUG oslo_concurrency.lockutils [req-be27859e-2f27-4972-9a1b-5f3c0a40869a req-100c34e2-8dcb-4072-9e2c-fb1b59ca2d8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.650 243708 DEBUG oslo_concurrency.lockutils [req-be27859e-2f27-4972-9a1b-5f3c0a40869a req-100c34e2-8dcb-4072-9e2c-fb1b59ca2d8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.651 243708 DEBUG oslo_concurrency.lockutils [req-be27859e-2f27-4972-9a1b-5f3c0a40869a req-100c34e2-8dcb-4072-9e2c-fb1b59ca2d8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.651 243708 DEBUG nova.compute.manager [req-be27859e-2f27-4972-9a1b-5f3c0a40869a req-100c34e2-8dcb-4072-9e2c-fb1b59ca2d8c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Processing event network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.649 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[027d0a3a-10bb-4abf-974c-9bc8fbf0c84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 NetworkManager[48899]: <info>  [1765599690.6751] device (tap87c0a2c3-50): carrier: link connected
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.679 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b0df899c-b0e3-4915-bb76-6bab9f94e162]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.701 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d22f174d-4dc3-439a-969f-e180f0f336f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c0a2c3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:9a:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442935, 'reachable_time': 33378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270551, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.716 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6304c5b2-e55f-44dc-817a-88ce17656edb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:9abe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 442935, 'tstamp': 442935}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270552, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.732 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ea86a0c7-7d46-400c-a673-878b55ff4238]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c0a2c3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:9a:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 66], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442935, 'reachable_time': 33378, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270553, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.765 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[718fac2f-1445-462f-8bfb-f2b5229c75c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.827 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[74c2f2c5-815a-49d9-aef9-bd1c65414eab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.829 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c0a2c3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.829 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.830 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87c0a2c3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:30 compute-0 NetworkManager[48899]: <info>  [1765599690.8321] manager: (tap87c0a2c3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.831 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:30 compute-0 kernel: tap87c0a2c3-50: entered promiscuous mode
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.835 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87c0a2c3-50, col_values=(('external_ids', {'iface-id': '4a1239ec-278e-40d8-aa2f-d801913596a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.836 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:30 compute-0 ovn_controller[145204]: 2025-12-13T04:21:30Z|00197|binding|INFO|Releasing lport 4a1239ec-278e-40d8-aa2f-d801913596a6 from this chassis (sb_readonly=0)
Dec 13 04:21:30 compute-0 nova_compute[243704]: 2025-12-13 04:21:30.852 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.853 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.854 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[79d6033e-e43d-4927-abab-38beb93f049a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.854 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:21:30 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:30.855 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'env', 'PROCESS_TAG=haproxy-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:21:30 compute-0 ceph-mon[75071]: pgmap v1501: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 2.0 MiB/s rd, 84 MiB/s wr, 304 op/s
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.042 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.045 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599691.0421042, 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.045 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] VM Started (Lifecycle Event)
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.048 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.053 243708 INFO nova.virt.libvirt.driver [-] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Instance spawned successfully.
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.053 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.070 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.076 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.080 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.081 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.081 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.081 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.082 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.082 243708 DEBUG nova.virt.libvirt.driver [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.108 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.109 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599691.0424232, 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.109 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] VM Paused (Lifecycle Event)
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.127 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.133 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599691.0496953, 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.133 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] VM Resumed (Lifecycle Event)
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.153 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.156 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.162 243708 INFO nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Took 9.98 seconds to spawn the instance on the hypervisor.
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.162 243708 DEBUG nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.187 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.227 243708 INFO nova.compute.manager [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Took 11.18 seconds to build instance.
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.246 243708 DEBUG oslo_concurrency.lockutils [None req-c405c373-b257-4bc7-a275-173db4b1dfa8 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:31 compute-0 podman[270626]: 2025-12-13 04:21:31.274196038 +0000 UTC m=+0.056050121 container create 10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:21:31 compute-0 systemd[1]: Started libpod-conmon-10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953.scope.
Dec 13 04:21:31 compute-0 podman[270626]: 2025-12-13 04:21:31.241503071 +0000 UTC m=+0.023357174 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:21:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:21:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a8e96d0526a3ec425b103806d7b827251defcf05a73f72676b5abaed4440470/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:21:31 compute-0 podman[270626]: 2025-12-13 04:21:31.36354416 +0000 UTC m=+0.145398273 container init 10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:21:31 compute-0 podman[270626]: 2025-12-13 04:21:31.370778396 +0000 UTC m=+0.152632479 container start 10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:21:31 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[270642]: [NOTICE]   (270646) : New worker (270648) forked
Dec 13 04:21:31 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[270642]: [NOTICE]   (270646) : Loading success.
Dec 13 04:21:31 compute-0 nova_compute[243704]: 2025-12-13 04:21:31.420 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e377 do_prune osdmap full prune enabled
Dec 13 04:21:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e378 e378: 3 total, 3 up, 3 in
Dec 13 04:21:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e378: 3 total, 3 up, 3 in
Dec 13 04:21:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 98 KiB/s rd, 67 MiB/s wr, 172 op/s
Dec 13 04:21:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e378 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:32 compute-0 nova_compute[243704]: 2025-12-13 04:21:32.693 243708 DEBUG nova.compute.manager [req-db9671bb-ef83-4313-b65e-eed8e2115a56 req-dfe44161-5570-470d-83e7-04a67354c1a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:32 compute-0 nova_compute[243704]: 2025-12-13 04:21:32.693 243708 DEBUG oslo_concurrency.lockutils [req-db9671bb-ef83-4313-b65e-eed8e2115a56 req-dfe44161-5570-470d-83e7-04a67354c1a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:32 compute-0 nova_compute[243704]: 2025-12-13 04:21:32.694 243708 DEBUG oslo_concurrency.lockutils [req-db9671bb-ef83-4313-b65e-eed8e2115a56 req-dfe44161-5570-470d-83e7-04a67354c1a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:32 compute-0 nova_compute[243704]: 2025-12-13 04:21:32.694 243708 DEBUG oslo_concurrency.lockutils [req-db9671bb-ef83-4313-b65e-eed8e2115a56 req-dfe44161-5570-470d-83e7-04a67354c1a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:32 compute-0 nova_compute[243704]: 2025-12-13 04:21:32.694 243708 DEBUG nova.compute.manager [req-db9671bb-ef83-4313-b65e-eed8e2115a56 req-dfe44161-5570-470d-83e7-04a67354c1a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] No waiting events found dispatching network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:21:32 compute-0 nova_compute[243704]: 2025-12-13 04:21:32.694 243708 WARNING nova.compute.manager [req-db9671bb-ef83-4313-b65e-eed8e2115a56 req-dfe44161-5570-470d-83e7-04a67354c1a3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received unexpected event network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 for instance with vm_state active and task_state None.
Dec 13 04:21:32 compute-0 ovn_controller[145204]: 2025-12-13T04:21:32Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:be:bc:47 10.100.0.5
Dec 13 04:21:32 compute-0 ovn_controller[145204]: 2025-12-13T04:21:32Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:be:bc:47 10.100.0.5
Dec 13 04:21:32 compute-0 ceph-mon[75071]: osdmap e378: 3 total, 3 up, 3 in
Dec 13 04:21:32 compute-0 ceph-mon[75071]: pgmap v1503: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 98 KiB/s rd, 67 MiB/s wr, 172 op/s
Dec 13 04:21:32 compute-0 nova_compute[243704]: 2025-12-13 04:21:32.802 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.2 MiB/s rd, 54 MiB/s wr, 220 op/s
Dec 13 04:21:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e378 do_prune osdmap full prune enabled
Dec 13 04:21:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e379 e379: 3 total, 3 up, 3 in
Dec 13 04:21:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e379: 3 total, 3 up, 3 in
Dec 13 04:21:34 compute-0 nova_compute[243704]: 2025-12-13 04:21:34.765 243708 DEBUG nova.compute.manager [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-changed-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:34 compute-0 nova_compute[243704]: 2025-12-13 04:21:34.765 243708 DEBUG nova.compute.manager [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Refreshing instance network info cache due to event network-changed-7ab6a504-5168-444a-8e2d-d3cfb84bbe35. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:21:34 compute-0 nova_compute[243704]: 2025-12-13 04:21:34.766 243708 DEBUG oslo_concurrency.lockutils [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:34 compute-0 nova_compute[243704]: 2025-12-13 04:21:34.766 243708 DEBUG oslo_concurrency.lockutils [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:34 compute-0 nova_compute[243704]: 2025-12-13 04:21:34.766 243708 DEBUG nova.network.neutron [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Refreshing network info cache for port 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:21:34 compute-0 ceph-mon[75071]: pgmap v1504: 305 pgs: 305 active+clean; 2.2 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 1.2 MiB/s rd, 54 MiB/s wr, 220 op/s
Dec 13 04:21:34 compute-0 ceph-mon[75071]: osdmap e379: 3 total, 3 up, 3 in
Dec 13 04:21:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:35.097 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:35.098 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:21:35.098 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.4 MiB/s rd, 43 MiB/s wr, 333 op/s
Dec 13 04:21:36 compute-0 nova_compute[243704]: 2025-12-13 04:21:36.114 243708 DEBUG nova.network.neutron [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updated VIF entry in instance network info cache for port 7ab6a504-5168-444a-8e2d-d3cfb84bbe35. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:21:36 compute-0 nova_compute[243704]: 2025-12-13 04:21:36.115 243708 DEBUG nova.network.neutron [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updating instance_info_cache with network_info: [{"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:36 compute-0 nova_compute[243704]: 2025-12-13 04:21:36.144 243708 DEBUG oslo_concurrency.lockutils [req-4e98786a-c590-4e97-8617-4498d0b61714 req-1cab8046-5def-44b6-85d1-483764fcc2df 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:36 compute-0 nova_compute[243704]: 2025-12-13 04:21:36.457 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:36 compute-0 ceph-mon[75071]: pgmap v1506: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 3.4 MiB/s rd, 43 MiB/s wr, 333 op/s
Dec 13 04:21:36 compute-0 podman[270657]: 2025-12-13 04:21:36.950364054 +0000 UTC m=+0.093688972 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Dec 13 04:21:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e379 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e379 do_prune osdmap full prune enabled
Dec 13 04:21:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e380 e380: 3 total, 3 up, 3 in
Dec 13 04:21:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e380: 3 total, 3 up, 3 in
Dec 13 04:21:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 11 MiB/s wr, 340 op/s
Dec 13 04:21:37 compute-0 nova_compute[243704]: 2025-12-13 04:21:37.838 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:38 compute-0 ceph-mon[75071]: osdmap e380: 3 total, 3 up, 3 in
Dec 13 04:21:38 compute-0 ceph-mon[75071]: pgmap v1508: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.5 MiB/s rd, 11 MiB/s wr, 340 op/s
Dec 13 04:21:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 6.0 MiB/s rd, 8.2 MiB/s wr, 287 op/s
Dec 13 04:21:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:21:40
Dec 13 04:21:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:21:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:21:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['.rgw.root', 'images', 'backups', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr']
Dec 13 04:21:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:21:40 compute-0 ceph-mon[75071]: pgmap v1509: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 6.0 MiB/s rd, 8.2 MiB/s wr, 287 op/s
Dec 13 04:21:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2163992623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:41 compute-0 nova_compute[243704]: 2025-12-13 04:21:41.521 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.7 MiB/s wr, 206 op/s
Dec 13 04:21:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e380 do_prune osdmap full prune enabled
Dec 13 04:21:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2163992623' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e381 e381: 3 total, 3 up, 3 in
Dec 13 04:21:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e381: 3 total, 3 up, 3 in
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:21:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e381 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e381 do_prune osdmap full prune enabled
Dec 13 04:21:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e382 e382: 3 total, 3 up, 3 in
Dec 13 04:21:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e382: 3 total, 3 up, 3 in
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:21:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:21:42 compute-0 nova_compute[243704]: 2025-12-13 04:21:42.886 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:43 compute-0 ceph-mon[75071]: pgmap v1510: 305 pgs: 305 active+clean; 2.3 GiB data, 2.4 GiB used, 58 GiB / 60 GiB avail; 4.6 MiB/s rd, 4.7 MiB/s wr, 206 op/s
Dec 13 04:21:43 compute-0 ceph-mon[75071]: osdmap e381: 3 total, 3 up, 3 in
Dec 13 04:21:43 compute-0 ceph-mon[75071]: osdmap e382: 3 total, 3 up, 3 in
Dec 13 04:21:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 91 op/s
Dec 13 04:21:45 compute-0 ceph-mon[75071]: pgmap v1513: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.9 MiB/s wr, 91 op/s
Dec 13 04:21:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:21:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526386092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:21:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:21:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1526386092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:21:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.3 MiB/s wr, 112 op/s
Dec 13 04:21:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1526386092' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:21:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1526386092' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:21:46 compute-0 nova_compute[243704]: 2025-12-13 04:21:46.525 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:47 compute-0 podman[270684]: 2025-12-13 04:21:47.023594022 +0000 UTC m=+0.157839710 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec 13 04:21:47 compute-0 ovn_controller[145204]: 2025-12-13T04:21:47Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d1:11:05 10.100.0.14
Dec 13 04:21:47 compute-0 ovn_controller[145204]: 2025-12-13T04:21:47Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d1:11:05 10.100.0.14
Dec 13 04:21:47 compute-0 ceph-mon[75071]: pgmap v1514: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 5.3 MiB/s rd, 5.3 MiB/s wr, 112 op/s
Dec 13 04:21:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 81 op/s
Dec 13 04:21:47 compute-0 nova_compute[243704]: 2025-12-13 04:21:47.888 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:48 compute-0 ceph-mon[75071]: pgmap v1515: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.3 MiB/s wr, 81 op/s
Dec 13 04:21:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.2 MiB/s rd, 8.5 MiB/s wr, 176 op/s
Dec 13 04:21:51 compute-0 ceph-mon[75071]: pgmap v1516: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 3.2 MiB/s rd, 8.5 MiB/s wr, 176 op/s
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.304 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.305 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.326 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.410 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.411 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.422 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.423 243708 INFO nova.compute.claims [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.564 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:51 compute-0 nova_compute[243704]: 2025-12-13 04:21:51.582 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.9 MiB/s wr, 142 op/s
Dec 13 04:21:51 compute-0 podman[270724]: 2025-12-13 04:21:51.933177433 +0000 UTC m=+0.082501978 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:21:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:21:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1770589770' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.192 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.609s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.200 243708 DEBUG nova.compute.provider_tree [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.215 243708 DEBUG nova.scheduler.client.report [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.236 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.825s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.237 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.286 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.287 243708 DEBUG nova.network.neutron [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.306 243708 INFO nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.340 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.389 243708 INFO nova.virt.block_device [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Booting with volume e07eb99d-7eb7-4de2-8fa7-89833d3e3f15 at /dev/vdb
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.440 243708 DEBUG nova.policy [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '550f7240611f4009aa1ef70200760184', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '4d091687ce954cb1b60b66f0e250a2a6', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:21:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e382 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.506 243708 DEBUG os_brick.utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.512 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001518994606650496 of space, bias 1.0, pg target 0.4556983819951488 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03439617268762963 of space, bias 1.0, pg target 10.318851806288889 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.000348293386167355 of space, bias 1.0, pg target 0.10100508198853295 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665578304250282 of space, bias 1.0, pg target 0.1933017708232582 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0104339911601285e-06 of space, bias 4.0, pg target 0.001172103429745749 quantized to 16 (current 16)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:21:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.533 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.534 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[14d1b97d-eabb-4447-8b17-85f43d11ba9c]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.536 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.549 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.549 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9b251e-c3b6-42da-9fd3-36b02d9785bd]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.552 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.567 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.567 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[ecad42df-e16b-4a56-8b0b-bfc0be645df7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.569 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae48f38-f8a9-47f8-9409-5ad6c210c776]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.570 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.604 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "nvme version" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.609 243708 DEBUG os_brick.initiator.connectors.lightos [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.609 243708 DEBUG os_brick.initiator.connectors.lightos [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.610 243708 DEBUG os_brick.initiator.connectors.lightos [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.611 243708 DEBUG os_brick.utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] <== get_connector_properties: return (104ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.611 243708 DEBUG nova.virt.block_device [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Updating existing volume attachment record: 40440148-8d38-422d-92cd-9f4b26bb4e8d _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:21:52 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.929 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:52.999 243708 DEBUG nova.network.neutron [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Successfully created port: 81069290-53e4-4b72-85b9-c14104457590 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:21:53 compute-0 ceph-mon[75071]: pgmap v1517: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.9 MiB/s wr, 142 op/s
Dec 13 04:21:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1770589770' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.189 243708 DEBUG oslo_concurrency.lockutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.189 243708 DEBUG oslo_concurrency.lockutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.202 243708 DEBUG nova.objects.instance [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'flavor' on Instance uuid 82d113ec-d32a-4dd6-b8f4-bab622ea377f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.216 243708 INFO nova.virt.libvirt.driver [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Ignoring supplied device name: /dev/vdb
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.225 243708 DEBUG oslo_concurrency.lockutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1180339073' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.428 243708 DEBUG oslo_concurrency.lockutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.429 243708 DEBUG oslo_concurrency.lockutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.429 243708 INFO nova.compute.manager [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Attaching volume 2e5dffee-496b-489e-a090-9b3ef09a90d6 to /dev/vdb
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.597 243708 DEBUG os_brick.utils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.599 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.620 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.621 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[a1d3ff60-a80d-4d5c-865a-3e902120200d]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.623 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.638 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.016s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.639 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[b622620f-e9ba-4b20-add5-61e7321b9964]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.641 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.653 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.653 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[f999b0e5-416b-43b6-a123-66db13b4cb72]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.659 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.660 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.661 243708 INFO nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Creating image(s)
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.688 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.714 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.739 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.743 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.655 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[0f450319-9985-4d11-b78b-455087cc670c]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.776 243708 DEBUG oslo_concurrency.processutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 125 op/s
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.804 243708 DEBUG oslo_concurrency.processutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "nvme version" returned: 0 in 0.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.807 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.807 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.808 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.808 243708 DEBUG os_brick.utils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] <== get_connector_properties: return (210ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.809 243708 DEBUG nova.virt.block_device [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updating existing volume attachment record: 5271eff2-0fc5-425b-b5c2-dd03cdf67d89 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.835 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.836 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.837 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.838 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.878 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.885 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 a1553556-dd0c-4271-b7de-2c5739155591_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:53 compute-0 nova_compute[243704]: 2025-12-13 04:21:53.971 243708 DEBUG nova.network.neutron [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Successfully updated port: 81069290-53e4-4b72-85b9-c14104457590 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.091 243708 DEBUG nova.compute.manager [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-changed-81069290-53e4-4b72-85b9-c14104457590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.091 243708 DEBUG nova.compute.manager [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Refreshing instance network info cache due to event network-changed-81069290-53e4-4b72-85b9-c14104457590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.092 243708 DEBUG oslo_concurrency.lockutils [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.092 243708 DEBUG oslo_concurrency.lockutils [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.093 243708 DEBUG nova.network.neutron [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Refreshing network info cache for port 81069290-53e4-4b72-85b9-c14104457590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.393 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.436 243708 DEBUG nova.network.neutron [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:21:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3350672117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1180339073' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.704 243708 DEBUG nova.objects.instance [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'flavor' on Instance uuid 82d113ec-d32a-4dd6-b8f4-bab622ea377f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.723 243708 DEBUG nova.virt.libvirt.driver [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Attempting to attach volume 2e5dffee-496b-489e-a090-9b3ef09a90d6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.728 243708 DEBUG nova.virt.libvirt.guest [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:21:54 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:21:54 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-2e5dffee-496b-489e-a090-9b3ef09a90d6">
Dec 13 04:21:54 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:54 compute-0 nova_compute[243704]:   </source>
Dec 13 04:21:54 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:21:54 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:54 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:21:54 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:21:54 compute-0 nova_compute[243704]:   <serial>2e5dffee-496b-489e-a090-9b3ef09a90d6</serial>
Dec 13 04:21:54 compute-0 nova_compute[243704]: </disk>
Dec 13 04:21:54 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.766 243708 DEBUG nova.network.neutron [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.782 243708 DEBUG oslo_concurrency.lockutils [req-91fdd145-71ee-4f74-88d6-66338e7f7815 req-6416acf1-36c1-4c4d-83b1-56481df5db7a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.783 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquired lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.784 243708 DEBUG nova.network.neutron [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.885 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 a1553556-dd0c-4271-b7de-2c5739155591_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.000s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.915 243708 DEBUG nova.network.neutron [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.958 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] resizing rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.992 243708 DEBUG nova.virt.libvirt.driver [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.993 243708 DEBUG nova.virt.libvirt.driver [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.994 243708 DEBUG nova.virt.libvirt.driver [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:54 compute-0 nova_compute[243704]: 2025-12-13 04:21:54.994 243708 DEBUG nova.virt.libvirt.driver [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No VIF found with MAC fa:16:3e:be:bc:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:21:55 compute-0 nova_compute[243704]: 2025-12-13 04:21:55.038 243708 DEBUG nova.objects.instance [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lazy-loading 'migration_context' on Instance uuid a1553556-dd0c-4271-b7de-2c5739155591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:55 compute-0 nova_compute[243704]: 2025-12-13 04:21:55.048 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:21:55 compute-0 nova_compute[243704]: 2025-12-13 04:21:55.049 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Ensure instance console log exists: /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:21:55 compute-0 nova_compute[243704]: 2025-12-13 04:21:55.049 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:55 compute-0 nova_compute[243704]: 2025-12-13 04:21:55.049 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:55 compute-0 nova_compute[243704]: 2025-12-13 04:21:55.050 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:55 compute-0 nova_compute[243704]: 2025-12-13 04:21:55.507 243708 DEBUG oslo_concurrency.lockutils [None req-2ed40cde-fd2c-45c8-95f3-5ed3772512f5 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Dec 13 04:21:56 compute-0 ceph-mon[75071]: pgmap v1518: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.3 MiB/s rd, 6.1 MiB/s wr, 125 op/s
Dec 13 04:21:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3350672117' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:56 compute-0 nova_compute[243704]: 2025-12-13 04:21:56.566 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.061 243708 DEBUG nova.network.neutron [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Updating instance_info_cache with network_info: [{"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.078 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Releasing lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.079 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Instance network_info: |[{"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.082 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Start _get_guest_xml network_info=[{"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vdb': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [{'boot_index': -1, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-e07eb99d-7eb7-4de2-8fa7-89833d3e3f15', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'e07eb99d-7eb7-4de2-8fa7-89833d3e3f15', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': False, 'cacheable': False}, 'status': 'reserved', 'instance': 'a1553556-dd0c-4271-b7de-2c5739155591', 'attached_at': '', 'detached_at': '', 'volume_id': 'e07eb99d-7eb7-4de2-8fa7-89833d3e3f15', 'serial': 'e07eb99d-7eb7-4de2-8fa7-89833d3e3f15'}, 'disk_bus': 'virtio', 'attachment_id': '40440148-8d38-422d-92cd-9f4b26bb4e8d', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vdb', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.086 243708 WARNING nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.091 243708 DEBUG nova.virt.libvirt.host [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.091 243708 DEBUG nova.virt.libvirt.host [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.094 243708 DEBUG nova.virt.libvirt.host [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.094 243708 DEBUG nova.virt.libvirt.host [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.095 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.095 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.095 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.096 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.096 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.096 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.096 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.096 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.097 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.097 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.097 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.097 243708 DEBUG nova.virt.hardware [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.102 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e382 do_prune osdmap full prune enabled
Dec 13 04:21:57 compute-0 ceph-mon[75071]: pgmap v1519: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Dec 13 04:21:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e383 e383: 3 total, 3 up, 3 in
Dec 13 04:21:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e383: 3 total, 3 up, 3 in
Dec 13 04:21:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e383 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:21:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2974377435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.682 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.711 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.716 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 423 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec 13 04:21:57 compute-0 nova_compute[243704]: 2025-12-13 04:21:57.931 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:21:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1395723083' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.296 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:58 compute-0 ceph-mon[75071]: osdmap e383: 3 total, 3 up, 3 in
Dec 13 04:21:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2974377435' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1395723083' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.322 243708 DEBUG nova.virt.libvirt.vif [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:21:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1970772568',display_name='tempest-instance-1970772568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1970772568',id=22,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiGEwyTdooaKbAVWN0g8c6leJ40yeXokRq3QuvDXZrKu8VH+DLR9rsuVErwL3KQWIu2edoerqCIXrzmh+jrhKzrYWQVf0rbAXR5C9EAL56ICtpX4jAUqZo1fgPnzL6n5g==',key_name='tempest-keypair-1140258528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d091687ce954cb1b60b66f0e250a2a6',ramdisk_id='',reservation_id='r-0ext81cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-500075976',owner_user_name='tempest-VolumesBackupsTest-500075976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:21:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='550f7240611f4009aa1ef70200760184',uuid=a1553556-dd0c-4271-b7de-2c5739155591,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.323 243708 DEBUG nova.network.os_vif_util [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Converting VIF {"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.324 243708 DEBUG nova.network.os_vif_util [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a1:58,bridge_name='br-int',has_traffic_filtering=True,id=81069290-53e4-4b72-85b9-c14104457590,network=Network(01e9047f-f5cf-4bd5-a58c-1b5ed80cec97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81069290-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.325 243708 DEBUG nova.objects.instance [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1553556-dd0c-4271-b7de-2c5739155591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.336 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <uuid>a1553556-dd0c-4271-b7de-2c5739155591</uuid>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <name>instance-00000016</name>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <nova:name>tempest-instance-1970772568</nova:name>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:21:57</nova:creationTime>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:user uuid="550f7240611f4009aa1ef70200760184">tempest-VolumesBackupsTest-500075976-project-member</nova:user>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:project uuid="4d091687ce954cb1b60b66f0e250a2a6">tempest-VolumesBackupsTest-500075976</nova:project>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <nova:port uuid="81069290-53e4-4b72-85b9-c14104457590">
Dec 13 04:21:58 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <system>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <entry name="serial">a1553556-dd0c-4271-b7de-2c5739155591</entry>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <entry name="uuid">a1553556-dd0c-4271-b7de-2c5739155591</entry>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </system>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <os>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   </os>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <features>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   </features>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/a1553556-dd0c-4271-b7de-2c5739155591_disk">
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </source>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/a1553556-dd0c-4271-b7de-2c5739155591_disk.config">
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </source>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-e07eb99d-7eb7-4de2-8fa7-89833d3e3f15">
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </source>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:21:58 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <target dev="vdb" bus="virtio"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <serial>e07eb99d-7eb7-4de2-8fa7-89833d3e3f15</serial>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:3c:a1:58"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <target dev="tap81069290-53"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/console.log" append="off"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <video>
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </video>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:21:58 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:21:58 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:21:58 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:21:58 compute-0 nova_compute[243704]: </domain>
Dec 13 04:21:58 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.339 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Preparing to wait for external event network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.339 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.339 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.340 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.340 243708 DEBUG nova.virt.libvirt.vif [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:21:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-instance-1970772568',display_name='tempest-instance-1970772568',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1970772568',id=22,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiGEwyTdooaKbAVWN0g8c6leJ40yeXokRq3QuvDXZrKu8VH+DLR9rsuVErwL3KQWIu2edoerqCIXrzmh+jrhKzrYWQVf0rbAXR5C9EAL56ICtpX4jAUqZo1fgPnzL6n5g==',key_name='tempest-keypair-1140258528',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='4d091687ce954cb1b60b66f0e250a2a6',ramdisk_id='',reservation_id='r-0ext81cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesBackupsTest-500075976',owner_user_name='tempest-VolumesBackupsTest-500075976-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:21:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='550f7240611f4009aa1ef70200760184',uuid=a1553556-dd0c-4271-b7de-2c5739155591,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.341 243708 DEBUG nova.network.os_vif_util [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Converting VIF {"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.342 243708 DEBUG nova.network.os_vif_util [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a1:58,bridge_name='br-int',has_traffic_filtering=True,id=81069290-53e4-4b72-85b9-c14104457590,network=Network(01e9047f-f5cf-4bd5-a58c-1b5ed80cec97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81069290-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.342 243708 DEBUG os_vif [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a1:58,bridge_name='br-int',has_traffic_filtering=True,id=81069290-53e4-4b72-85b9-c14104457590,network=Network(01e9047f-f5cf-4bd5-a58c-1b5ed80cec97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81069290-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.343 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.344 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.344 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.348 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.349 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap81069290-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.349 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap81069290-53, col_values=(('external_ids', {'iface-id': '81069290-53e4-4b72-85b9-c14104457590', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3c:a1:58', 'vm-uuid': 'a1553556-dd0c-4271-b7de-2c5739155591'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.351 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:58 compute-0 NetworkManager[48899]: <info>  [1765599718.3524] manager: (tap81069290-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/112)
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.354 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.360 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.361 243708 INFO os_vif [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3c:a1:58,bridge_name='br-int',has_traffic_filtering=True,id=81069290-53e4-4b72-85b9-c14104457590,network=Network(01e9047f-f5cf-4bd5-a58c-1b5ed80cec97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81069290-53')
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.412 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.413 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.413 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.413 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] No VIF found with MAC fa:16:3e:3c:a1:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.414 243708 INFO nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Using config drive
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.439 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.842 243708 INFO nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Creating config drive at /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/disk.config
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.855 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputlz1b0p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:58 compute-0 nova_compute[243704]: 2025-12-13 04:21:58.995 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmputlz1b0p" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:21:59 compute-0 nova_compute[243704]: 2025-12-13 04:21:59.029 243708 DEBUG nova.storage.rbd_utils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] rbd image a1553556-dd0c-4271-b7de-2c5739155591_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:21:59 compute-0 nova_compute[243704]: 2025-12-13 04:21:59.035 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/disk.config a1553556-dd0c-4271-b7de-2c5739155591_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:21:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e383 do_prune osdmap full prune enabled
Dec 13 04:21:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 13 04:21:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e384 e384: 3 total, 3 up, 3 in
Dec 13 04:22:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e384: 3 total, 3 up, 3 in
Dec 13 04:22:01 compute-0 ceph-mon[75071]: pgmap v1521: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 423 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.203 243708 DEBUG oslo_concurrency.processutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/disk.config a1553556-dd0c-4271-b7de-2c5739155591_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.168s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.203 243708 INFO nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Deleting local config drive /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591/disk.config because it was imported into RBD.
Dec 13 04:22:01 compute-0 kernel: tap81069290-53: entered promiscuous mode
Dec 13 04:22:01 compute-0 NetworkManager[48899]: <info>  [1765599721.2643] manager: (tap81069290-53): new Tun device (/org/freedesktop/NetworkManager/Devices/113)
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.265 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:01 compute-0 ovn_controller[145204]: 2025-12-13T04:22:01Z|00198|binding|INFO|Claiming lport 81069290-53e4-4b72-85b9-c14104457590 for this chassis.
Dec 13 04:22:01 compute-0 ovn_controller[145204]: 2025-12-13T04:22:01Z|00199|binding|INFO|81069290-53e4-4b72-85b9-c14104457590: Claiming fa:16:3e:3c:a1:58 10.100.0.12
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.273 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:a1:58 10.100.0.12'], port_security=['fa:16:3e:3c:a1:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a1553556-dd0c-4271-b7de-2c5739155591', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d091687ce954cb1b60b66f0e250a2a6', 'neutron:revision_number': '2', 'neutron:security_group_ids': '01d1227f-16cd-4990-95d6-fa037ef961a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6f2fb8-2f33-4c9d-9392-7c4537f332df, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=81069290-53e4-4b72-85b9-c14104457590) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.274 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 81069290-53e4-4b72-85b9-c14104457590 in datapath 01e9047f-f5cf-4bd5-a58c-1b5ed80cec97 bound to our chassis
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.276 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 01e9047f-f5cf-4bd5-a58c-1b5ed80cec97
Dec 13 04:22:01 compute-0 ovn_controller[145204]: 2025-12-13T04:22:01Z|00200|binding|INFO|Setting lport 81069290-53e4-4b72-85b9-c14104457590 ovn-installed in OVS
Dec 13 04:22:01 compute-0 ovn_controller[145204]: 2025-12-13T04:22:01Z|00201|binding|INFO|Setting lport 81069290-53e4-4b72-85b9-c14104457590 up in Southbound
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.284 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.290 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfb3caf-3704-4453-a539-7cfb33f8aae8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.290 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap01e9047f-f1 in ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.294 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap01e9047f-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.294 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[80a68b97-5497-41e2-be81-cf756523b614]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.296 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e37e3e5b-db1f-4a8c-b608-95a200acd255]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 systemd-machined[206767]: New machine qemu-22-instance-00000016.
Dec 13 04:22:01 compute-0 systemd[1]: Started Virtual Machine qemu-22-instance-00000016.
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.312 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[3a619986-50a7-4949-9cb4-23350e0779fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 systemd-udevd[271084]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:22:01 compute-0 NetworkManager[48899]: <info>  [1765599721.3380] device (tap81069290-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.338 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[45ac5feb-77f0-42bf-9f2b-41d24141bee7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 NetworkManager[48899]: <info>  [1765599721.3408] device (tap81069290-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.375 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[65ebe83f-20b4-4e35-8cee-37825df5ed4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.383 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3c7edd-f61b-4a5c-9e95-a93f5b57bc89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 NetworkManager[48899]: <info>  [1765599721.3844] manager: (tap01e9047f-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/114)
Dec 13 04:22:01 compute-0 systemd-udevd[271088]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.422 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[333f39e8-1790-4a24-8eba-2114aa5421bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.425 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[fec0455e-bf58-4dd1-a82d-3263840a93db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 NetworkManager[48899]: <info>  [1765599721.4509] device (tap01e9047f-f0): carrier: link connected
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.455 243708 DEBUG nova.compute.manager [req-2fef525f-748a-4f00-8298-d1fc793f3ccc req-bd014094-3be1-4d08-b7d3-4927ca272930 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.455 243708 DEBUG oslo_concurrency.lockutils [req-2fef525f-748a-4f00-8298-d1fc793f3ccc req-bd014094-3be1-4d08-b7d3-4927ca272930 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.456 243708 DEBUG oslo_concurrency.lockutils [req-2fef525f-748a-4f00-8298-d1fc793f3ccc req-bd014094-3be1-4d08-b7d3-4927ca272930 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.457 243708 DEBUG oslo_concurrency.lockutils [req-2fef525f-748a-4f00-8298-d1fc793f3ccc req-bd014094-3be1-4d08-b7d3-4927ca272930 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.457 243708 DEBUG nova.compute.manager [req-2fef525f-748a-4f00-8298-d1fc793f3ccc req-bd014094-3be1-4d08-b7d3-4927ca272930 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Processing event network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.460 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[bd65dfc7-c07f-41d5-9172-12bc664a68e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.477 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e298050d-b1be-438b-b91a-9589b36627f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01e9047f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446012, 'reachable_time': 37165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271115, 'error': None, 'target': 'ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.496 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8b63cd0e-3930-42a1-9961-749b408af7c3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:7e66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 446012, 'tstamp': 446012}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 271116, 'error': None, 'target': 'ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.511 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[de938d8a-651f-4da9-ada4-e62405fd8a0e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap01e9047f-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 68], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446012, 'reachable_time': 37165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 271117, 'error': None, 'target': 'ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.545 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4c94db60-b69c-4b6c-957e-ddc3e8c9ac76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.609 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[29cec5dd-3dc1-46d8-890e-89541b99c2a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.611 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01e9047f-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.611 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.611 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap01e9047f-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.613 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:01 compute-0 NetworkManager[48899]: <info>  [1765599721.6140] manager: (tap01e9047f-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/115)
Dec 13 04:22:01 compute-0 kernel: tap01e9047f-f0: entered promiscuous mode
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.615 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.622 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap01e9047f-f0, col_values=(('external_ids', {'iface-id': '2da1d29f-b6c5-4ed2-b62c-1403a63a2a53'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:01 compute-0 ovn_controller[145204]: 2025-12-13T04:22:01Z|00202|binding|INFO|Releasing lport 2da1d29f-b6c5-4ed2-b62c-1403a63a2a53 from this chassis (sb_readonly=0)
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.623 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.624 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.637 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/01e9047f-f5cf-4bd5-a58c-1b5ed80cec97.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/01e9047f-f5cf-4bd5-a58c-1b5ed80cec97.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:22:01 compute-0 nova_compute[243704]: 2025-12-13 04:22:01.638 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.639 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b9311114-6aab-4419-9251-1ad866063e3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.640 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/01e9047f-f5cf-4bd5-a58c-1b5ed80cec97.pid.haproxy
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 01e9047f-f5cf-4bd5-a58c-1b5ed80cec97
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:22:01 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:01.641 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'env', 'PROCESS_TAG=haproxy-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/01e9047f-f5cf-4bd5-a58c-1b5ed80cec97.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:22:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 47 KiB/s rd, 2.7 MiB/s wr, 62 op/s
Dec 13 04:22:02 compute-0 podman[271208]: 2025-12-13 04:22:02.019481667 +0000 UTC m=+0.058131457 container create dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:22:02 compute-0 systemd[1]: Started libpod-conmon-dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3.scope.
Dec 13 04:22:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e384 do_prune osdmap full prune enabled
Dec 13 04:22:02 compute-0 ceph-mon[75071]: pgmap v1522: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 38 KiB/s rd, 2.1 MiB/s wr, 50 op/s
Dec 13 04:22:02 compute-0 ceph-mon[75071]: osdmap e384: 3 total, 3 up, 3 in
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.064 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.065 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599722.0637796, a1553556-dd0c-4271-b7de-2c5739155591 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.065 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] VM Started (Lifecycle Event)
Dec 13 04:22:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e385 e385: 3 total, 3 up, 3 in
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.073 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:22:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2999b426e88c862b3f4f921219a63a3592f688e4cb25a253d19ecf41a92eeec/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:02 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e385: 3 total, 3 up, 3 in
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.083 243708 INFO nova.virt.libvirt.driver [-] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Instance spawned successfully.
Dec 13 04:22:02 compute-0 podman[271208]: 2025-12-13 04:22:01.989906795 +0000 UTC m=+0.028556605 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.084 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.092 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:02 compute-0 podman[271208]: 2025-12-13 04:22:02.09443997 +0000 UTC m=+0.133089810 container init dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.098 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:22:02 compute-0 podman[271208]: 2025-12-13 04:22:02.103451264 +0000 UTC m=+0.142101074 container start dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.111 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.112 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.112 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.112 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.113 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.113 243708 DEBUG nova.virt.libvirt.driver [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.116 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.117 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599722.065024, a1553556-dd0c-4271-b7de-2c5739155591 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.117 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] VM Paused (Lifecycle Event)
Dec 13 04:22:02 compute-0 neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97[271224]: [NOTICE]   (271228) : New worker (271230) forked
Dec 13 04:22:02 compute-0 neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97[271224]: [NOTICE]   (271228) : Loading success.
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.154 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.266 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599722.069174, a1553556-dd0c-4271-b7de-2c5739155591 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.267 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] VM Resumed (Lifecycle Event)
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.271 243708 INFO nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Took 8.61 seconds to spawn the instance on the hypervisor.
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.271 243708 DEBUG nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.291 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.293 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.328 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.347 243708 INFO nova.compute.manager [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Took 10.97 seconds to build instance.
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.363 243708 DEBUG oslo_concurrency.lockutils [None req-8efe5936-1839-4104-84f9-306522363145 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e385 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:02 compute-0 nova_compute[243704]: 2025-12-13 04:22:02.933 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:03 compute-0 nova_compute[243704]: 2025-12-13 04:22:03.438 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:03 compute-0 ceph-mon[75071]: pgmap v1524: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 47 KiB/s rd, 2.7 MiB/s wr, 62 op/s
Dec 13 04:22:03 compute-0 ceph-mon[75071]: osdmap e385: 3 total, 3 up, 3 in
Dec 13 04:22:03 compute-0 nova_compute[243704]: 2025-12-13 04:22:03.515 243708 DEBUG nova.compute.manager [req-2ab6deed-2785-4c78-b586-6d35807cd655 req-5263b9e8-37d4-424f-98c0-f3f7c2f4b8a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:03 compute-0 nova_compute[243704]: 2025-12-13 04:22:03.516 243708 DEBUG oslo_concurrency.lockutils [req-2ab6deed-2785-4c78-b586-6d35807cd655 req-5263b9e8-37d4-424f-98c0-f3f7c2f4b8a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:03 compute-0 nova_compute[243704]: 2025-12-13 04:22:03.517 243708 DEBUG oslo_concurrency.lockutils [req-2ab6deed-2785-4c78-b586-6d35807cd655 req-5263b9e8-37d4-424f-98c0-f3f7c2f4b8a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:03 compute-0 nova_compute[243704]: 2025-12-13 04:22:03.517 243708 DEBUG oslo_concurrency.lockutils [req-2ab6deed-2785-4c78-b586-6d35807cd655 req-5263b9e8-37d4-424f-98c0-f3f7c2f4b8a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:03 compute-0 nova_compute[243704]: 2025-12-13 04:22:03.517 243708 DEBUG nova.compute.manager [req-2ab6deed-2785-4c78-b586-6d35807cd655 req-5263b9e8-37d4-424f-98c0-f3f7c2f4b8a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] No waiting events found dispatching network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:03 compute-0 nova_compute[243704]: 2025-12-13 04:22:03.518 243708 WARNING nova.compute.manager [req-2ab6deed-2785-4c78-b586-6d35807cd655 req-5263b9e8-37d4-424f-98c0-f3f7c2f4b8a6 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received unexpected event network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 for instance with vm_state active and task_state None.
Dec 13 04:22:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.3 MiB/s wr, 124 op/s
Dec 13 04:22:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e385 do_prune osdmap full prune enabled
Dec 13 04:22:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 212 op/s
Dec 13 04:22:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e386 e386: 3 total, 3 up, 3 in
Dec 13 04:22:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 27 KiB/s wr, 151 op/s
Dec 13 04:22:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e386: 3 total, 3 up, 3 in
Dec 13 04:22:08 compute-0 nova_compute[243704]: 2025-12-13 04:22:08.046 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:08 compute-0 podman[271239]: 2025-12-13 04:22:08.074125445 +0000 UTC m=+0.205493773 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Dec 13 04:22:08 compute-0 ceph-mon[75071]: pgmap v1526: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.3 MiB/s wr, 124 op/s
Dec 13 04:22:08 compute-0 nova_compute[243704]: 2025-12-13 04:22:08.440 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:08 compute-0 nova_compute[243704]: 2025-12-13 04:22:08.965 243708 DEBUG oslo_concurrency.lockutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:08 compute-0 nova_compute[243704]: 2025-12-13 04:22:08.966 243708 DEBUG oslo_concurrency.lockutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:08 compute-0 nova_compute[243704]: 2025-12-13 04:22:08.989 243708 DEBUG nova.objects.instance [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'flavor' on Instance uuid 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.026 243708 DEBUG oslo_concurrency.lockutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.060s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.575 243708 DEBUG nova.compute.manager [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-changed-81069290-53e4-4b72-85b9-c14104457590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.576 243708 DEBUG nova.compute.manager [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Refreshing instance network info cache due to event network-changed-81069290-53e4-4b72-85b9-c14104457590. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.576 243708 DEBUG oslo_concurrency.lockutils [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.576 243708 DEBUG oslo_concurrency.lockutils [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.577 243708 DEBUG nova.network.neutron [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Refreshing network info cache for port 81069290-53e4-4b72-85b9-c14104457590 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:22:09 compute-0 ceph-mon[75071]: pgmap v1527: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.7 MiB/s wr, 212 op/s
Dec 13 04:22:09 compute-0 ceph-mon[75071]: pgmap v1528: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 27 KiB/s wr, 151 op/s
Dec 13 04:22:09 compute-0 ceph-mon[75071]: osdmap e386: 3 total, 3 up, 3 in
Dec 13 04:22:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 28 KiB/s wr, 153 op/s
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.832 243708 DEBUG oslo_concurrency.lockutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.835 243708 DEBUG oslo_concurrency.lockutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.836 243708 INFO nova.compute.manager [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Attaching volume 79579c3e-60a9-4798-bd84-84aad7d26057 to /dev/vdb
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.989 243708 DEBUG os_brick.utils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:22:09 compute-0 nova_compute[243704]: 2025-12-13 04:22:09.993 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.006 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.007 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[adc9db54-1af5-4e76-8fef-c005d47e83e0]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.009 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.020 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.021 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[141711d8-c189-4bcf-a2ce-4af21c66243a]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.024 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.037 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.037 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[e828b621-473d-4c3c-be54-d0254558bb1e]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.038 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[40b1e135-08e9-4d01-935f-cf6f496d24f6]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.039 243708 DEBUG oslo_concurrency.processutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.072 243708 DEBUG oslo_concurrency.processutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.075 243708 DEBUG os_brick.initiator.connectors.lightos [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.075 243708 DEBUG os_brick.initiator.connectors.lightos [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.075 243708 DEBUG os_brick.initiator.connectors.lightos [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.076 243708 DEBUG os_brick.utils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] <== get_connector_properties: return (85ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.076 243708 DEBUG nova.virt.block_device [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updating existing volume attachment record: 3e88fc11-398d-4a97-b44e-ccebade194e7 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.658 243708 DEBUG nova.network.neutron [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Updated VIF entry in instance network info cache for port 81069290-53e4-4b72-85b9-c14104457590. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.659 243708 DEBUG nova.network.neutron [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Updating instance_info_cache with network_info: [{"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:22:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e386 do_prune osdmap full prune enabled
Dec 13 04:22:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e387 e387: 3 total, 3 up, 3 in
Dec 13 04:22:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e387: 3 total, 3 up, 3 in
Dec 13 04:22:10 compute-0 nova_compute[243704]: 2025-12-13 04:22:10.693 243708 DEBUG oslo_concurrency.lockutils [req-7b991be0-a8ad-49f1-a82e-037d6745f039 req-18442f18-1043-4faf-99ca-0fa89f3b3033 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-a1553556-dd0c-4271-b7de-2c5739155591" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:22:10 compute-0 ceph-mon[75071]: pgmap v1530: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.9 MiB/s rd, 28 KiB/s wr, 153 op/s
Dec 13 04:22:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:22:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3600358829' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.063 243708 DEBUG os_brick.encryptors [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Using volume encryption metadata '{'encryption_key_id': 'fd1a8b7f-bdf3-429b-9285-c993a3ecba2b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-79579c3e-60a9-4798-bd84-84aad7d26057', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '79579c3e-60a9-4798-bd84-84aad7d26057', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '76079c07-6caa-4f82-8285-1ce2d2f6c0a8', 'attached_at': '', 'detached_at': '', 'volume_id': '79579c3e-60a9-4798-bd84-84aad7d26057', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.077 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.094 243708 DEBUG barbicanclient.v1.secrets [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.095 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.118 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.118 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.139 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.140 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.159 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.160 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.179 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.180 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.204 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.205 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.240 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.241 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.277 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.278 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.299 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.299 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.317 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.319 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.337 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.337 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.355 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.355 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.372 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.372 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.396 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.397 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.418 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.419 243708 INFO barbicanclient.base [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/fd1a8b7f-bdf3-429b-9285-c993a3ecba2b
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.455 243708 DEBUG barbicanclient.client [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.456 243708 DEBUG nova.virt.libvirt.host [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:22:11 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:22:11 compute-0 nova_compute[243704]:     <volume>79579c3e-60a9-4798-bd84-84aad7d26057</volume>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:22:11 compute-0 nova_compute[243704]: </secret>
Dec 13 04:22:11 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:22:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 112 op/s
Dec 13 04:22:11 compute-0 ceph-mon[75071]: osdmap e387: 3 total, 3 up, 3 in
Dec 13 04:22:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3600358829' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.823 243708 DEBUG nova.objects.instance [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'flavor' on Instance uuid 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.844 243708 DEBUG nova.virt.libvirt.driver [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Attempting to attach volume 79579c3e-60a9-4798-bd84-84aad7d26057 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:22:11 compute-0 nova_compute[243704]: 2025-12-13 04:22:11.849 243708 DEBUG nova.virt.libvirt.guest [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:22:11 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-79579c3e-60a9-4798-bd84-84aad7d26057">
Dec 13 04:22:11 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   </source>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:22:11 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   <serial>79579c3e-60a9-4798-bd84-84aad7d26057</serial>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:22:11 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="d91bdb13-6484-40cc-b1af-ff4ba4e76382"/>
Dec 13 04:22:11 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:22:11 compute-0 nova_compute[243704]: </disk>
Dec 13 04:22:11 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:22:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e387 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e387 do_prune osdmap full prune enabled
Dec 13 04:22:13 compute-0 nova_compute[243704]: 2025-12-13 04:22:13.048 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e388 e388: 3 total, 3 up, 3 in
Dec 13 04:22:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e388: 3 total, 3 up, 3 in
Dec 13 04:22:13 compute-0 ceph-mon[75071]: pgmap v1532: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 112 op/s
Dec 13 04:22:13 compute-0 nova_compute[243704]: 2025-12-13 04:22:13.443 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Dec 13 04:22:14 compute-0 ceph-mon[75071]: osdmap e388: 3 total, 3 up, 3 in
Dec 13 04:22:14 compute-0 ceph-mon[75071]: pgmap v1534: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 4.0 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Dec 13 04:22:14 compute-0 nova_compute[243704]: 2025-12-13 04:22:14.912 243708 DEBUG nova.virt.libvirt.driver [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:22:14 compute-0 nova_compute[243704]: 2025-12-13 04:22:14.912 243708 DEBUG nova.virt.libvirt.driver [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:22:14 compute-0 nova_compute[243704]: 2025-12-13 04:22:14.912 243708 DEBUG nova.virt.libvirt.driver [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:22:14 compute-0 nova_compute[243704]: 2025-12-13 04:22:14.913 243708 DEBUG nova.virt.libvirt.driver [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No VIF found with MAC fa:16:3e:d1:11:05, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:22:15 compute-0 nova_compute[243704]: 2025-12-13 04:22:15.376 243708 DEBUG oslo_concurrency.lockutils [None req-94118a64-f37e-4dc5-8447-8d2af4c5a627 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 5.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.392 243708 DEBUG oslo_concurrency.lockutils [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.393 243708 DEBUG oslo_concurrency.lockutils [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.406 243708 INFO nova.compute.manager [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Detaching volume 2e5dffee-496b-489e-a090-9b3ef09a90d6
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.526 243708 DEBUG oslo_concurrency.lockutils [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.527 243708 DEBUG oslo_concurrency.lockutils [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.535 243708 INFO nova.virt.block_device [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Attempting to driver detach volume 2e5dffee-496b-489e-a090-9b3ef09a90d6 from mountpoint /dev/vdb
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.539 243708 INFO nova.compute.manager [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Detaching volume 79579c3e-60a9-4798-bd84-84aad7d26057
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.549 243708 DEBUG nova.virt.libvirt.driver [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Attempting to detach device vdb from instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.550 243708 DEBUG nova.virt.libvirt.guest [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-2e5dffee-496b-489e-a090-9b3ef09a90d6">
Dec 13 04:22:16 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   </source>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <serial>2e5dffee-496b-489e-a090-9b3ef09a90d6</serial>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]: </disk>
Dec 13 04:22:16 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.708 243708 INFO nova.virt.block_device [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Attempting to driver detach volume 79579c3e-60a9-4798-bd84-84aad7d26057 from mountpoint /dev/vdb
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.828 243708 DEBUG os_brick.encryptors [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Using volume encryption metadata '{'encryption_key_id': 'fd1a8b7f-bdf3-429b-9285-c993a3ecba2b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-79579c3e-60a9-4798-bd84-84aad7d26057', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '79579c3e-60a9-4798-bd84-84aad7d26057', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '76079c07-6caa-4f82-8285-1ce2d2f6c0a8', 'attached_at': '', 'detached_at': '', 'volume_id': '79579c3e-60a9-4798-bd84-84aad7d26057', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.836 243708 DEBUG nova.virt.libvirt.driver [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Attempting to detach device vdb from instance 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.836 243708 DEBUG nova.virt.libvirt.guest [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-79579c3e-60a9-4798-bd84-84aad7d26057">
Dec 13 04:22:16 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   </source>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <serial>79579c3e-60a9-4798-bd84-84aad7d26057</serial>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:22:16 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="d91bdb13-6484-40cc-b1af-ff4ba4e76382"/>
Dec 13 04:22:16 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:22:16 compute-0 nova_compute[243704]: </disk>
Dec 13 04:22:16 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:16 compute-0 nova_compute[243704]: 2025-12-13 04:22:16.909 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:17 compute-0 sudo[271292]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:22:17 compute-0 sudo[271292]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.561 243708 INFO nova.virt.libvirt.driver [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully detached device vdb from instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f from the persistent domain config.
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.562 243708 DEBUG nova.virt.libvirt.driver [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.562 243708 DEBUG nova.virt.libvirt.guest [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-2e5dffee-496b-489e-a090-9b3ef09a90d6">
Dec 13 04:22:17 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   </source>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <serial>2e5dffee-496b-489e-a090-9b3ef09a90d6</serial>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]: </disk>
Dec 13 04:22:17 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:22:17 compute-0 sudo[271292]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.579 243708 INFO nova.virt.libvirt.driver [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully detached device vdb from instance 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 from the persistent domain config.
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.580 243708 DEBUG nova.virt.libvirt.driver [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.580 243708 DEBUG nova.virt.libvirt.guest [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-79579c3e-60a9-4798-bd84-84aad7d26057">
Dec 13 04:22:17 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   </source>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <serial>79579c3e-60a9-4798-bd84-84aad7d26057</serial>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   <encryption format="luks">
Dec 13 04:22:17 compute-0 nova_compute[243704]:     <secret type="passphrase" uuid="d91bdb13-6484-40cc-b1af-ff4ba4e76382"/>
Dec 13 04:22:17 compute-0 nova_compute[243704]:   </encryption>
Dec 13 04:22:17 compute-0 nova_compute[243704]: </disk>
Dec 13 04:22:17 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:22:17 compute-0 sudo[271318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:22:17 compute-0 sudo[271318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:17 compute-0 podman[271316]: 2025-12-13 04:22:17.652815723 +0000 UTC m=+0.076489264 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 04:22:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 40 op/s
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:22:17 compute-0 nova_compute[243704]: 2025-12-13 04:22:17.905 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.051 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.051 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.053 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.054 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.057 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e388 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e388 do_prune osdmap full prune enabled
Dec 13 04:22:18 compute-0 sudo[271318]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:22:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:22:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:22:18 compute-0 ceph-mon[75071]: pgmap v1535: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 39 KiB/s rd, 6.0 KiB/s wr, 42 op/s
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e389 e389: 3 total, 3 up, 3 in
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.445 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.456 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599738.456586, 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.458 243708 DEBUG nova.virt.libvirt.driver [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.462 243708 INFO nova.virt.libvirt.driver [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully detached device vdb from instance 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 from the live domain config.
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.463 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599738.4632666, 82d113ec-d32a-4dd6-b8f4-bab622ea377f => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.465 243708 DEBUG nova.virt.libvirt.driver [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.467 243708 INFO nova.virt.libvirt.driver [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully detached device vdb from instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f from the live domain config.
Dec 13 04:22:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:22:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e389: 3 total, 3 up, 3 in
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:22:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:22:18 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:22:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:22:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:22:18 compute-0 sudo[271394]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:22:18 compute-0 sudo[271394]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:18 compute-0 sudo[271394]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:18 compute-0 sudo[271419]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:22:18 compute-0 sudo[271419]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.821 243708 DEBUG nova.objects.instance [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'flavor' on Instance uuid 82d113ec-d32a-4dd6-b8f4-bab622ea377f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:18 compute-0 nova_compute[243704]: 2025-12-13 04:22:18.824 243708 DEBUG nova.objects.instance [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'flavor' on Instance uuid 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:19 compute-0 podman[271455]: 2025-12-13 04:22:18.945076203 +0000 UTC m=+0.029428579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.270 243708 DEBUG oslo_concurrency.lockutils [None req-ed4679da-c5fb-4eae-a4d3-8123865f9e04 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.272 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 2.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.272 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.272 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.272 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.274 243708 INFO nova.compute.manager [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Terminating instance
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.274 243708 DEBUG nova.compute.manager [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.434 243708 DEBUG oslo_concurrency.lockutils [None req-c86d0a80-5bc1-4ad8-b073-82c5f7eb0c60 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 2.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.614 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updating instance_info_cache with network_info: [{"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:22:19 compute-0 podman[271455]: 2025-12-13 04:22:19.7087742 +0000 UTC m=+0.793126586 container create ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:22:19 compute-0 ceph-mon[75071]: pgmap v1536: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 37 KiB/s rd, 4.4 KiB/s wr, 40 op/s
Dec 13 04:22:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:22:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:22:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:22:19 compute-0 ceph-mon[75071]: osdmap e389: 3 total, 3 up, 3 in
Dec 13 04:22:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:22:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:22:19 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.725 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-76079c07-6caa-4f82-8285-1ce2d2f6c0a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.728 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.729 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.730 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.755 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.756 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.756 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.756 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.756 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:19 compute-0 systemd[1]: Started libpod-conmon-ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740.scope.
Dec 13 04:22:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 141 KiB/s rd, 3.0 MiB/s wr, 92 op/s
Dec 13 04:22:19 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:19 compute-0 podman[271455]: 2025-12-13 04:22:19.850053861 +0000 UTC m=+0.934406217 container init ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_euler, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:22:19 compute-0 podman[271455]: 2025-12-13 04:22:19.857612225 +0000 UTC m=+0.941964581 container start ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_euler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:22:19 compute-0 intelligent_euler[271472]: 167 167
Dec 13 04:22:19 compute-0 systemd[1]: libpod-ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740.scope: Deactivated successfully.
Dec 13 04:22:19 compute-0 kernel: tap39966274-17 (unregistering): left promiscuous mode
Dec 13 04:22:19 compute-0 NetworkManager[48899]: <info>  [1765599739.8736] device (tap39966274-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:22:19 compute-0 ovn_controller[145204]: 2025-12-13T04:22:19Z|00203|binding|INFO|Releasing lport 39966274-17ef-4b21-91cd-f57096630a08 from this chassis (sb_readonly=0)
Dec 13 04:22:19 compute-0 ovn_controller[145204]: 2025-12-13T04:22:19Z|00204|binding|INFO|Setting lport 39966274-17ef-4b21-91cd-f57096630a08 down in Southbound
Dec 13 04:22:19 compute-0 ovn_controller[145204]: 2025-12-13T04:22:19Z|00205|binding|INFO|Removing iface tap39966274-17 ovn-installed in OVS
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.939 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:19 compute-0 nova_compute[243704]: 2025-12-13 04:22:19.952 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:19 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Deactivated successfully.
Dec 13 04:22:19 compute-0 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d00000014.scope: Consumed 18.725s CPU time.
Dec 13 04:22:19 compute-0 systemd-machined[206767]: Machine qemu-20-instance-00000014 terminated.
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.036 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:be:bc:47 10.100.0.5'], port_security=['fa:16:3e:be:bc:47 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '82d113ec-d32a-4dd6-b8f4-bab622ea377f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75b261e8b1c44ab8b079f57244a812c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f05bcc9b-944b-48d9-ae53-ba48ad133a97', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d2e886-04ee-44a8-8e42-fd2f33ff96d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=39966274-17ef-4b21-91cd-f57096630a08) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.038 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 39966274-17ef-4b21-91cd-f57096630a08 in datapath 0f93b436-b78f-4a08-8363-5ff70f1f85b9 unbound from our chassis
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.040 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f93b436-b78f-4a08-8363-5ff70f1f85b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:22:20 compute-0 podman[271455]: 2025-12-13 04:22:20.040512885 +0000 UTC m=+1.124865251 container attach ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:22:20 compute-0 podman[271455]: 2025-12-13 04:22:20.041457881 +0000 UTC m=+1.125810237 container died ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_euler, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.041 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d3c5bb38-db18-4237-a1a3-251f5bd8accf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.041 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 namespace which is not needed anymore
Dec 13 04:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-b9292d3152cce5a4c3accecc857852a2fc67462fc8d5bcf8dc3fc3c350596c97-merged.mount: Deactivated successfully.
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.106 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.107 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.107 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.107 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.107 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.110 243708 INFO nova.compute.manager [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Terminating instance
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.112 243708 DEBUG nova.compute.manager [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:22:20 compute-0 podman[271455]: 2025-12-13 04:22:20.126935098 +0000 UTC m=+1.211287454 container remove ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_euler, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.125 243708 INFO nova.virt.libvirt.driver [-] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Instance destroyed successfully.
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.133 243708 DEBUG nova.objects.instance [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'resources' on Instance uuid 82d113ec-d32a-4dd6-b8f4-bab622ea377f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:20 compute-0 systemd[1]: libpod-conmon-ab801ecbef0a0a844b4794e8fffe50589de218e3c7f274c7053c9e8caeda0740.scope: Deactivated successfully.
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.147 243708 DEBUG nova.virt.libvirt.vif [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:21:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1073473030',display_name='tempest-VolumesSnapshotTestJSON-instance-1073473030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1073473030',id=20,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKPnTXWA/nRRsLHWrwXAHms3VRz6l/lbjc4hB16QPBAiHUqhG7ID6+zyLAzbkNvKYrpOjixr8f39czXdR92AR1H4axBtfRdy5Zuwva9dLrUra+4xXkGSoq6ZlYmYAsOcrQ==',key_name='tempest-keypair-1808277459',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:21:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75b261e8b1c44ab8b079f57244a812c7',ramdisk_id='',reservation_id='r-f8p6ns36',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-524347860',owner_user_name='tempest-VolumesSnapshotTestJSON-524347860-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:21:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95b4d334bdca4149b6fe3499375d46e6',uuid=82d113ec-d32a-4dd6-b8f4-bab622ea377f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.148 243708 DEBUG nova.network.os_vif_util [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converting VIF {"id": "39966274-17ef-4b21-91cd-f57096630a08", "address": "fa:16:3e:be:bc:47", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap39966274-17", "ovs_interfaceid": "39966274-17ef-4b21-91cd-f57096630a08", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.148 243708 DEBUG nova.network.os_vif_util [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:be:bc:47,bridge_name='br-int',has_traffic_filtering=True,id=39966274-17ef-4b21-91cd-f57096630a08,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39966274-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.149 243708 DEBUG os_vif [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:bc:47,bridge_name='br-int',has_traffic_filtering=True,id=39966274-17ef-4b21-91cd-f57096630a08,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39966274-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.153 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.154 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap39966274-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.156 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.159 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.161 243708 INFO os_vif [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:be:bc:47,bridge_name='br-int',has_traffic_filtering=True,id=39966274-17ef-4b21-91cd-f57096630a08,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap39966274-17')
Dec 13 04:22:20 compute-0 kernel: tap7ab6a504-51 (unregistering): left promiscuous mode
Dec 13 04:22:20 compute-0 NetworkManager[48899]: <info>  [1765599740.1673] device (tap7ab6a504-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:22:20 compute-0 ovn_controller[145204]: 2025-12-13T04:22:20Z|00206|binding|INFO|Releasing lport 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 from this chassis (sb_readonly=0)
Dec 13 04:22:20 compute-0 ovn_controller[145204]: 2025-12-13T04:22:20Z|00207|binding|INFO|Setting lport 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 down in Southbound
Dec 13 04:22:20 compute-0 ovn_controller[145204]: 2025-12-13T04:22:20Z|00208|binding|INFO|Removing iface tap7ab6a504-51 ovn-installed in OVS
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.178 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d1:11:05 10.100.0.14'], port_security=['fa:16:3e:d1:11:05 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '76079c07-6caa-4f82-8285-1ce2d2f6c0a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ad8ea73576b4cf9aad3a876effca617', 'neutron:revision_number': '4', 'neutron:security_group_ids': '839929bc-ac81-4da1-84c1-1de9fc403e53', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.210'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3450aaa3-6969-42ec-bd5e-da6d6d1d73eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=7ab6a504-5168-444a-8e2d-d3cfb84bbe35) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.193 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[270006]: [NOTICE]   (270012) : haproxy version is 2.8.14-c23fe91
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[270006]: [NOTICE]   (270012) : path to executable is /usr/sbin/haproxy
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[270006]: [ALERT]    (270012) : Current worker (270017) exited with code 143 (Terminated)
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[270006]: [WARNING]  (270012) : All workers exited. Exiting... (0)
Dec 13 04:22:20 compute-0 systemd[1]: libpod-ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8.scope: Deactivated successfully.
Dec 13 04:22:20 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Deactivated successfully.
Dec 13 04:22:20 compute-0 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000015.scope: Consumed 19.310s CPU time.
Dec 13 04:22:20 compute-0 systemd-machined[206767]: Machine qemu-21-instance-00000015 terminated.
Dec 13 04:22:20 compute-0 podman[271541]: 2025-12-13 04:22:20.240527008 +0000 UTC m=+0.067900062 container died ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8-userdata-shm.mount: Deactivated successfully.
Dec 13 04:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-57072a8edc9069627fcb9b66ed786cc5dc0e26f0b7d7714fdb792578bd97b939-merged.mount: Deactivated successfully.
Dec 13 04:22:20 compute-0 podman[271541]: 2025-12-13 04:22:20.287319647 +0000 UTC m=+0.114692701 container cleanup ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:22:20 compute-0 systemd[1]: libpod-conmon-ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8.scope: Deactivated successfully.
Dec 13 04:22:20 compute-0 podman[271592]: 2025-12-13 04:22:20.341808514 +0000 UTC m=+0.052859314 container create f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_borg, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.351 243708 INFO nova.virt.libvirt.driver [-] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Instance destroyed successfully.
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.351 243708 DEBUG nova.objects.instance [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'resources' on Instance uuid 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:20 compute-0 podman[271600]: 2025-12-13 04:22:20.362448135 +0000 UTC m=+0.053118542 container remove ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.370 243708 DEBUG nova.virt.libvirt.vif [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:21:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1946557945',display_name='tempest-TestEncryptedCinderVolumes-server-1946557945',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1946557945',id=21,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDO7SwNyWikMosTS35n5vqyRQITWepB5C2NwwuFSUchyBEe9nlIjiUO8yORLAN0grWozQJ2L9NBxakLxbVlRFLObQy0bBXmx0nBvUiDPIPhHiffZWEm7lZhQW+gG+qScFw==',key_name='tempest-keypair-2106823029',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:21:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-e090j3iu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:21:31Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=76079c07-6caa-4f82-8285-1ce2d2f6c0a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.371 243708 DEBUG nova.network.os_vif_util [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "address": "fa:16:3e:d1:11:05", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.210", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7ab6a504-51", "ovs_interfaceid": "7ab6a504-5168-444a-8e2d-d3cfb84bbe35", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.370 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[72e6599b-c462-4a4d-ab56-10d9fe9a9536]: (4, ('Sat Dec 13 04:22:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 (ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8)\nea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8\nSat Dec 13 04:22:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 (ea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8)\nea927e287393a126792c7a548a93076f703bede671ab92b6535b17505ef154f8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.371 243708 DEBUG nova.network.os_vif_util [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d1:11:05,bridge_name='br-int',has_traffic_filtering=True,id=7ab6a504-5168-444a-8e2d-d3cfb84bbe35,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ab6a504-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.372 243708 DEBUG os_vif [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:11:05,bridge_name='br-int',has_traffic_filtering=True,id=7ab6a504-5168-444a-8e2d-d3cfb84bbe35,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ab6a504-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.373 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[08bbfd16-8696-46a2-97aa-25e89f16c05d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.374 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f93b436-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.374 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.375 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ab6a504-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:20 compute-0 kernel: tap0f93b436-b0: left promiscuous mode
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.380 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.400 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.402 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4a046261-83b2-467f-acc6-02ce71062cbb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.405 243708 INFO os_vif [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d1:11:05,bridge_name='br-int',has_traffic_filtering=True,id=7ab6a504-5168-444a-8e2d-d3cfb84bbe35,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7ab6a504-51')
Dec 13 04:22:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3369039064' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:20 compute-0 podman[271592]: 2025-12-13 04:22:20.319927601 +0000 UTC m=+0.030978431 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:20 compute-0 systemd[1]: Started libpod-conmon-f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9.scope.
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.417 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e0240ca0-a4c2-45de-8b00-f3f2029269b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.419 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f0353e48-8d6a-4e3c-832b-ecfe6b01bd18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.436 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[00ea1af1-f610-4a58-85ef-f5437b901249]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441416, 'reachable_time': 25814, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271641, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.443 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.443 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[f0125642-3377-496b-a1a9-4723bdb7b0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.444 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 7ab6a504-5168-444a-8e2d-d3cfb84bbe35 in datapath 87c0a2c3-5f67-431b-9b32-a688ddc2bc06 unbound from our chassis
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.446 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87c0a2c3-5f67-431b-9b32-a688ddc2bc06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.447 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[501189a7-6ca9-4f26-9e59-768f7ee3ad11]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.448 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 namespace which is not needed anymore
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.448 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.691s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:20 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb8d48a1d4834ddfa224675a3a7fb98656693a347a9ccba2c478d756932a875/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb8d48a1d4834ddfa224675a3a7fb98656693a347a9ccba2c478d756932a875/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb8d48a1d4834ddfa224675a3a7fb98656693a347a9ccba2c478d756932a875/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb8d48a1d4834ddfa224675a3a7fb98656693a347a9ccba2c478d756932a875/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb8d48a1d4834ddfa224675a3a7fb98656693a347a9ccba2c478d756932a875/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:20 compute-0 podman[271592]: 2025-12-13 04:22:20.490367582 +0000 UTC m=+0.201418382 container init f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:20 compute-0 podman[271592]: 2025-12-13 04:22:20.4991505 +0000 UTC m=+0.210201300 container start f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_borg, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:22:20 compute-0 podman[271592]: 2025-12-13 04:22:20.50318175 +0000 UTC m=+0.214232550 container attach f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.537 243708 INFO nova.virt.libvirt.driver [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Deleting instance files /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f_del
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.537 243708 INFO nova.virt.libvirt.driver [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Deletion of /var/lib/nova/instances/82d113ec-d32a-4dd6-b8f4-bab622ea377f_del complete
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.546 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.546 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.549 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.549 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.552 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.553 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.553 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000016 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[270642]: [NOTICE]   (270646) : haproxy version is 2.8.14-c23fe91
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[270642]: [NOTICE]   (270646) : path to executable is /usr/sbin/haproxy
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[270642]: [WARNING]  (270646) : Exiting Master process...
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[270642]: [ALERT]    (270646) : Current worker (270648) exited with code 143 (Terminated)
Dec 13 04:22:20 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[270642]: [WARNING]  (270646) : All workers exited. Exiting... (0)
Dec 13 04:22:20 compute-0 systemd[1]: libpod-10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953.scope: Deactivated successfully.
Dec 13 04:22:20 compute-0 podman[271674]: 2025-12-13 04:22:20.60974642 +0000 UTC m=+0.059179046 container died 10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.611 243708 INFO nova.compute.manager [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Took 1.34 seconds to destroy the instance on the hypervisor.
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.612 243708 DEBUG oslo.service.loopingcall [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.613 243708 DEBUG nova.compute.manager [-] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.613 243708 DEBUG nova.network.neutron [-] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.640 243708 DEBUG nova.compute.manager [req-28b75e9a-9c80-4052-a8ad-6238e50265b3 req-caf20677-24e5-498f-9454-fdded30a49c7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-vif-unplugged-39966274-17ef-4b21-91cd-f57096630a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.641 243708 DEBUG oslo_concurrency.lockutils [req-28b75e9a-9c80-4052-a8ad-6238e50265b3 req-caf20677-24e5-498f-9454-fdded30a49c7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.642 243708 DEBUG oslo_concurrency.lockutils [req-28b75e9a-9c80-4052-a8ad-6238e50265b3 req-caf20677-24e5-498f-9454-fdded30a49c7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.642 243708 DEBUG oslo_concurrency.lockutils [req-28b75e9a-9c80-4052-a8ad-6238e50265b3 req-caf20677-24e5-498f-9454-fdded30a49c7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.642 243708 DEBUG nova.compute.manager [req-28b75e9a-9c80-4052-a8ad-6238e50265b3 req-caf20677-24e5-498f-9454-fdded30a49c7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] No waiting events found dispatching network-vif-unplugged-39966274-17ef-4b21-91cd-f57096630a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.643 243708 DEBUG nova.compute.manager [req-28b75e9a-9c80-4052-a8ad-6238e50265b3 req-caf20677-24e5-498f-9454-fdded30a49c7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-vif-unplugged-39966274-17ef-4b21-91cd-f57096630a08 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:22:20 compute-0 podman[271674]: 2025-12-13 04:22:20.653713952 +0000 UTC m=+0.103146568 container cleanup 10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:22:20 compute-0 systemd[1]: libpod-conmon-10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953.scope: Deactivated successfully.
Dec 13 04:22:20 compute-0 ceph-mon[75071]: pgmap v1538: 305 pgs: 305 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 141 KiB/s rd, 3.0 MiB/s wr, 92 op/s
Dec 13 04:22:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3369039064' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a8e96d0526a3ec425b103806d7b827251defcf05a73f72676b5abaed4440470-merged.mount: Deactivated successfully.
Dec 13 04:22:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953-userdata-shm.mount: Deactivated successfully.
Dec 13 04:22:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d0f93b436\x2db78f\x2d4a08\x2d8363\x2d5ff70f1f85b9.mount: Deactivated successfully.
Dec 13 04:22:20 compute-0 podman[271703]: 2025-12-13 04:22:20.737446462 +0000 UTC m=+0.059504775 container remove 10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.743 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f31e8b71-c1ae-4510-9e00-e525ea9a6440]: (4, ('Sat Dec 13 04:22:20 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 (10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953)\n10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953\nSat Dec 13 04:22:20 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 (10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953)\n10b26c6341b479d6e40a302876467eb0f5afd6dff30bb6f75879f9fead336953\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.744 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[de603532-d97d-4668-b4f3-83792ce23687]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.744 243708 INFO nova.virt.libvirt.driver [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Deleting instance files /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8_del
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.745 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c0a2c3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.745 243708 INFO nova.virt.libvirt.driver [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Deletion of /var/lib/nova/instances/76079c07-6caa-4f82-8285-1ce2d2f6c0a8_del complete
Dec 13 04:22:20 compute-0 kernel: tap87c0a2c3-50: left promiscuous mode
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.752 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.755 243708 DEBUG nova.compute.manager [req-b21f7e0c-0e2b-4b49-a733-b794d2ea7d1a req-a36a8bd9-2523-47d3-b882-8a50918f2ff1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-vif-unplugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.755 243708 DEBUG oslo_concurrency.lockutils [req-b21f7e0c-0e2b-4b49-a733-b794d2ea7d1a req-a36a8bd9-2523-47d3-b882-8a50918f2ff1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.755 243708 DEBUG oslo_concurrency.lockutils [req-b21f7e0c-0e2b-4b49-a733-b794d2ea7d1a req-a36a8bd9-2523-47d3-b882-8a50918f2ff1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.756 243708 DEBUG oslo_concurrency.lockutils [req-b21f7e0c-0e2b-4b49-a733-b794d2ea7d1a req-a36a8bd9-2523-47d3-b882-8a50918f2ff1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.756 243708 DEBUG nova.compute.manager [req-b21f7e0c-0e2b-4b49-a733-b794d2ea7d1a req-a36a8bd9-2523-47d3-b882-8a50918f2ff1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] No waiting events found dispatching network-vif-unplugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.756 243708 DEBUG nova.compute.manager [req-b21f7e0c-0e2b-4b49-a733-b794d2ea7d1a req-a36a8bd9-2523-47d3-b882-8a50918f2ff1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-vif-unplugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.761 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.764 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[30a02fe6-5c88-4018-bd14-2491d27d23fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.779 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[18ffa312-772e-4804-a932-aedf60b0dab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.780 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f685e0e5-b5ba-44d7-9931-4f06cc9d4899]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.795 243708 INFO nova.compute.manager [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Took 0.68 seconds to destroy the instance on the hypervisor.
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.795 243708 DEBUG oslo.service.loopingcall [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.795 243708 DEBUG nova.compute.manager [-] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.795 243708 DEBUG nova.network.neutron [-] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.799 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b9d45116-4bfc-4708-be0d-915a129e35b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 442921, 'reachable_time': 30022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 271719, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 systemd[1]: run-netns-ovnmeta\x2d87c0a2c3\x2d5f67\x2d431b\x2d9b32\x2da688ddc2bc06.mount: Deactivated successfully.
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.804 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:22:20 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:20.804 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[70165942-6979-4771-845d-fd878323f4af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.860 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.861 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4118MB free_disk=59.87581685744226GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.861 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:20 compute-0 nova_compute[243704]: 2025-12-13 04:22:20.862 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.028 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.028 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.029 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance a1553556-dd0c-4271-b7de-2c5739155591 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.029 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.029 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:22:21 compute-0 loving_borg[271640]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:22:21 compute-0 loving_borg[271640]: --> All data devices are unavailable
Dec 13 04:22:21 compute-0 systemd[1]: libpod-f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9.scope: Deactivated successfully.
Dec 13 04:22:21 compute-0 podman[271592]: 2025-12-13 04:22:21.083856415 +0000 UTC m=+0.794907215 container died f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-0fb8d48a1d4834ddfa224675a3a7fb98656693a347a9ccba2c478d756932a875-merged.mount: Deactivated successfully.
Dec 13 04:22:21 compute-0 podman[271592]: 2025-12-13 04:22:21.132158424 +0000 UTC m=+0.843209234 container remove f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_borg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:21 compute-0 systemd[1]: libpod-conmon-f65af34037cfbd59d0dae61adae60435b723bc6e64b9413cc421d418ad2d70e9.scope: Deactivated successfully.
Dec 13 04:22:21 compute-0 sudo[271419]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.178 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:21 compute-0 sudo[271745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:22:21 compute-0 sudo[271745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:21 compute-0 sudo[271745]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:21 compute-0 sudo[271771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:22:21 compute-0 sudo[271771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.494 243708 DEBUG nova.network.neutron [-] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.508 243708 INFO nova.compute.manager [-] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Took 0.89 seconds to deallocate network for instance.
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.615 243708 WARNING nova.volume.cinder [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Attachment 5271eff2-0fc5-425b-b5c2-dd03cdf67d89 does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = 5271eff2-0fc5-425b-b5c2-dd03cdf67d89. (HTTP 404) (Request-ID: req-af2e8298-7602-468f-b9e1-b18fb11755f2)
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.616 243708 INFO nova.compute.manager [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Took 0.11 seconds to detach 1 volumes for instance.
Dec 13 04:22:21 compute-0 podman[271825]: 2025-12-13 04:22:21.616752964 +0000 UTC m=+0.039197964 container create d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:22:21 compute-0 systemd[1]: Started libpod-conmon-d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa.scope.
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.665 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:21 compute-0 podman[271825]: 2025-12-13 04:22:21.688465539 +0000 UTC m=+0.110910549 container init d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:22:21 compute-0 podman[271825]: 2025-12-13 04:22:21.599962898 +0000 UTC m=+0.022407928 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:21 compute-0 podman[271825]: 2025-12-13 04:22:21.696662221 +0000 UTC m=+0.119107221 container start d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goodall, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:22:21 compute-0 podman[271825]: 2025-12-13 04:22:21.699785035 +0000 UTC m=+0.122230055 container attach d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:22:21 compute-0 angry_goodall[271840]: 167 167
Dec 13 04:22:21 compute-0 systemd[1]: libpod-d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa.scope: Deactivated successfully.
Dec 13 04:22:21 compute-0 podman[271825]: 2025-12-13 04:22:21.703418304 +0000 UTC m=+0.125863304 container died d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:22:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-779435b885cb5b2831f8f5ac7dc43b00287250e977beef8ab7b47b70a2512379-merged.mount: Deactivated successfully.
Dec 13 04:22:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3469955254' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:21 compute-0 podman[271825]: 2025-12-13 04:22:21.747972822 +0000 UTC m=+0.170417812 container remove d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_goodall, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.757 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:21 compute-0 systemd[1]: libpod-conmon-d616f6b2b7632a51f1c83878d38eb0d2b0b3b94e3d8c51b9f08a025624a536aa.scope: Deactivated successfully.
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.765 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.777 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:22:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3469955254' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 134 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.797 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.797 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.798 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.899 243708 DEBUG oslo_concurrency.processutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:21 compute-0 podman[271867]: 2025-12-13 04:22:21.922747491 +0000 UTC m=+0.040106669 container create 568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ganguly, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.937 243708 DEBUG nova.network.neutron [-] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.945 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.945 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.946 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:21 compute-0 nova_compute[243704]: 2025-12-13 04:22:21.955 243708 INFO nova.compute.manager [-] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Took 1.16 seconds to deallocate network for instance.
Dec 13 04:22:21 compute-0 systemd[1]: Started libpod-conmon-568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f.scope.
Dec 13 04:22:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d785f171a026cdff17ea1f029ca820fefd767dd8f2298d6f91dfae741e1b599a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d785f171a026cdff17ea1f029ca820fefd767dd8f2298d6f91dfae741e1b599a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d785f171a026cdff17ea1f029ca820fefd767dd8f2298d6f91dfae741e1b599a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d785f171a026cdff17ea1f029ca820fefd767dd8f2298d6f91dfae741e1b599a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:22 compute-0 podman[271867]: 2025-12-13 04:22:21.905995277 +0000 UTC m=+0.023354475 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.002 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:22 compute-0 podman[271867]: 2025-12-13 04:22:22.00420422 +0000 UTC m=+0.121563408 container init 568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:22:22 compute-0 podman[271867]: 2025-12-13 04:22:22.012867955 +0000 UTC m=+0.130227123 container start 568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ganguly, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:22 compute-0 podman[271867]: 2025-12-13 04:22:22.017372876 +0000 UTC m=+0.134732044 container attach 568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ganguly, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:22:22 compute-0 podman[271886]: 2025-12-13 04:22:22.042790345 +0000 UTC m=+0.059917405 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:22:22 compute-0 ovn_controller[145204]: 2025-12-13T04:22:22Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:3c:a1:58 10.100.0.12
Dec 13 04:22:22 compute-0 ovn_controller[145204]: 2025-12-13T04:22:22Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:3c:a1:58 10.100.0.12
Dec 13 04:22:22 compute-0 musing_ganguly[271884]: {
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:     "0": [
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:         {
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "devices": [
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "/dev/loop3"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             ],
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_name": "ceph_lv0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_size": "21470642176",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "name": "ceph_lv0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "tags": {
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cluster_name": "ceph",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.crush_device_class": "",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.encrypted": "0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.objectstore": "bluestore",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osd_id": "0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.type": "block",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.vdo": "0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.with_tpm": "0"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             },
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "type": "block",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "vg_name": "ceph_vg0"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:         }
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:     ],
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:     "1": [
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:         {
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "devices": [
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "/dev/loop4"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             ],
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_name": "ceph_lv1",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_size": "21470642176",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "name": "ceph_lv1",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "tags": {
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cluster_name": "ceph",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.crush_device_class": "",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.encrypted": "0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.objectstore": "bluestore",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osd_id": "1",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.type": "block",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.vdo": "0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.with_tpm": "0"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             },
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "type": "block",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "vg_name": "ceph_vg1"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:         }
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:     ],
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:     "2": [
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:         {
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "devices": [
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "/dev/loop5"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             ],
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_name": "ceph_lv2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_size": "21470642176",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "name": "ceph_lv2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "tags": {
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.cluster_name": "ceph",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.crush_device_class": "",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.encrypted": "0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.objectstore": "bluestore",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osd_id": "2",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.type": "block",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.vdo": "0",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:                 "ceph.with_tpm": "0"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             },
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "type": "block",
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:             "vg_name": "ceph_vg2"
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:         }
Dec 13 04:22:22 compute-0 musing_ganguly[271884]:     ]
Dec 13 04:22:22 compute-0 musing_ganguly[271884]: }
Dec 13 04:22:22 compute-0 systemd[1]: libpod-568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f.scope: Deactivated successfully.
Dec 13 04:22:22 compute-0 podman[271867]: 2025-12-13 04:22:22.319424346 +0000 UTC m=+0.436783514 container died 568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ganguly, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:22:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d785f171a026cdff17ea1f029ca820fefd767dd8f2298d6f91dfae741e1b599a-merged.mount: Deactivated successfully.
Dec 13 04:22:22 compute-0 podman[271867]: 2025-12-13 04:22:22.380352289 +0000 UTC m=+0.497711457 container remove 568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ganguly, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:22:22 compute-0 systemd[1]: libpod-conmon-568c2a405e2b79866f9e70cc688de541449473f131a0c931b06f7ec1e45d1d5f.scope: Deactivated successfully.
Dec 13 04:22:22 compute-0 sudo[271771]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626555256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.479 243708 DEBUG oslo_concurrency.processutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.487 243708 DEBUG nova.compute.provider_tree [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.500 243708 DEBUG nova.scheduler.client.report [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:22:22 compute-0 sudo[271942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:22:22 compute-0 sudo[271942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:22 compute-0 sudo[271942]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.521 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.524 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.545 243708 INFO nova.scheduler.client.report [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Deleted allocations for instance 82d113ec-d32a-4dd6-b8f4-bab622ea377f
Dec 13 04:22:22 compute-0 sudo[271969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:22:22 compute-0 sudo[271969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.596 243708 DEBUG oslo_concurrency.processutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.628 243708 DEBUG oslo_concurrency.lockutils [None req-e41f2cb3-513b-48e8-b528-db36cf2785a3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.356s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.728 243708 DEBUG nova.compute.manager [req-6ef455cd-24b6-4db6-9f42-591ae89655dd req-1f5a8cb3-6b99-47b8-8b69-b0a13797218b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.729 243708 DEBUG oslo_concurrency.lockutils [req-6ef455cd-24b6-4db6-9f42-591ae89655dd req-1f5a8cb3-6b99-47b8-8b69-b0a13797218b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.729 243708 DEBUG oslo_concurrency.lockutils [req-6ef455cd-24b6-4db6-9f42-591ae89655dd req-1f5a8cb3-6b99-47b8-8b69-b0a13797218b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.729 243708 DEBUG oslo_concurrency.lockutils [req-6ef455cd-24b6-4db6-9f42-591ae89655dd req-1f5a8cb3-6b99-47b8-8b69-b0a13797218b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "82d113ec-d32a-4dd6-b8f4-bab622ea377f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.729 243708 DEBUG nova.compute.manager [req-6ef455cd-24b6-4db6-9f42-591ae89655dd req-1f5a8cb3-6b99-47b8-8b69-b0a13797218b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] No waiting events found dispatching network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.729 243708 WARNING nova.compute.manager [req-6ef455cd-24b6-4db6-9f42-591ae89655dd req-1f5a8cb3-6b99-47b8-8b69-b0a13797218b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received unexpected event network-vif-plugged-39966274-17ef-4b21-91cd-f57096630a08 for instance with vm_state deleted and task_state None.
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.730 243708 DEBUG nova.compute.manager [req-6ef455cd-24b6-4db6-9f42-591ae89655dd req-1f5a8cb3-6b99-47b8-8b69-b0a13797218b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Received event network-vif-deleted-39966274-17ef-4b21-91cd-f57096630a08 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:22 compute-0 ceph-mon[75071]: pgmap v1539: 305 pgs: 305 active+clean; 2.5 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 134 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Dec 13 04:22:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2626555256' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.831 243708 DEBUG nova.compute.manager [req-95cddf78-56d2-4f01-8445-2ffaf3208f54 req-4dc76a61-4210-4ed5-895f-54d3cca2bf85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.831 243708 DEBUG oslo_concurrency.lockutils [req-95cddf78-56d2-4f01-8445-2ffaf3208f54 req-4dc76a61-4210-4ed5-895f-54d3cca2bf85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.831 243708 DEBUG oslo_concurrency.lockutils [req-95cddf78-56d2-4f01-8445-2ffaf3208f54 req-4dc76a61-4210-4ed5-895f-54d3cca2bf85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.832 243708 DEBUG oslo_concurrency.lockutils [req-95cddf78-56d2-4f01-8445-2ffaf3208f54 req-4dc76a61-4210-4ed5-895f-54d3cca2bf85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.832 243708 DEBUG nova.compute.manager [req-95cddf78-56d2-4f01-8445-2ffaf3208f54 req-4dc76a61-4210-4ed5-895f-54d3cca2bf85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] No waiting events found dispatching network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.832 243708 WARNING nova.compute.manager [req-95cddf78-56d2-4f01-8445-2ffaf3208f54 req-4dc76a61-4210-4ed5-895f-54d3cca2bf85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received unexpected event network-vif-plugged-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 for instance with vm_state deleted and task_state None.
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.832 243708 DEBUG nova.compute.manager [req-95cddf78-56d2-4f01-8445-2ffaf3208f54 req-4dc76a61-4210-4ed5-895f-54d3cca2bf85 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Received event network-vif-deleted-7ab6a504-5168-444a-8e2d-d3cfb84bbe35 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:22 compute-0 podman[272025]: 2025-12-13 04:22:22.856054597 +0000 UTC m=+0.039391339 container create 8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:22:22 compute-0 nova_compute[243704]: 2025-12-13 04:22:22.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:22 compute-0 systemd[1]: Started libpod-conmon-8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb.scope.
Dec 13 04:22:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:22 compute-0 podman[272025]: 2025-12-13 04:22:22.839832947 +0000 UTC m=+0.023169709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:22 compute-0 podman[272025]: 2025-12-13 04:22:22.940757294 +0000 UTC m=+0.124094056 container init 8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:22:22 compute-0 podman[272025]: 2025-12-13 04:22:22.948478943 +0000 UTC m=+0.131815685 container start 8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_northcutt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:22:22 compute-0 xenodochial_northcutt[272042]: 167 167
Dec 13 04:22:22 compute-0 systemd[1]: libpod-8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb.scope: Deactivated successfully.
Dec 13 04:22:22 compute-0 podman[272025]: 2025-12-13 04:22:22.952733508 +0000 UTC m=+0.136070300 container attach 8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_northcutt, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:22:22 compute-0 podman[272025]: 2025-12-13 04:22:22.954746763 +0000 UTC m=+0.138083525 container died 8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_northcutt, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:22:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-1644cfd4ae735289feba237d1f951d9f7253e27395a1cf9f6d2c205725b23c00-merged.mount: Deactivated successfully.
Dec 13 04:22:23 compute-0 podman[272025]: 2025-12-13 04:22:23.004382799 +0000 UTC m=+0.187719541 container remove 8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_northcutt, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:22:23 compute-0 systemd[1]: libpod-conmon-8fc55022b2ba8e34165badd8b9aa39210c08a00cb41cdf6e05faacdc4bc670bb.scope: Deactivated successfully.
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.053 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:23 compute-0 podman[272065]: 2025-12-13 04:22:23.220350205 +0000 UTC m=+0.043874851 container create 4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:22:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739545516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e389 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e389 do_prune osdmap full prune enabled
Dec 13 04:22:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e390 e390: 3 total, 3 up, 3 in
Dec 13 04:22:23 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e390: 3 total, 3 up, 3 in
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.252 243708 DEBUG oslo_concurrency.processutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.656s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.258 243708 DEBUG nova.compute.provider_tree [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:22:23 compute-0 systemd[1]: Started libpod-conmon-4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d.scope.
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.271 243708 DEBUG nova.scheduler.client.report [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:22:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5644a0d6e710328accb4d8c905a849f9019be36cdd75d9d954e91cf3cc71e94e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5644a0d6e710328accb4d8c905a849f9019be36cdd75d9d954e91cf3cc71e94e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5644a0d6e710328accb4d8c905a849f9019be36cdd75d9d954e91cf3cc71e94e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5644a0d6e710328accb4d8c905a849f9019be36cdd75d9d954e91cf3cc71e94e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.291 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.768s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:23 compute-0 podman[272065]: 2025-12-13 04:22:23.20357117 +0000 UTC m=+0.027095836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:22:23 compute-0 podman[272065]: 2025-12-13 04:22:23.303653823 +0000 UTC m=+0.127178499 container init 4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_visvesvaraya, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:22:23 compute-0 podman[272065]: 2025-12-13 04:22:23.311402433 +0000 UTC m=+0.134927079 container start 4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_visvesvaraya, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.313 243708 INFO nova.scheduler.client.report [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Deleted allocations for instance 76079c07-6caa-4f82-8285-1ce2d2f6c0a8
Dec 13 04:22:23 compute-0 podman[272065]: 2025-12-13 04:22:23.314626651 +0000 UTC m=+0.138151327 container attach 4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.369 243708 DEBUG oslo_concurrency.lockutils [None req-22737700-e9ae-41c0-80d3-0b09e678064c 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "76079c07-6caa-4f82-8285-1ce2d2f6c0a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.262s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:23 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:23.411 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:22:23 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:23.412 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.412 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 260 KiB/s rd, 3.1 MiB/s wr, 113 op/s
Dec 13 04:22:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/739545516' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:23 compute-0 ceph-mon[75071]: osdmap e390: 3 total, 3 up, 3 in
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 04:22:23 compute-0 nova_compute[243704]: 2025-12-13 04:22:23.895 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 04:22:23 compute-0 lvm[272161]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:22:23 compute-0 lvm[272161]: VG ceph_vg0 finished
Dec 13 04:22:23 compute-0 lvm[272162]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:22:23 compute-0 lvm[272162]: VG ceph_vg1 finished
Dec 13 04:22:24 compute-0 lvm[272164]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:22:24 compute-0 lvm[272164]: VG ceph_vg2 finished
Dec 13 04:22:24 compute-0 xenodochial_visvesvaraya[272083]: {}
Dec 13 04:22:24 compute-0 systemd[1]: libpod-4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d.scope: Deactivated successfully.
Dec 13 04:22:24 compute-0 systemd[1]: libpod-4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d.scope: Consumed 1.357s CPU time.
Dec 13 04:22:24 compute-0 podman[272065]: 2025-12-13 04:22:24.146184438 +0000 UTC m=+0.969709114 container died 4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_visvesvaraya, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:22:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-5644a0d6e710328accb4d8c905a849f9019be36cdd75d9d954e91cf3cc71e94e-merged.mount: Deactivated successfully.
Dec 13 04:22:24 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:24.414 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:25 compute-0 nova_compute[243704]: 2025-12-13 04:22:25.381 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 719 KiB/s rd, 3.2 MiB/s wr, 235 op/s
Dec 13 04:22:25 compute-0 nova_compute[243704]: 2025-12-13 04:22:25.895 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:25 compute-0 nova_compute[243704]: 2025-12-13 04:22:25.895 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:22:26 compute-0 ceph-mon[75071]: pgmap v1541: 305 pgs: 305 active+clean; 2.4 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 260 KiB/s rd, 3.1 MiB/s wr, 113 op/s
Dec 13 04:22:26 compute-0 podman[272065]: 2025-12-13 04:22:26.365106062 +0000 UTC m=+3.188630708 container remove 4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_visvesvaraya, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:22:26 compute-0 sudo[271969]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:22:26 compute-0 systemd[1]: libpod-conmon-4c4a5b3e7b6dd50276eba51b6ab5cfcb8497f3345b902d6de886b5061b0b977d.scope: Deactivated successfully.
Dec 13 04:22:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:22:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:22:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:22:26 compute-0 sudo[272179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:22:26 compute-0 sudo[272179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:22:26 compute-0 sudo[272179]: pam_unix(sudo:session): session closed for user root
Dec 13 04:22:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:22:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2593579572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:22:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2593579572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:26 compute-0 nova_compute[243704]: 2025-12-13 04:22:26.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:26 compute-0 nova_compute[243704]: 2025-12-13 04:22:26.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 04:22:27 compute-0 ceph-mon[75071]: pgmap v1542: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 719 KiB/s rd, 3.2 MiB/s wr, 235 op/s
Dec 13 04:22:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:22:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:22:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2593579572' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2593579572' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 620 KiB/s rd, 2.8 MiB/s wr, 203 op/s
Dec 13 04:22:28 compute-0 nova_compute[243704]: 2025-12-13 04:22:28.055 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e390 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e390 do_prune osdmap full prune enabled
Dec 13 04:22:28 compute-0 ceph-mon[75071]: pgmap v1543: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 620 KiB/s rd, 2.8 MiB/s wr, 203 op/s
Dec 13 04:22:28 compute-0 nova_compute[243704]: 2025-12-13 04:22:28.892 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e391 e391: 3 total, 3 up, 3 in
Dec 13 04:22:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e391: 3 total, 3 up, 3 in
Dec 13 04:22:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 635 KiB/s rd, 269 KiB/s wr, 212 op/s
Dec 13 04:22:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:22:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/192593530' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:22:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/192593530' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:29 compute-0 ceph-mon[75071]: osdmap e391: 3 total, 3 up, 3 in
Dec 13 04:22:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/192593530' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/192593530' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:30 compute-0 nova_compute[243704]: 2025-12-13 04:22:30.384 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:22:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3833394207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:22:30 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3833394207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e391 do_prune osdmap full prune enabled
Dec 13 04:22:30 compute-0 ceph-mon[75071]: pgmap v1545: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 635 KiB/s rd, 269 KiB/s wr, 212 op/s
Dec 13 04:22:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3833394207' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3833394207' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e392 e392: 3 total, 3 up, 3 in
Dec 13 04:22:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e392: 3 total, 3 up, 3 in
Dec 13 04:22:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 480 KiB/s rd, 182 KiB/s wr, 151 op/s
Dec 13 04:22:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e392 do_prune osdmap full prune enabled
Dec 13 04:22:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e393 e393: 3 total, 3 up, 3 in
Dec 13 04:22:31 compute-0 ceph-mon[75071]: osdmap e392: 3 total, 3 up, 3 in
Dec 13 04:22:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e393: 3 total, 3 up, 3 in
Dec 13 04:22:32 compute-0 ceph-mon[75071]: pgmap v1547: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 480 KiB/s rd, 182 KiB/s wr, 151 op/s
Dec 13 04:22:32 compute-0 ceph-mon[75071]: osdmap e393: 3 total, 3 up, 3 in
Dec 13 04:22:33 compute-0 nova_compute[243704]: 2025-12-13 04:22:33.057 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 57 KiB/s rd, 29 KiB/s wr, 77 op/s
Dec 13 04:22:34 compute-0 nova_compute[243704]: 2025-12-13 04:22:34.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:35 compute-0 ceph-mon[75071]: pgmap v1549: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 57 KiB/s rd, 29 KiB/s wr, 77 op/s
Dec 13 04:22:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:35.098 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:35.099 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:35.099 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:35 compute-0 nova_compute[243704]: 2025-12-13 04:22:35.116 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599740.1150713, 82d113ec-d32a-4dd6-b8f4-bab622ea377f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:35 compute-0 nova_compute[243704]: 2025-12-13 04:22:35.116 243708 INFO nova.compute.manager [-] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] VM Stopped (Lifecycle Event)
Dec 13 04:22:35 compute-0 nova_compute[243704]: 2025-12-13 04:22:35.136 243708 DEBUG nova.compute.manager [None req-3ab38489-d688-44b6-878b-12da8178f465 - - - - - -] [instance: 82d113ec-d32a-4dd6-b8f4-bab622ea377f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:35 compute-0 nova_compute[243704]: 2025-12-13 04:22:35.345 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599740.3433733, 76079c07-6caa-4f82-8285-1ce2d2f6c0a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:35 compute-0 nova_compute[243704]: 2025-12-13 04:22:35.346 243708 INFO nova.compute.manager [-] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] VM Stopped (Lifecycle Event)
Dec 13 04:22:35 compute-0 nova_compute[243704]: 2025-12-13 04:22:35.364 243708 DEBUG nova.compute.manager [None req-27ebe3db-c8ce-45c0-a784-ab072be7cf54 - - - - - -] [instance: 76079c07-6caa-4f82-8285-1ce2d2f6c0a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:35 compute-0 nova_compute[243704]: 2025-12-13 04:22:35.388 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 61 KiB/s rd, 24 KiB/s wr, 82 op/s
Dec 13 04:22:36 compute-0 nova_compute[243704]: 2025-12-13 04:22:36.865 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:36 compute-0 nova_compute[243704]: 2025-12-13 04:22:36.866 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:36 compute-0 nova_compute[243704]: 2025-12-13 04:22:36.880 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:22:36 compute-0 nova_compute[243704]: 2025-12-13 04:22:36.966 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:36 compute-0 nova_compute[243704]: 2025-12-13 04:22:36.967 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:36 compute-0 nova_compute[243704]: 2025-12-13 04:22:36.976 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:22:36 compute-0 nova_compute[243704]: 2025-12-13 04:22:36.977 243708 INFO nova.compute.claims [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:22:37 compute-0 ceph-mon[75071]: pgmap v1550: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 61 KiB/s rd, 24 KiB/s wr, 82 op/s
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.081 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:37 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/968470931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.646 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.651 243708 DEBUG nova.compute.provider_tree [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.665 243708 DEBUG nova.scheduler.client.report [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.690 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.691 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.734 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.735 243708 DEBUG nova.network.neutron [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.753 243708 INFO nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.773 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:22:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 47 op/s
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.863 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.865 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.866 243708 INFO nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Creating image(s)
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.887 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.913 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.944 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:22:37 compute-0 nova_compute[243704]: 2025-12-13 04:22:37.949 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/968470931' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.020 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.022 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.023 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.023 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.047 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.051 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 6f73a99d-0666-471f-b9c9-482c5570537a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.074 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:38 compute-0 nova_compute[243704]: 2025-12-13 04:22:38.194 243708 DEBUG nova.policy [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '95b4d334bdca4149b6fe3499375d46e6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '75b261e8b1c44ab8b079f57244a812c7', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:22:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e393 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e393 do_prune osdmap full prune enabled
Dec 13 04:22:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e394 e394: 3 total, 3 up, 3 in
Dec 13 04:22:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e394: 3 total, 3 up, 3 in
Dec 13 04:22:39 compute-0 podman[272320]: 2025-12-13 04:22:39.000979495 +0000 UTC m=+0.134615641 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:22:39 compute-0 ceph-mon[75071]: pgmap v1551: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 35 KiB/s rd, 2.2 KiB/s wr, 47 op/s
Dec 13 04:22:39 compute-0 ceph-mon[75071]: osdmap e394: 3 total, 3 up, 3 in
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.135 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 6f73a99d-0666-471f-b9c9-482c5570537a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.200 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] resizing rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.292 243708 DEBUG nova.objects.instance [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 6f73a99d-0666-471f-b9c9-482c5570537a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.307 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.308 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Ensure instance console log exists: /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.309 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.309 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.309 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:39 compute-0 nova_compute[243704]: 2025-12-13 04:22:39.568 243708 DEBUG nova.network.neutron [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Successfully created port: 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:22:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 68 op/s
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.417 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:22:40
Dec 13 04:22:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:22:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:22:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.meta', 'backups', 'images', 'default.rgw.control']
Dec 13 04:22:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.836 243708 DEBUG nova.network.neutron [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Successfully updated port: 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.853 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.854 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquired lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.854 243708 DEBUG nova.network.neutron [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.937 243708 DEBUG nova.compute.manager [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-changed-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.938 243708 DEBUG nova.compute.manager [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Refreshing instance network info cache due to event network-changed-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:22:40 compute-0 nova_compute[243704]: 2025-12-13 04:22:40.938 243708 DEBUG oslo_concurrency.lockutils [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:22:41 compute-0 ceph-mon[75071]: pgmap v1553: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.4 MiB/s wr, 68 op/s
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.270 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.271 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.271 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.271 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.272 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.273 243708 INFO nova.compute.manager [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Terminating instance
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.274 243708 DEBUG nova.compute.manager [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.286 243708 DEBUG nova.network.neutron [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:22:41 compute-0 kernel: tap81069290-53 (unregistering): left promiscuous mode
Dec 13 04:22:41 compute-0 NetworkManager[48899]: <info>  [1765599761.3788] device (tap81069290-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:22:41 compute-0 ovn_controller[145204]: 2025-12-13T04:22:41Z|00209|binding|INFO|Releasing lport 81069290-53e4-4b72-85b9-c14104457590 from this chassis (sb_readonly=0)
Dec 13 04:22:41 compute-0 ovn_controller[145204]: 2025-12-13T04:22:41Z|00210|binding|INFO|Setting lport 81069290-53e4-4b72-85b9-c14104457590 down in Southbound
Dec 13 04:22:41 compute-0 ovn_controller[145204]: 2025-12-13T04:22:41Z|00211|binding|INFO|Removing iface tap81069290-53 ovn-installed in OVS
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.387 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.391 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.396 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:a1:58 10.100.0.12'], port_security=['fa:16:3e:3c:a1:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a1553556-dd0c-4271-b7de-2c5739155591', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '4d091687ce954cb1b60b66f0e250a2a6', 'neutron:revision_number': '4', 'neutron:security_group_ids': '01d1227f-16cd-4990-95d6-fa037ef961a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.248'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5a6f2fb8-2f33-4c9d-9392-7c4537f332df, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=81069290-53e4-4b72-85b9-c14104457590) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.397 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 81069290-53e4-4b72-85b9-c14104457590 in datapath 01e9047f-f5cf-4bd5-a58c-1b5ed80cec97 unbound from our chassis
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.399 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 01e9047f-f5cf-4bd5-a58c-1b5ed80cec97, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.401 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[987fc8e2-3095-4e75-90f6-e0103f3241d1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.401 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97 namespace which is not needed anymore
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.409 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Deactivated successfully.
Dec 13 04:22:41 compute-0 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000016.scope: Consumed 15.940s CPU time.
Dec 13 04:22:41 compute-0 systemd-machined[206767]: Machine qemu-22-instance-00000016 terminated.
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.494 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.504 243708 INFO nova.virt.libvirt.driver [-] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Instance destroyed successfully.
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.505 243708 DEBUG nova.objects.instance [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lazy-loading 'resources' on Instance uuid a1553556-dd0c-4271-b7de-2c5739155591 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.517 243708 DEBUG nova.virt.libvirt.vif [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:21:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-instance-1970772568',display_name='tempest-instance-1970772568',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instance-1970772568',id=22,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCiGEwyTdooaKbAVWN0g8c6leJ40yeXokRq3QuvDXZrKu8VH+DLR9rsuVErwL3KQWIu2edoerqCIXrzmh+jrhKzrYWQVf0rbAXR5C9EAL56ICtpX4jAUqZo1fgPnzL6n5g==',key_name='tempest-keypair-1140258528',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:22:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='4d091687ce954cb1b60b66f0e250a2a6',ramdisk_id='',reservation_id='r-0ext81cq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesBackupsTest-500075976',owner_user_name='tempest-VolumesBackupsTest-500075976-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:22:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='550f7240611f4009aa1ef70200760184',uuid=a1553556-dd0c-4271-b7de-2c5739155591,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.518 243708 DEBUG nova.network.os_vif_util [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Converting VIF {"id": "81069290-53e4-4b72-85b9-c14104457590", "address": "fa:16:3e:3c:a1:58", "network": {"id": "01e9047f-f5cf-4bd5-a58c-1b5ed80cec97", "bridge": "br-int", "label": "tempest-VolumesBackupsTest-1306235672-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "4d091687ce954cb1b60b66f0e250a2a6", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap81069290-53", "ovs_interfaceid": "81069290-53e4-4b72-85b9-c14104457590", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.519 243708 DEBUG nova.network.os_vif_util [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:3c:a1:58,bridge_name='br-int',has_traffic_filtering=True,id=81069290-53e4-4b72-85b9-c14104457590,network=Network(01e9047f-f5cf-4bd5-a58c-1b5ed80cec97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81069290-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.521 243708 DEBUG os_vif [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:a1:58,bridge_name='br-int',has_traffic_filtering=True,id=81069290-53e4-4b72-85b9-c14104457590,network=Network(01e9047f-f5cf-4bd5-a58c-1b5ed80cec97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81069290-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.523 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.524 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap81069290-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.525 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.527 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.530 243708 INFO os_vif [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:3c:a1:58,bridge_name='br-int',has_traffic_filtering=True,id=81069290-53e4-4b72-85b9-c14104457590,network=Network(01e9047f-f5cf-4bd5-a58c-1b5ed80cec97),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap81069290-53')
Dec 13 04:22:41 compute-0 neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97[271224]: [NOTICE]   (271228) : haproxy version is 2.8.14-c23fe91
Dec 13 04:22:41 compute-0 neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97[271224]: [NOTICE]   (271228) : path to executable is /usr/sbin/haproxy
Dec 13 04:22:41 compute-0 neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97[271224]: [WARNING]  (271228) : Exiting Master process...
Dec 13 04:22:41 compute-0 neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97[271224]: [ALERT]    (271228) : Current worker (271230) exited with code 143 (Terminated)
Dec 13 04:22:41 compute-0 neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97[271224]: [WARNING]  (271228) : All workers exited. Exiting... (0)
Dec 13 04:22:41 compute-0 systemd[1]: libpod-dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3.scope: Deactivated successfully.
Dec 13 04:22:41 compute-0 podman[272444]: 2025-12-13 04:22:41.551900351 +0000 UTC m=+0.052048531 container died dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3-userdata-shm.mount: Deactivated successfully.
Dec 13 04:22:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2999b426e88c862b3f4f921219a63a3592f688e4cb25a253d19ecf41a92eeec-merged.mount: Deactivated successfully.
Dec 13 04:22:41 compute-0 podman[272444]: 2025-12-13 04:22:41.600164571 +0000 UTC m=+0.100312741 container cleanup dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:22:41 compute-0 systemd[1]: libpod-conmon-dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3.scope: Deactivated successfully.
Dec 13 04:22:41 compute-0 podman[272500]: 2025-12-13 04:22:41.670058886 +0000 UTC m=+0.044345654 container remove dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.676 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[81b4ff20-f5eb-4a5d-9672-93071831ab9f]: (4, ('Sat Dec 13 04:22:41 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97 (dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3)\ndd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3\nSat Dec 13 04:22:41 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97 (dd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3)\ndd3b147d993b4d30a7d1edd334d5c876ba44279ae8d0cc7161df9e5118fd63d3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.678 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[07153c57-e290-453e-90de-3d6ba11051d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.679 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap01e9047f-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.681 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 kernel: tap01e9047f-f0: left promiscuous mode
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.696 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.698 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c7039542-f30f-4696-a767-ec08eba5c0a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.714 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6b5faefe-b152-4c2f-bf2c-ee3a5061d6d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.715 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[77441823-03f1-44af-a4c6-67e07617afe5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.732 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0c559296-f923-41e3-97fb-4d1d84c04946]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 446004, 'reachable_time': 21746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272516, 'error': None, 'target': 'ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.734 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-01e9047f-f5cf-4bd5-a58c-1b5ed80cec97 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:22:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:41.734 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[1933e61f-1e6e-441f-86f6-9d6b478a76bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:41 compute-0 systemd[1]: run-netns-ovnmeta\x2d01e9047f\x2df5cf\x2d4bd5\x2da58c\x2d1b5ed80cec97.mount: Deactivated successfully.
Dec 13 04:22:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 55 op/s
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.817 243708 INFO nova.virt.libvirt.driver [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Deleting instance files /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591_del
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.817 243708 INFO nova.virt.libvirt.driver [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Deletion of /var/lib/nova/instances/a1553556-dd0c-4271-b7de-2c5739155591_del complete
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.870 243708 INFO nova.compute.manager [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Took 0.60 seconds to destroy the instance on the hypervisor.
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.870 243708 DEBUG oslo.service.loopingcall [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.871 243708 DEBUG nova.compute.manager [-] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:22:41 compute-0 nova_compute[243704]: 2025-12-13 04:22:41.871 243708 DEBUG nova.network.neutron [-] [instance: a1553556-dd0c-4271-b7de-2c5739155591] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:22:42 compute-0 nova_compute[243704]: 2025-12-13 04:22:42.237 243708 DEBUG nova.network.neutron [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Updating instance_info_cache with network_info: [{"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:22:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.018 243708 DEBUG nova.compute.manager [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-vif-unplugged-81069290-53e4-4b72-85b9-c14104457590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.019 243708 DEBUG oslo_concurrency.lockutils [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.019 243708 DEBUG oslo_concurrency.lockutils [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.020 243708 DEBUG oslo_concurrency.lockutils [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.020 243708 DEBUG nova.compute.manager [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] No waiting events found dispatching network-vif-unplugged-81069290-53e4-4b72-85b9-c14104457590 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.020 243708 DEBUG nova.compute.manager [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-vif-unplugged-81069290-53e4-4b72-85b9-c14104457590 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.021 243708 DEBUG nova.compute.manager [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.021 243708 DEBUG oslo_concurrency.lockutils [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "a1553556-dd0c-4271-b7de-2c5739155591-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.021 243708 DEBUG oslo_concurrency.lockutils [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.022 243708 DEBUG oslo_concurrency.lockutils [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.022 243708 DEBUG nova.compute.manager [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] No waiting events found dispatching network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.023 243708 WARNING nova.compute.manager [req-52e3337a-8807-48f4-81a9-86d4398ff82a req-35a04187-48aa-41ce-8168-552cbaf10eba 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received unexpected event network-vif-plugged-81069290-53e4-4b72-85b9-c14104457590 for instance with vm_state active and task_state deleting.
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.063 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:43 compute-0 ceph-mon[75071]: pgmap v1554: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 55 op/s
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.335 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Releasing lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.335 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Instance network_info: |[{"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.336 243708 DEBUG oslo_concurrency.lockutils [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.337 243708 DEBUG nova.network.neutron [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Refreshing network info cache for port 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.340 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Start _get_guest_xml network_info=[{"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.347 243708 WARNING nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.354 243708 DEBUG nova.virt.libvirt.host [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.355 243708 DEBUG nova.virt.libvirt.host [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.362 243708 DEBUG nova.virt.libvirt.host [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.362 243708 DEBUG nova.virt.libvirt.host [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.363 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.363 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.364 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.364 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.364 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.365 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.365 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.365 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.365 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.366 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.366 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.366 243708 DEBUG nova.virt.hardware [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:22:43 compute-0 nova_compute[243704]: 2025-12-13 04:22:43.370 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 13 04:22:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:22:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/565311084' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.029 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.660s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.063 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.067 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/565311084' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:22:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/444447549' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.696 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.629s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.698 243708 DEBUG nova.virt.libvirt.vif [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1210779781',display_name='tempest-VolumesSnapshotTestJSON-instance-1210779781',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1210779781',id=23,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGdWShGngG1T60fxtDCyZF/4g+yzZ0OcQNVbjGT5BNenMaU0rt/YT14rk4/InRXk6hLTctywg1ltUyOI+mrNuVvkteM3YlWW0l7NqxX4eJUcgQ1jtCfb6tS+4wG7D8mWrg==',key_name='tempest-keypair-110734835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75b261e8b1c44ab8b079f57244a812c7',ramdisk_id='',reservation_id='r-qykwdpy6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-524347860',owner_user_name='tempest-VolumesSnapshotTestJSON-524347860-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:22:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95b4d334bdca4149b6fe3499375d46e6',uuid=6f73a99d-0666-471f-b9c9-482c5570537a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.699 243708 DEBUG nova.network.os_vif_util [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converting VIF {"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.700 243708 DEBUG nova.network.os_vif_util [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:a9:96,bridge_name='br-int',has_traffic_filtering=True,id=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d5a15fc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.702 243708 DEBUG nova.objects.instance [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6f73a99d-0666-471f-b9c9-482c5570537a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.714 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <uuid>6f73a99d-0666-471f-b9c9-482c5570537a</uuid>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <name>instance-00000017</name>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <nova:name>tempest-VolumesSnapshotTestJSON-instance-1210779781</nova:name>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:22:43</nova:creationTime>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:user uuid="95b4d334bdca4149b6fe3499375d46e6">tempest-VolumesSnapshotTestJSON-524347860-project-member</nova:user>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:project uuid="75b261e8b1c44ab8b079f57244a812c7">tempest-VolumesSnapshotTestJSON-524347860</nova:project>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <nova:port uuid="0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5">
Dec 13 04:22:44 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <system>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <entry name="serial">6f73a99d-0666-471f-b9c9-482c5570537a</entry>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <entry name="uuid">6f73a99d-0666-471f-b9c9-482c5570537a</entry>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </system>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <os>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   </os>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <features>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   </features>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/6f73a99d-0666-471f-b9c9-482c5570537a_disk">
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       </source>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/6f73a99d-0666-471f-b9c9-482c5570537a_disk.config">
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       </source>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:22:44 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:28:a9:96"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <target dev="tap0d5a15fc-d4"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/console.log" append="off"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <video>
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </video>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:22:44 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:22:44 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:22:44 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:22:44 compute-0 nova_compute[243704]: </domain>
Dec 13 04:22:44 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.716 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Preparing to wait for external event network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.716 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.716 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.717 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.717 243708 DEBUG nova.virt.libvirt.vif [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1210779781',display_name='tempest-VolumesSnapshotTestJSON-instance-1210779781',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1210779781',id=23,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGdWShGngG1T60fxtDCyZF/4g+yzZ0OcQNVbjGT5BNenMaU0rt/YT14rk4/InRXk6hLTctywg1ltUyOI+mrNuVvkteM3YlWW0l7NqxX4eJUcgQ1jtCfb6tS+4wG7D8mWrg==',key_name='tempest-keypair-110734835',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='75b261e8b1c44ab8b079f57244a812c7',ramdisk_id='',reservation_id='r-qykwdpy6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-VolumesSnapshotTestJSON-524347860',owner_user_name='tempest-VolumesSnapshotTestJSON-524347860-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:22:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95b4d334bdca4149b6fe3499375d46e6',uuid=6f73a99d-0666-471f-b9c9-482c5570537a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.718 243708 DEBUG nova.network.os_vif_util [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converting VIF {"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.718 243708 DEBUG nova.network.os_vif_util [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:a9:96,bridge_name='br-int',has_traffic_filtering=True,id=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d5a15fc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.719 243708 DEBUG os_vif [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:a9:96,bridge_name='br-int',has_traffic_filtering=True,id=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d5a15fc-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.720 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.720 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.721 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.725 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.725 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0d5a15fc-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.725 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0d5a15fc-d4, col_values=(('external_ids', {'iface-id': '0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:a9:96', 'vm-uuid': '6f73a99d-0666-471f-b9c9-482c5570537a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.727 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:44 compute-0 NetworkManager[48899]: <info>  [1765599764.7286] manager: (tap0d5a15fc-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.731 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.734 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:44 compute-0 nova_compute[243704]: 2025-12-13 04:22:44.735 243708 INFO os_vif [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:a9:96,bridge_name='br-int',has_traffic_filtering=True,id=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d5a15fc-d4')
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.148 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.148 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.149 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No VIF found with MAC fa:16:3e:28:a9:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.149 243708 INFO nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Using config drive
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.173 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.275 243708 DEBUG nova.network.neutron [-] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.293 243708 INFO nova.compute.manager [-] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Took 3.42 seconds to deallocate network for instance.
Dec 13 04:22:45 compute-0 ceph-mon[75071]: pgmap v1555: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 68 op/s
Dec 13 04:22:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/444447549' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.468 243708 DEBUG nova.compute.manager [req-f9bd450b-b08b-488b-a140-1e9e364994bc req-daabd642-55c8-4137-8852-ea5b92d77567 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Received event network-vif-deleted-81069290-53e4-4b72-85b9-c14104457590 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.659 243708 INFO nova.compute.manager [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Took 0.37 seconds to detach 1 volumes for instance.
Dec 13 04:22:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:22:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3966681998' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:22:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3966681998' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.714 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.714 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.719 243708 INFO nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Creating config drive at /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/disk.config
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.726 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpheed9pux execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.749 243708 DEBUG nova.network.neutron [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Updated VIF entry in instance network info cache for port 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.750 243708 DEBUG nova.network.neutron [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Updating instance_info_cache with network_info: [{"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.771 243708 DEBUG oslo_concurrency.lockutils [req-5b59f042-eefa-4422-8942-c1e39b97ca2a req-cfecb448-6543-41d5-a1a7-b7ab012d41b8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:22:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.820 243708 DEBUG oslo_concurrency.processutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.854 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpheed9pux" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.876 243708 DEBUG nova.storage.rbd_utils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] rbd image 6f73a99d-0666-471f-b9c9-482c5570537a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:22:45 compute-0 nova_compute[243704]: 2025-12-13 04:22:45.879 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/disk.config 6f73a99d-0666-471f-b9c9-482c5570537a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.028 243708 DEBUG oslo_concurrency.processutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/disk.config 6f73a99d-0666-471f-b9c9-482c5570537a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.029 243708 INFO nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Deleting local config drive /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a/disk.config because it was imported into RBD.
Dec 13 04:22:46 compute-0 kernel: tap0d5a15fc-d4: entered promiscuous mode
Dec 13 04:22:46 compute-0 ovn_controller[145204]: 2025-12-13T04:22:46Z|00212|binding|INFO|Claiming lport 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 for this chassis.
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.084 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:46 compute-0 ovn_controller[145204]: 2025-12-13T04:22:46Z|00213|binding|INFO|0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5: Claiming fa:16:3e:28:a9:96 10.100.0.7
Dec 13 04:22:46 compute-0 NetworkManager[48899]: <info>  [1765599766.0876] manager: (tap0d5a15fc-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.093 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:a9:96 10.100.0.7'], port_security=['fa:16:3e:28:a9:96 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f73a99d-0666-471f-b9c9-482c5570537a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75b261e8b1c44ab8b079f57244a812c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '73593bbb-4e2f-451d-b5a6-72524bf63628', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d2e886-04ee-44a8-8e42-fd2f33ff96d6, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.094 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 in datapath 0f93b436-b78f-4a08-8363-5ff70f1f85b9 bound to our chassis
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.095 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0f93b436-b78f-4a08-8363-5ff70f1f85b9
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.109 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[66672105-5221-4a1a-9258-3b44de7ba76a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.110 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0f93b436-b1 in ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:22:46 compute-0 ovn_controller[145204]: 2025-12-13T04:22:46Z|00214|binding|INFO|Setting lport 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 ovn-installed in OVS
Dec 13 04:22:46 compute-0 ovn_controller[145204]: 2025-12-13T04:22:46Z|00215|binding|INFO|Setting lport 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 up in Southbound
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.115 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.115 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0f93b436-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.116 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5efe2233-7ce7-4223-ba72-666d31f42f14]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.117 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.117 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[16ce3179-f334-4ef5-9bb2-2184fd0db28c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.134 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[54c34fd5-38f5-41bc-8f5f-4a91730653c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 systemd-machined[206767]: New machine qemu-23-instance-00000017.
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.150 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fd11f43d-a937-4c56-877c-b72b683c206d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 systemd[1]: Started Virtual Machine qemu-23-instance-00000017.
Dec 13 04:22:46 compute-0 systemd-udevd[272678]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.185 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[3c24a97b-6db0-49ff-8528-b9bdb970db2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 systemd-udevd[272680]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.191 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0edd5daf-67b0-4c18-8c46-f505d916ad83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 NetworkManager[48899]: <info>  [1765599766.1986] manager: (tap0f93b436-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Dec 13 04:22:46 compute-0 NetworkManager[48899]: <info>  [1765599766.1997] device (tap0d5a15fc-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:22:46 compute-0 NetworkManager[48899]: <info>  [1765599766.2037] device (tap0d5a15fc-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.230 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[8d7d6d85-7821-45b3-83a5-1cee0b1242d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.234 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[986cfd87-5ee2-4c12-b6e9-6eef7cd44fbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 NetworkManager[48899]: <info>  [1765599766.2595] device (tap0f93b436-b0): carrier: link connected
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.264 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[888afc32-412b-4c67-932d-573fca0ea006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.279 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b54497-13de-4478-bd88-a587d2193795]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f93b436-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:a1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450493, 'reachable_time': 27719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 272706, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.297 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d1485411-019c-4bdd-822a-c286993e38e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:a1e4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 450493, 'tstamp': 450493}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 272707, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.315 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ba5842c2-5f85-47e6-8c10-ce807125e96d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0f93b436-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:a1:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 73], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450493, 'reachable_time': 27719, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 272708, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.328 243708 DEBUG nova.compute.manager [req-c29c80ad-db1c-4359-a5da-70cc2701a903 req-a8463694-1623-40b9-9414-63f1d4fdd08f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.328 243708 DEBUG oslo_concurrency.lockutils [req-c29c80ad-db1c-4359-a5da-70cc2701a903 req-a8463694-1623-40b9-9414-63f1d4fdd08f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.328 243708 DEBUG oslo_concurrency.lockutils [req-c29c80ad-db1c-4359-a5da-70cc2701a903 req-a8463694-1623-40b9-9414-63f1d4fdd08f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.329 243708 DEBUG oslo_concurrency.lockutils [req-c29c80ad-db1c-4359-a5da-70cc2701a903 req-a8463694-1623-40b9-9414-63f1d4fdd08f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.329 243708 DEBUG nova.compute.manager [req-c29c80ad-db1c-4359-a5da-70cc2701a903 req-a8463694-1623-40b9-9414-63f1d4fdd08f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Processing event network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.345 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4eb33c27-42a8-4415-bade-afcac251ddb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:22:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3482663357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.400 243708 DEBUG oslo_concurrency.processutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.581s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.405 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[130d4d88-37fc-4374-8d03-c87494e4cb8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.406 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f93b436-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.406 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.407 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f93b436-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.407 243708 DEBUG nova.compute.provider_tree [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:22:46 compute-0 NetworkManager[48899]: <info>  [1765599766.4095] manager: (tap0f93b436-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.409 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:46 compute-0 kernel: tap0f93b436-b0: entered promiscuous mode
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.413 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0f93b436-b0, col_values=(('external_ids', {'iface-id': '33b3b6f8-467a-4e08-8d35-798a9ec0adcc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:22:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3966681998' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:22:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3966681998' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:22:46 compute-0 ceph-mon[75071]: pgmap v1556: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 13 04:22:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3482663357' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:22:46 compute-0 ovn_controller[145204]: 2025-12-13T04:22:46Z|00216|binding|INFO|Releasing lport 33b3b6f8-467a-4e08-8d35-798a9ec0adcc from this chassis (sb_readonly=0)
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.416 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.417 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0f93b436-b78f-4a08-8363-5ff70f1f85b9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0f93b436-b78f-4a08-8363-5ff70f1f85b9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.427 243708 DEBUG nova.scheduler.client.report [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.428 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c4842ef4-95a6-43f3-a209-db55540a991c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.430 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.431 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-0f93b436-b78f-4a08-8363-5ff70f1f85b9
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/0f93b436-b78f-4a08-8363-5ff70f1f85b9.pid.haproxy
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 0f93b436-b78f-4a08-8363-5ff70f1f85b9
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:22:46 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:22:46.433 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'env', 'PROCESS_TAG=haproxy-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0f93b436-b78f-4a08-8363-5ff70f1f85b9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.454 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.509 243708 INFO nova.scheduler.client.report [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Deleted allocations for instance a1553556-dd0c-4271-b7de-2c5739155591
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.623 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599766.6235337, 6f73a99d-0666-471f-b9c9-482c5570537a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.624 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] VM Started (Lifecycle Event)
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.626 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.629 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.632 243708 INFO nova.virt.libvirt.driver [-] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Instance spawned successfully.
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.633 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.644 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.649 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.652 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.652 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.653 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.653 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.654 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.654 243708 DEBUG nova.virt.libvirt.driver [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.674 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.675 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599766.624458, 6f73a99d-0666-471f-b9c9-482c5570537a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.675 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] VM Paused (Lifecycle Event)
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.687 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.690 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599766.628868, 6f73a99d-0666-471f-b9c9-482c5570537a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.690 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] VM Resumed (Lifecycle Event)
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.712 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.715 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:22:46 compute-0 nova_compute[243704]: 2025-12-13 04:22:46.787 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:22:46 compute-0 podman[272784]: 2025-12-13 04:22:46.815623393 +0000 UTC m=+0.020849616 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:22:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:22:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3687677017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 13 04:22:48 compute-0 nova_compute[243704]: 2025-12-13 04:22:48.069 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e394 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3687677017' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:22:49 compute-0 nova_compute[243704]: 2025-12-13 04:22:49.256 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:22:49 compute-0 nova_compute[243704]: 2025-12-13 04:22:49.474 243708 DEBUG oslo_concurrency.lockutils [None req-c438cd74-7941-45db-86e2-e843fcd0e43d 550f7240611f4009aa1ef70200760184 4d091687ce954cb1b60b66f0e250a2a6 - - default default] Lock "a1553556-dd0c-4271-b7de-2c5739155591" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:49 compute-0 nova_compute[243704]: 2025-12-13 04:22:49.728 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.0 MiB/s wr, 147 op/s
Dec 13 04:22:50 compute-0 podman[272796]: 2025-12-13 04:22:50.000675634 +0000 UTC m=+2.140958932 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Dec 13 04:22:50 compute-0 podman[272784]: 2025-12-13 04:22:50.100672856 +0000 UTC m=+3.305899069 container create dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.107 243708 DEBUG nova.compute.manager [req-5b655725-f0e7-44a7-965b-316d170a355b req-09bbcbc9-82e3-4c96-af6f-b5e5551c3ad1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.109 243708 DEBUG oslo_concurrency.lockutils [req-5b655725-f0e7-44a7-965b-316d170a355b req-09bbcbc9-82e3-4c96-af6f-b5e5551c3ad1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.109 243708 DEBUG oslo_concurrency.lockutils [req-5b655725-f0e7-44a7-965b-316d170a355b req-09bbcbc9-82e3-4c96-af6f-b5e5551c3ad1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.111 243708 DEBUG oslo_concurrency.lockutils [req-5b655725-f0e7-44a7-965b-316d170a355b req-09bbcbc9-82e3-4c96-af6f-b5e5551c3ad1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.111 243708 DEBUG nova.compute.manager [req-5b655725-f0e7-44a7-965b-316d170a355b req-09bbcbc9-82e3-4c96-af6f-b5e5551c3ad1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] No waiting events found dispatching network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.111 243708 WARNING nova.compute.manager [req-5b655725-f0e7-44a7-965b-316d170a355b req-09bbcbc9-82e3-4c96-af6f-b5e5551c3ad1 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received unexpected event network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 for instance with vm_state building and task_state spawning.
Dec 13 04:22:50 compute-0 ceph-mon[75071]: pgmap v1557: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 73 op/s
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.121 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Triggering sync for uuid 6f73a99d-0666-471f-b9c9-482c5570537a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 04:22:50 compute-0 nova_compute[243704]: 2025-12-13 04:22:50.123 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:22:50 compute-0 systemd[1]: Started libpod-conmon-dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8.scope.
Dec 13 04:22:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:22:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bb438e18c54d3420a4b6c8b1d1d062819862dd0b0bbd139dc66efd39e278f01/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:22:50 compute-0 podman[272784]: 2025-12-13 04:22:50.405673476 +0000 UTC m=+3.610899729 container init dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec 13 04:22:50 compute-0 podman[272784]: 2025-12-13 04:22:50.41318654 +0000 UTC m=+3.618412743 container start dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 04:22:50 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[272820]: [NOTICE]   (272825) : New worker (272827) forked
Dec 13 04:22:50 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[272820]: [NOTICE]   (272825) : Loading success.
Dec 13 04:22:51 compute-0 nova_compute[243704]: 2025-12-13 04:22:51.115 243708 INFO nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Took 13.25 seconds to spawn the instance on the hypervisor.
Dec 13 04:22:51 compute-0 nova_compute[243704]: 2025-12-13 04:22:51.117 243708 DEBUG nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e394 do_prune osdmap full prune enabled
Dec 13 04:22:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e395 e395: 3 total, 3 up, 3 in
Dec 13 04:22:51 compute-0 nova_compute[243704]: 2025-12-13 04:22:51.199 243708 INFO nova.compute.manager [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Took 14.27 seconds to build instance.
Dec 13 04:22:51 compute-0 ceph-mon[75071]: pgmap v1558: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.0 MiB/s wr, 147 op/s
Dec 13 04:22:51 compute-0 nova_compute[243704]: 2025-12-13 04:22:51.252 243708 DEBUG oslo_concurrency.lockutils [None req-508af1a5-2998-43d7-bdeb-e711c8cea875 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.386s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:51 compute-0 nova_compute[243704]: 2025-12-13 04:22:51.253 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 1.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:22:51 compute-0 nova_compute[243704]: 2025-12-13 04:22:51.253 243708 INFO nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:22:51 compute-0 nova_compute[243704]: 2025-12-13 04:22:51.253 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:22:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e395: 3 total, 3 up, 3 in
Dec 13 04:22:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 222 KiB/s wr, 155 op/s
Dec 13 04:22:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e395 do_prune osdmap full prune enabled
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003545699567000537 of space, bias 1.0, pg target 0.10637098701001611 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.03446866985161323 of space, bias 1.0, pg target 10.34060095548397 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0003483014126312083 of space, bias 1.0, pg target 0.10100740966305041 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000666559848684798 of space, bias 1.0, pg target 0.1933023561185914 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.015774617012244e-06 of space, bias 4.0, pg target 0.0011782985557342032 quantized to 16 (current 16)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:22:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Dec 13 04:22:52 compute-0 ceph-mon[75071]: osdmap e395: 3 total, 3 up, 3 in
Dec 13 04:22:52 compute-0 podman[272836]: 2025-12-13 04:22:52.926173788 +0000 UTC m=+0.071670575 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:22:53 compute-0 nova_compute[243704]: 2025-12-13 04:22:53.085 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e396 e396: 3 total, 3 up, 3 in
Dec 13 04:22:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e396: 3 total, 3 up, 3 in
Dec 13 04:22:53 compute-0 ceph-mon[75071]: pgmap v1560: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.3 MiB/s rd, 222 KiB/s wr, 155 op/s
Dec 13 04:22:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1562: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.1 MiB/s wr, 160 op/s
Dec 13 04:22:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:54 compute-0 nova_compute[243704]: 2025-12-13 04:22:54.731 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1563: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.2 MiB/s wr, 196 op/s
Dec 13 04:22:56 compute-0 nova_compute[243704]: 2025-12-13 04:22:56.503 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599761.501985, a1553556-dd0c-4271-b7de-2c5739155591 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:22:56 compute-0 nova_compute[243704]: 2025-12-13 04:22:56.504 243708 INFO nova.compute.manager [-] [instance: a1553556-dd0c-4271-b7de-2c5739155591] VM Stopped (Lifecycle Event)
Dec 13 04:22:56 compute-0 nova_compute[243704]: 2025-12-13 04:22:56.520 243708 DEBUG nova.compute.manager [None req-74fae728-1942-483a-99e6-37a9169c1b77 - - - - - -] [instance: a1553556-dd0c-4271-b7de-2c5739155591] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:22:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1564: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 74 op/s
Dec 13 04:22:58 compute-0 nova_compute[243704]: 2025-12-13 04:22:58.086 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e396 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:22:59 compute-0 nova_compute[243704]: 2025-12-13 04:22:59.733 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:22:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1565: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 80 op/s
Dec 13 04:22:59 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.175546169s, txc = 0x55a1f8d8a300, txc bytes = 1334, txc ios = 1, txc cost = 671334, txc onodes = 1, DB updates = 3, DB bytes = 1093, cost max = 113664540 on 2025-12-13T04:19:14.468832+0000, txc max = 100 on 2025-12-13T03:44:45.082459+0000
Dec 13 04:22:59 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.175583839s
Dec 13 04:22:59 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.175583839s
Dec 13 04:23:00 compute-0 ceph-mon[75071]: osdmap e396: 3 total, 3 up, 3 in
Dec 13 04:23:00 compute-0 ceph-mon[75071]: pgmap v1562: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.1 MiB/s wr, 160 op/s
Dec 13 04:23:01 compute-0 nova_compute[243704]: 2025-12-13 04:23:01.722 243708 DEBUG nova.compute.manager [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-changed-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:01 compute-0 nova_compute[243704]: 2025-12-13 04:23:01.722 243708 DEBUG nova.compute.manager [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Refreshing instance network info cache due to event network-changed-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:23:01 compute-0 nova_compute[243704]: 2025-12-13 04:23:01.723 243708 DEBUG oslo_concurrency.lockutils [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:23:01 compute-0 nova_compute[243704]: 2025-12-13 04:23:01.723 243708 DEBUG oslo_concurrency.lockutils [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:23:01 compute-0 nova_compute[243704]: 2025-12-13 04:23:01.723 243708 DEBUG nova.network.neutron [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Refreshing network info cache for port 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:23:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1566: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 68 op/s
Dec 13 04:23:02 compute-0 ceph-mon[75071]: pgmap v1563: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 6.1 MiB/s rd, 3.2 MiB/s wr, 196 op/s
Dec 13 04:23:02 compute-0 ceph-mon[75071]: pgmap v1564: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.2 MiB/s rd, 3.2 MiB/s wr, 74 op/s
Dec 13 04:23:02 compute-0 ceph-mon[75071]: pgmap v1565: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.0 MiB/s wr, 80 op/s
Dec 13 04:23:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e396 do_prune osdmap full prune enabled
Dec 13 04:23:03 compute-0 ceph-mon[75071]: pgmap v1566: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 58 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.5 MiB/s wr, 68 op/s
Dec 13 04:23:03 compute-0 nova_compute[243704]: 2025-12-13 04:23:03.088 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e397 e397: 3 total, 3 up, 3 in
Dec 13 04:23:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e397: 3 total, 3 up, 3 in
Dec 13 04:23:04 compute-0 nova_compute[243704]: 2025-12-13 04:23:04.132 243708 DEBUG nova.network.neutron [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Updated VIF entry in instance network info cache for port 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:23:04 compute-0 nova_compute[243704]: 2025-12-13 04:23:04.132 243708 DEBUG nova.network.neutron [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Updating instance_info_cache with network_info: [{"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:23:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1568: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 76 op/s
Dec 13 04:23:04 compute-0 ovn_controller[145204]: 2025-12-13T04:23:04Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:a9:96 10.100.0.7
Dec 13 04:23:04 compute-0 ovn_controller[145204]: 2025-12-13T04:23:04Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:a9:96 10.100.0.7
Dec 13 04:23:04 compute-0 nova_compute[243704]: 2025-12-13 04:23:04.737 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e397 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:04 compute-0 ceph-mon[75071]: osdmap e397: 3 total, 3 up, 3 in
Dec 13 04:23:05 compute-0 nova_compute[243704]: 2025-12-13 04:23:05.098 243708 DEBUG oslo_concurrency.lockutils [req-2a2fd675-ff9b-49ec-b62f-a9833feac8ef req-66e27b2f-5799-4d80-8751-06e7496b5298 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-6f73a99d-0666-471f-b9c9-482c5570537a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:23:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1569: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 403 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Dec 13 04:23:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [WRN] : Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 04:23:06 compute-0 ceph-mon[75071]: pgmap v1568: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.6 MiB/s wr, 76 op/s
Dec 13 04:23:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1570: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 403 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Dec 13 04:23:08 compute-0 nova_compute[243704]: 2025-12-13 04:23:08.107 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:08 compute-0 ceph-mon[75071]: pgmap v1569: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 403 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Dec 13 04:23:08 compute-0 ceph-mon[75071]: Health check failed: 1 OSD(s) experiencing slow operations in BlueStore (BLUESTORE_SLOW_OP_ALERT)
Dec 13 04:23:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e397 do_prune osdmap full prune enabled
Dec 13 04:23:09 compute-0 nova_compute[243704]: 2025-12-13 04:23:09.740 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e398 e398: 3 total, 3 up, 3 in
Dec 13 04:23:09 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e398: 3 total, 3 up, 3 in
Dec 13 04:23:09 compute-0 ceph-mon[75071]: pgmap v1570: 305 pgs: 305 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 403 KiB/s rd, 2.6 MiB/s wr, 94 op/s
Dec 13 04:23:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1572: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 511 KiB/s rd, 17 MiB/s wr, 181 op/s
Dec 13 04:23:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e398 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e398 do_prune osdmap full prune enabled
Dec 13 04:23:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e399 e399: 3 total, 3 up, 3 in
Dec 13 04:23:09 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e399: 3 total, 3 up, 3 in
Dec 13 04:23:09 compute-0 podman[272856]: 2025-12-13 04:23:09.964063512 +0000 UTC m=+0.111526965 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:23:11 compute-0 ceph-mon[75071]: osdmap e398: 3 total, 3 up, 3 in
Dec 13 04:23:11 compute-0 ceph-mon[75071]: pgmap v1572: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 511 KiB/s rd, 17 MiB/s wr, 181 op/s
Dec 13 04:23:11 compute-0 ceph-mon[75071]: osdmap e399: 3 total, 3 up, 3 in
Dec 13 04:23:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1574: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 418 KiB/s rd, 16 MiB/s wr, 133 op/s
Dec 13 04:23:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:12 compute-0 ceph-mon[75071]: pgmap v1574: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 418 KiB/s rd, 16 MiB/s wr, 133 op/s
Dec 13 04:23:13 compute-0 nova_compute[243704]: 2025-12-13 04:23:13.110 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1575: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 51 KiB/s rd, 14 MiB/s wr, 81 op/s
Dec 13 04:23:14 compute-0 nova_compute[243704]: 2025-12-13 04:23:14.741 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e399 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e399 do_prune osdmap full prune enabled
Dec 13 04:23:15 compute-0 ceph-mon[75071]: pgmap v1575: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 51 KiB/s rd, 14 MiB/s wr, 81 op/s
Dec 13 04:23:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e400 e400: 3 total, 3 up, 3 in
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.073 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.074 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e400: 3 total, 3 up, 3 in
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.133 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:23:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:23:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Cumulative writes: 6889 writes, 31K keys, 6889 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 6889 writes, 6889 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1938 writes, 9049 keys, 1938 commit groups, 1.0 writes per commit group, ingest: 11.95 MB, 0.02 MB/s
                                           Interval WAL: 1938 writes, 1938 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    104.4      0.34              0.11        16    0.021       0      0       0.0       0.0
                                             L6      1/0    9.24 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5     96.4     79.5      1.53              0.42        15    0.102     74K   8456       0.0       0.0
                                            Sum      1/0    9.24 MB   0.0      0.1     0.0      0.1       0.2      0.0       0.0   4.5     79.1     84.0      1.87              0.53        31    0.060     74K   8456       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.9     80.7     83.7      0.59              0.22         8    0.074     24K   2638       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0     96.4     79.5      1.53              0.42        15    0.102     74K   8456       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    124.9      0.28              0.11        15    0.019       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 2400.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.034, interval 0.010
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.15 GB write, 0.07 MB/s write, 0.14 GB read, 0.06 MB/s read, 1.9 seconds
                                           Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.6 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556f7ce578d0#2 capacity: 304.00 MB usage: 16.13 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000121 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1025,15.51 MB,5.10319%) FilterBlock(32,215.30 KB,0.0691615%) IndexBlock(32,420.33 KB,0.135025%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.208 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.208 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.215 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.216 243708 INFO nova.compute.claims [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.339 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1577: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 23 KiB/s rd, 3.3 KiB/s wr, 36 op/s
Dec 13 04:23:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3956375986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.854 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.861 243708 DEBUG nova.compute.provider_tree [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.877 243708 DEBUG nova.scheduler.client.report [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.905 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.906 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.951 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.951 243708 DEBUG nova.network.neutron [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.975 243708 INFO nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:23:15 compute-0 nova_compute[243704]: 2025-12-13 04:23:15.998 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.063 243708 INFO nova.virt.block_device [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Booting with volume 40bce5a4-eae7-4730-9957-885a9d458715 at /dev/vda
Dec 13 04:23:16 compute-0 ceph-mon[75071]: osdmap e400: 3 total, 3 up, 3 in
Dec 13 04:23:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3956375986' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.203 243708 DEBUG nova.policy [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '439e16bdacdd484cbdfe5b2ff762e327', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ad8ea73576b4cf9aad3a876effca617', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.243 243708 DEBUG os_brick.utils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.245 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.255 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.256 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[bddae85d-90d9-4793-961e-e4cdf6b74db5]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.257 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.266 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.266 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[79831fa0-a913-4736-8172-f5db04784d35]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.268 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.278 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.279 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[196956f2-6ed0-42e4-a106-3c4e06c5c27b]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.280 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[af5af811-f869-48c9-8b18-e45e6454ecbf]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.280 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.306 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.310 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.311 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.311 243708 DEBUG os_brick.initiator.connectors.lightos [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.312 243708 DEBUG os_brick.utils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] <== get_connector_properties: return (67ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:23:16 compute-0 nova_compute[243704]: 2025-12-13 04:23:16.312 243708 DEBUG nova.virt.block_device [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Updating existing volume attachment record: c30ff0e2-8de5-4ad1-aaac-a9f38c7e3c25 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:23:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:23:17 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1454752590' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1578: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 17 KiB/s rd, 2.5 KiB/s wr, 27 op/s
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.146 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:18 compute-0 ceph-mon[75071]: pgmap v1577: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 23 KiB/s rd, 3.3 KiB/s wr, 36 op/s
Dec 13 04:23:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1454752590' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.455 243708 DEBUG oslo_concurrency.lockutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.456 243708 DEBUG oslo_concurrency.lockutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.490 243708 DEBUG nova.objects.instance [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'flavor' on Instance uuid 6f73a99d-0666-471f-b9c9-482c5570537a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.511 243708 INFO nova.virt.libvirt.driver [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Ignoring supplied device name: /dev/vdb
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.534 243708 DEBUG oslo_concurrency.lockutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.687 243708 DEBUG nova.network.neutron [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Successfully created port: 8e70c806-9e71-427e-bba7-1012c1cdd700 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.737 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.824 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.827 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.828 243708 INFO nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Creating image(s)
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.829 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.829 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Ensure instance console log exists: /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.830 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.830 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.830 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.898 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.898 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.898 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.898 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.899 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.965 243708 DEBUG oslo_concurrency.lockutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.966 243708 DEBUG oslo_concurrency.lockutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:18 compute-0 nova_compute[243704]: 2025-12-13 04:23:18.966 243708 INFO nova.compute.manager [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Attaching volume 79eb622a-1c5f-491e-a36b-9f26c7b645f6 to /dev/vdb
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.148 243708 DEBUG os_brick.utils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.150 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.164 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.164 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[cda56895-d3a6-4234-b6d6-15af7cdcfded]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.166 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.176 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.177 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[9d72424c-ca06-4d50-9470-02372fe19b28]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.179 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.189 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.190 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[cba2afc1-4a2f-40a5-bcc7-9e5853830043]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.191 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[1aba91aa-a711-4a95-aa27-f73c6f456a04]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.192 243708 DEBUG oslo_concurrency.processutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.227 243708 DEBUG oslo_concurrency.processutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "nvme version" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.232 243708 DEBUG os_brick.initiator.connectors.lightos [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.233 243708 DEBUG os_brick.initiator.connectors.lightos [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.233 243708 DEBUG os_brick.initiator.connectors.lightos [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.234 243708 DEBUG os_brick.utils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] <== get_connector_properties: return (84ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.235 243708 DEBUG nova.virt.block_device [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Updating existing volume attachment record: c69c3dde-08e2-4b4e-8bac-d06a33aa873b _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:23:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:23:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3514092233' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:19 compute-0 ceph-mon[75071]: pgmap v1578: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 17 KiB/s rd, 2.5 KiB/s wr, 27 op/s
Dec 13 04:23:19 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3514092233' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1163324115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.437 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.523 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.523 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000017 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.703 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.704 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4122MB free_disk=59.94237413443625GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.704 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.704 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.744 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.782 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 6f73a99d-0666-471f-b9c9-482c5570537a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.782 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 37541a77-deda-4940-b361-9e66c7baaf39 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.783 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.783 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:23:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1579: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 33 KiB/s rd, 8.1 KiB/s wr, 47 op/s
Dec 13 04:23:19 compute-0 nova_compute[243704]: 2025-12-13 04:23:19.843 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e400 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e400 do_prune osdmap full prune enabled
Dec 13 04:23:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e401 e401: 3 total, 3 up, 3 in
Dec 13 04:23:19 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e401: 3 total, 3 up, 3 in
Dec 13 04:23:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:23:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2646525660' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.249 243708 DEBUG nova.objects.instance [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'flavor' on Instance uuid 6f73a99d-0666-471f-b9c9-482c5570537a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.278 243708 DEBUG nova.virt.libvirt.driver [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Attempting to attach volume 79eb622a-1c5f-491e-a36b-9f26c7b645f6 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.281 243708 DEBUG nova.virt.libvirt.guest [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:23:20 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:23:20 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-79eb622a-1c5f-491e-a36b-9f26c7b645f6">
Dec 13 04:23:20 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:23:20 compute-0 nova_compute[243704]:   </source>
Dec 13 04:23:20 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:23:20 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:23:20 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:23:20 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:23:20 compute-0 nova_compute[243704]:   <serial>79eb622a-1c5f-491e-a36b-9f26c7b645f6</serial>
Dec 13 04:23:20 compute-0 nova_compute[243704]: </disk>
Dec 13 04:23:20 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.282 243708 DEBUG nova.network.neutron [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Successfully updated port: 8e70c806-9e71-427e-bba7-1012c1cdd700 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.310 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.311 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquired lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.311 243708 DEBUG nova.network.neutron [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.374 243708 DEBUG nova.compute.manager [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-changed-8e70c806-9e71-427e-bba7-1012c1cdd700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.374 243708 DEBUG nova.compute.manager [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Refreshing instance network info cache due to event network-changed-8e70c806-9e71-427e-bba7-1012c1cdd700. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.375 243708 DEBUG oslo_concurrency.lockutils [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.447 243708 DEBUG nova.network.neutron [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:23:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/25199510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.492 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.499 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.516 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.567 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:23:20 compute-0 nova_compute[243704]: 2025-12-13 04:23:20.568 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.863s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1163324115' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:20 compute-0 ceph-mon[75071]: osdmap e401: 3 total, 3 up, 3 in
Dec 13 04:23:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2646525660' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:20 compute-0 podman[272979]: 2025-12-13 04:23:20.923131786 +0000 UTC m=+0.062805799 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.171 243708 DEBUG nova.network.neutron [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Updating instance_info_cache with network_info: [{"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.284 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Releasing lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.284 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Instance network_info: |[{"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.285 243708 DEBUG oslo_concurrency.lockutils [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.285 243708 DEBUG nova.network.neutron [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Refreshing network info cache for port 8e70c806-9e71-427e-bba7-1012c1cdd700 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.289 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Start _get_guest_xml network_info=[{"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-40bce5a4-eae7-4730-9957-885a9d458715', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '40bce5a4-eae7-4730-9957-885a9d458715', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '37541a77-deda-4940-b361-9e66c7baaf39', 'attached_at': '', 'detached_at': '', 'volume_id': '40bce5a4-eae7-4730-9957-885a9d458715', 'serial': '40bce5a4-eae7-4730-9957-885a9d458715'}, 'disk_bus': 'virtio', 'attachment_id': 'c30ff0e2-8de5-4ad1-aaac-a9f38c7e3c25', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.298 243708 WARNING nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.308 243708 DEBUG nova.virt.libvirt.host [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.310 243708 DEBUG nova.virt.libvirt.host [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.313 243708 DEBUG nova.virt.libvirt.host [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.313 243708 DEBUG nova.virt.libvirt.host [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.314 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.314 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.314 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.315 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.315 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.315 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.315 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.315 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.315 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.316 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.316 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:23:21 compute-0 nova_compute[243704]: 2025-12-13 04:23:21.316 243708 DEBUG nova.virt.hardware [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:23:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1581: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 38 KiB/s rd, 8.2 KiB/s wr, 51 op/s
Dec 13 04:23:21 compute-0 ceph-mon[75071]: pgmap v1579: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 33 KiB/s rd, 8.1 KiB/s wr, 47 op/s
Dec 13 04:23:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/25199510' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.714 243708 DEBUG nova.storage.rbd_utils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 37541a77-deda-4940-b361-9e66c7baaf39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.720 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.761 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.762 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.786 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.787 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.787 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:22 compute-0 nova_compute[243704]: 2025-12-13 04:23:22.788 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.070 243708 DEBUG nova.virt.libvirt.driver [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.071 243708 DEBUG nova.virt.libvirt.driver [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.072 243708 DEBUG nova.virt.libvirt.driver [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.073 243708 DEBUG nova.virt.libvirt.driver [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] No VIF found with MAC fa:16:3e:28:a9:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:23:23 compute-0 ceph-mon[75071]: pgmap v1581: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 38 KiB/s rd, 8.2 KiB/s wr, 51 op/s
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.150 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.272145) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599803272216, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2579, "num_deletes": 521, "total_data_size": 3590988, "memory_usage": 3655968, "flush_reason": "Manual Compaction"}
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Dec 13 04:23:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:23:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2424958722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.320 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.599s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.335 243708 DEBUG oslo_concurrency.lockutils [None req-8bcde545-981a-407d-a555-b6ac4e3736ab 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 4.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599803373905, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3524746, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29249, "largest_seqno": 31826, "table_properties": {"data_size": 3513156, "index_size": 7124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 27176, "raw_average_key_size": 20, "raw_value_size": 3487906, "raw_average_value_size": 2599, "num_data_blocks": 310, "num_entries": 1342, "num_filter_entries": 1342, "num_deletions": 521, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599606, "oldest_key_time": 1765599606, "file_creation_time": 1765599803, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 101852 microseconds, and 8765 cpu microseconds.
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.373988) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3524746 bytes OK
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.374018) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.443010) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.443071) EVENT_LOG_v1 {"time_micros": 1765599803443060, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.443096) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3578918, prev total WAL file size 3578918, number of live WAL files 2.
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.444275) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3442KB)], [62(9465KB)]
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599803444345, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 13217266, "oldest_snapshot_seqno": -1}
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.494 243708 DEBUG os_brick.encryptors [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Using volume encryption metadata '{'encryption_key_id': '913ca98c-6f70-4945-a6b1-3a0bfe2ac033', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-40bce5a4-eae7-4730-9957-885a9d458715', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '40bce5a4-eae7-4730-9957-885a9d458715', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '37541a77-deda-4940-b361-9e66c7baaf39', 'attached_at': '', 'detached_at': '', 'volume_id': '40bce5a4-eae7-4730-9957-885a9d458715', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.498 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.517 243708 DEBUG barbicanclient.v1.secrets [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.518 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.553 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.554 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.581 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.582 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.615 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.616 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 6198 keys, 11226808 bytes, temperature: kUnknown
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599803624991, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 11226808, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11178562, "index_size": 31624, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15557, "raw_key_size": 156000, "raw_average_key_size": 25, "raw_value_size": 11060352, "raw_average_value_size": 1784, "num_data_blocks": 1273, "num_entries": 6198, "num_filter_entries": 6198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599803, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.640 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.641 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.625375) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 11226808 bytes
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.656388) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 73.1 rd, 62.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 9.2 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(6.9) write-amplify(3.2) OK, records in: 7249, records dropped: 1051 output_compression: NoCompression
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.656435) EVENT_LOG_v1 {"time_micros": 1765599803656410, "job": 34, "event": "compaction_finished", "compaction_time_micros": 180840, "compaction_time_cpu_micros": 29915, "output_level": 6, "num_output_files": 1, "total_output_size": 11226808, "num_input_records": 7249, "num_output_records": 6198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599803657156, "job": 34, "event": "table_file_deletion", "file_number": 64}
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599803658988, "job": 34, "event": "table_file_deletion", "file_number": 62}
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.444168) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.659106) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.659111) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.659113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.659115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:23:23 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:23:23.659117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.671 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.671 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.691 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.692 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.722 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.723 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.762 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.763 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.784 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.785 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.802 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.802 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1582: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 22 KiB/s rd, 6.9 KiB/s wr, 28 op/s
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.823 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.823 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.848 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.849 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.884 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.884 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.906 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.907 243708 INFO barbicanclient.base [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/913ca98c-6f70-4945-a6b1-3a0bfe2ac033
Dec 13 04:23:23 compute-0 podman[273040]: 2025-12-13 04:23:23.909670664 +0000 UTC m=+0.062562223 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.build-date=20251202, config_id=multipathd, org.label-schema.vendor=CentOS)
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.929 243708 DEBUG barbicanclient.client [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:23:23 compute-0 nova_compute[243704]: 2025-12-13 04:23:23.930 243708 DEBUG nova.virt.libvirt.host [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:23:23 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:23:23 compute-0 nova_compute[243704]:     <volume>40bce5a4-eae7-4730-9957-885a9d458715</volume>
Dec 13 04:23:23 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:23:23 compute-0 nova_compute[243704]: </secret>
Dec 13 04:23:23 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.110 243708 DEBUG nova.network.neutron [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Updated VIF entry in instance network info cache for port 8e70c806-9e71-427e-bba7-1012c1cdd700. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.111 243708 DEBUG nova.network.neutron [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Updating instance_info_cache with network_info: [{"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.126 243708 DEBUG oslo_concurrency.lockutils [req-9ac17c38-5bed-4c5b-b1f7-d4f4bf336b80 req-99e5ff29-e555-4631-825c-58575e07e81d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.746 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e401 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2424958722' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.934 243708 DEBUG nova.virt.libvirt.vif [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1495550951',display_name='tempest-TestEncryptedCinderVolumes-server-1495550951',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1495550951',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMTrppdqhZziQNVB9Yq1F80y48wl+jU8sk3cqpAQJlLHhl2ENknmHCD+TKy3c6EN4z48W8grnbaalYAFotzA564ZRGtO7sXcHNuoXeibeaRHuK7Hykbbohr7xM96Xy2QA==',key_name='tempest-TestEncryptedCinderVolumes-1174624010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-mvvo3fjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:23:16Z,user_data=None,user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=37541a77-deda-4940-b361-9e66c7baaf39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.934 243708 DEBUG nova.network.os_vif_util [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.935 243708 DEBUG nova.network.os_vif_util [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:a4:af,bridge_name='br-int',has_traffic_filtering=True,id=8e70c806-9e71-427e-bba7-1012c1cdd700,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e70c806-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.936 243708 DEBUG nova.objects.instance [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'pci_devices' on Instance uuid 37541a77-deda-4940-b361-9e66c7baaf39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.946 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <uuid>37541a77-deda-4940-b361-9e66c7baaf39</uuid>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <name>instance-00000018</name>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1495550951</nova:name>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:23:21</nova:creationTime>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:user uuid="439e16bdacdd484cbdfe5b2ff762e327">tempest-TestEncryptedCinderVolumes-1691115809-project-member</nova:user>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:project uuid="3ad8ea73576b4cf9aad3a876effca617">tempest-TestEncryptedCinderVolumes-1691115809</nova:project>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <nova:port uuid="8e70c806-9e71-427e-bba7-1012c1cdd700">
Dec 13 04:23:24 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <system>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <entry name="serial">37541a77-deda-4940-b361-9e66c7baaf39</entry>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <entry name="uuid">37541a77-deda-4940-b361-9e66c7baaf39</entry>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </system>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <os>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   </os>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <features>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   </features>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/37541a77-deda-4940-b361-9e66c7baaf39_disk.config">
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </source>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-40bce5a4-eae7-4730-9957-885a9d458715">
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </source>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <serial>40bce5a4-eae7-4730-9957-885a9d458715</serial>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <encryption format="luks">
Dec 13 04:23:24 compute-0 nova_compute[243704]:         <secret type="passphrase" uuid="ec56c00f-3de3-4512-a0fa-a18a604c59c0"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       </encryption>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:82:a4:af"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <target dev="tap8e70c806-9e"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/console.log" append="off"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <video>
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </video>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:23:24 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:23:24 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:23:24 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:23:24 compute-0 nova_compute[243704]: </domain>
Dec 13 04:23:24 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.947 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Preparing to wait for external event network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.947 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.947 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.947 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.948 243708 DEBUG nova.virt.libvirt.vif [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1495550951',display_name='tempest-TestEncryptedCinderVolumes-server-1495550951',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1495550951',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMTrppdqhZziQNVB9Yq1F80y48wl+jU8sk3cqpAQJlLHhl2ENknmHCD+TKy3c6EN4z48W8grnbaalYAFotzA564ZRGtO7sXcHNuoXeibeaRHuK7Hykbbohr7xM96Xy2QA==',key_name='tempest-TestEncryptedCinderVolumes-1174624010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-mvvo3fjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:23:16Z,user_data=None,user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=37541a77-deda-4940-b361-9e66c7baaf39,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.948 243708 DEBUG nova.network.os_vif_util [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.949 243708 DEBUG nova.network.os_vif_util [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:a4:af,bridge_name='br-int',has_traffic_filtering=True,id=8e70c806-9e71-427e-bba7-1012c1cdd700,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e70c806-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.949 243708 DEBUG os_vif [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:a4:af,bridge_name='br-int',has_traffic_filtering=True,id=8e70c806-9e71-427e-bba7-1012c1cdd700,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e70c806-9e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.949 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.950 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.950 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.952 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.952 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e70c806-9e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.952 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e70c806-9e, col_values=(('external_ids', {'iface-id': '8e70c806-9e71-427e-bba7-1012c1cdd700', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:a4:af', 'vm-uuid': '37541a77-deda-4940-b361-9e66c7baaf39'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.954 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:24 compute-0 NetworkManager[48899]: <info>  [1765599804.9552] manager: (tap8e70c806-9e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/120)
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.957 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.962 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:24 compute-0 nova_compute[243704]: 2025-12-13 04:23:24.963 243708 INFO os_vif [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:a4:af,bridge_name='br-int',has_traffic_filtering=True,id=8e70c806-9e71-427e-bba7-1012c1cdd700,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e70c806-9e')
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.060 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.060 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.060 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No VIF found with MAC fa:16:3e:82:a4:af, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.061 243708 INFO nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Using config drive
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.084 243708 DEBUG nova.storage.rbd_utils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 37541a77-deda-4940-b361-9e66c7baaf39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.707 243708 INFO nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Creating config drive at /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/disk.config
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.718 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbqh86ms2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1583: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 47 KiB/s rd, 14 MiB/s wr, 70 op/s
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.854 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbqh86ms2" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.896 243708 DEBUG nova.storage.rbd_utils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 37541a77-deda-4940-b361-9e66c7baaf39_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:23:25 compute-0 nova_compute[243704]: 2025-12-13 04:23:25.900 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/disk.config 37541a77-deda-4940-b361-9e66c7baaf39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e401 do_prune osdmap full prune enabled
Dec 13 04:23:26 compute-0 ceph-mon[75071]: pgmap v1582: 305 pgs: 305 active+clean; 2.4 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 22 KiB/s rd, 6.9 KiB/s wr, 28 op/s
Dec 13 04:23:26 compute-0 sudo[273117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:23:26 compute-0 sudo[273117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:26 compute-0 sudo[273117]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:26 compute-0 sudo[273142]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:23:26 compute-0 sudo[273142]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e402 e402: 3 total, 3 up, 3 in
Dec 13 04:23:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e402: 3 total, 3 up, 3 in
Dec 13 04:23:27 compute-0 sudo[273142]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:23:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:23:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:23:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:23:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:23:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:23:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:23:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:23:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:23:27 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:23:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:23:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:23:27 compute-0 sudo[273200]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:23:27 compute-0 sudo[273200]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:27 compute-0 sudo[273200]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1585: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 35 KiB/s rd, 18 MiB/s wr, 57 op/s
Dec 13 04:23:27 compute-0 ceph-mon[75071]: pgmap v1583: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 47 KiB/s rd, 14 MiB/s wr, 70 op/s
Dec 13 04:23:27 compute-0 ceph-mon[75071]: osdmap e402: 3 total, 3 up, 3 in
Dec 13 04:23:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:23:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:23:27 compute-0 sudo[273225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:23:27 compute-0 sudo[273225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:27 compute-0 nova_compute[243704]: 2025-12-13 04:23:27.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:23:27 compute-0 nova_compute[243704]: 2025-12-13 04:23:27.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:23:28 compute-0 nova_compute[243704]: 2025-12-13 04:23:28.152 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:28 compute-0 podman[273260]: 2025-12-13 04:23:28.089483689 +0000 UTC m=+0.035629130 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:28 compute-0 podman[273260]: 2025-12-13 04:23:28.537430896 +0000 UTC m=+0.483576277 container create 8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dubinsky, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:28 compute-0 systemd[1]: Started libpod-conmon-8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe.scope.
Dec 13 04:23:28 compute-0 nova_compute[243704]: 2025-12-13 04:23:28.744 243708 DEBUG oslo_concurrency.processutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/disk.config 37541a77-deda-4940-b361-9e66c7baaf39_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.844s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:28 compute-0 nova_compute[243704]: 2025-12-13 04:23:28.746 243708 INFO nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Deleting local config drive /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39/disk.config because it was imported into RBD.
Dec 13 04:23:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:23:28 compute-0 podman[273260]: 2025-12-13 04:23:28.816634876 +0000 UTC m=+0.762780297 container init 8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dubinsky, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:23:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:23:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:23:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:23:28 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:23:28 compute-0 ceph-mon[75071]: pgmap v1585: 305 pgs: 305 active+clean; 2.5 GiB data, 2.7 GiB used, 57 GiB / 60 GiB avail; 35 KiB/s rd, 18 MiB/s wr, 57 op/s
Dec 13 04:23:28 compute-0 podman[273260]: 2025-12-13 04:23:28.831315665 +0000 UTC m=+0.777461046 container start 8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:23:28 compute-0 NetworkManager[48899]: <info>  [1765599808.8379] manager: (tap8e70c806-9e): new Tun device (/org/freedesktop/NetworkManager/Devices/121)
Dec 13 04:23:28 compute-0 kernel: tap8e70c806-9e: entered promiscuous mode
Dec 13 04:23:28 compute-0 dreamy_dubinsky[273276]: 167 167
Dec 13 04:23:28 compute-0 systemd[1]: libpod-8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe.scope: Deactivated successfully.
Dec 13 04:23:28 compute-0 nova_compute[243704]: 2025-12-13 04:23:28.844 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:28 compute-0 ovn_controller[145204]: 2025-12-13T04:23:28Z|00217|binding|INFO|Claiming lport 8e70c806-9e71-427e-bba7-1012c1cdd700 for this chassis.
Dec 13 04:23:28 compute-0 ovn_controller[145204]: 2025-12-13T04:23:28Z|00218|binding|INFO|8e70c806-9e71-427e-bba7-1012c1cdd700: Claiming fa:16:3e:82:a4:af 10.100.0.3
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.853 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:a4:af 10.100.0.3'], port_security=['fa:16:3e:82:a4:af 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '37541a77-deda-4940-b361-9e66c7baaf39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ad8ea73576b4cf9aad3a876effca617', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b35a742a-b386-4310-84f0-5826a0beab45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3450aaa3-6969-42ec-bd5e-da6d6d1d73eb, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=8e70c806-9e71-427e-bba7-1012c1cdd700) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.854 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 8e70c806-9e71-427e-bba7-1012c1cdd700 in datapath 87c0a2c3-5f67-431b-9b32-a688ddc2bc06 bound to our chassis
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.856 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:23:28 compute-0 nova_compute[243704]: 2025-12-13 04:23:28.865 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:28 compute-0 ovn_controller[145204]: 2025-12-13T04:23:28Z|00219|binding|INFO|Setting lport 8e70c806-9e71-427e-bba7-1012c1cdd700 ovn-installed in OVS
Dec 13 04:23:28 compute-0 ovn_controller[145204]: 2025-12-13T04:23:28Z|00220|binding|INFO|Setting lport 8e70c806-9e71-427e-bba7-1012c1cdd700 up in Southbound
Dec 13 04:23:28 compute-0 nova_compute[243704]: 2025-12-13 04:23:28.867 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.877 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbbdc42-e07c-47ef-84ce-9cb7909a9b2d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.877 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87c0a2c3-51 in ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:23:28 compute-0 systemd-machined[206767]: New machine qemu-24-instance-00000018.
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.881 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87c0a2c3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.881 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7ca4e85a-db5f-4fcc-88ad-3061599fc4ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.883 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[93c2577a-c439-40d5-8a15-ea9a5a5ef1ee]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:28 compute-0 podman[273260]: 2025-12-13 04:23:28.89476436 +0000 UTC m=+0.840909791 container attach 8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dubinsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 04:23:28 compute-0 podman[273260]: 2025-12-13 04:23:28.896715533 +0000 UTC m=+0.842860914 container died 8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:23:28 compute-0 systemd[1]: Started Virtual Machine qemu-24-instance-00000018.
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.897 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[463f53cd-e368-4ef5-98f1-1d784b72de17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:28 compute-0 systemd-udevd[273307]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:23:28 compute-0 NetworkManager[48899]: <info>  [1765599808.9243] device (tap8e70c806-9e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:23:28 compute-0 NetworkManager[48899]: <info>  [1765599808.9255] device (tap8e70c806-9e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.919 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ddf3c8-37b7-4e46-96af-b55f2f37eb59]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.953 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[501fd44d-832a-44b3-b390-fbb0ac401804]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:28 compute-0 NetworkManager[48899]: <info>  [1765599808.9607] manager: (tap87c0a2c3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/122)
Dec 13 04:23:28 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:28.959 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0ade1c86-2ab5-43da-b72b-181428c2e603]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a3a2c6b7dba6a718e73b930b7edf13f76429334435ef8adad3e873cc227a1cc-merged.mount: Deactivated successfully.
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.011 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[6c064d08-b93b-40dd-abe0-9d574f7ba956]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.017 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[ee9b4c92-4682-482e-86db-32a8df44fd67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 NetworkManager[48899]: <info>  [1765599809.0545] device (tap87c0a2c3-50): carrier: link connected
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.063 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[740aba90-6aca-49f7-aa7f-1728614d5244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.082 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b3abd5bd-7891-4802-afef-d39a0424f9d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c0a2c3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:9a:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454773, 'reachable_time': 37179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273339, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.097 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b16bf337-d1f4-4dc0-859b-926c29e2eda6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:9abe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 454773, 'tstamp': 454773}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273340, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.110 243708 DEBUG nova.compute.manager [req-67a98acd-6f63-4bf1-a9d1-a024204d67ee req-e03b841d-fd23-437b-8be1-00ddb31802be 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.111 243708 DEBUG oslo_concurrency.lockutils [req-67a98acd-6f63-4bf1-a9d1-a024204d67ee req-e03b841d-fd23-437b-8be1-00ddb31802be 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.111 243708 DEBUG oslo_concurrency.lockutils [req-67a98acd-6f63-4bf1-a9d1-a024204d67ee req-e03b841d-fd23-437b-8be1-00ddb31802be 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.111 243708 DEBUG oslo_concurrency.lockutils [req-67a98acd-6f63-4bf1-a9d1-a024204d67ee req-e03b841d-fd23-437b-8be1-00ddb31802be 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.112 243708 DEBUG nova.compute.manager [req-67a98acd-6f63-4bf1-a9d1-a024204d67ee req-e03b841d-fd23-437b-8be1-00ddb31802be 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Processing event network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.117 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6917642e-6c77-4686-93a0-ffbb200c37aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c0a2c3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:9a:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 75], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454773, 'reachable_time': 37179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273341, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 podman[273260]: 2025-12-13 04:23:29.118595344 +0000 UTC m=+1.064740765 container remove 8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_dubinsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:29 compute-0 systemd[1]: libpod-conmon-8aaa33c3d2146dc3aa20459f647056d0e9bc33823106686a9ecd1037a59f81fe.scope: Deactivated successfully.
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.152 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6d8004bf-7f42-4418-b078-a074e57d937a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.239 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5bb30b-d21d-4288-b158-4f0650aa1096]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.241 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c0a2c3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.241 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.241 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87c0a2c3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.243 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:29 compute-0 NetworkManager[48899]: <info>  [1765599809.2446] manager: (tap87c0a2c3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/123)
Dec 13 04:23:29 compute-0 kernel: tap87c0a2c3-50: entered promiscuous mode
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.246 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.247 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87c0a2c3-50, col_values=(('external_ids', {'iface-id': '4a1239ec-278e-40d8-aa2f-d801913596a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.250 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:29 compute-0 ovn_controller[145204]: 2025-12-13T04:23:29Z|00221|binding|INFO|Releasing lport 4a1239ec-278e-40d8-aa2f-d801913596a6 from this chassis (sb_readonly=0)
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.252 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.252 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.264 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2c86a1-392f-4add-a6b6-20da6cf78f95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:29 compute-0 nova_compute[243704]: 2025-12-13 04:23:29.266 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.267 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:23:29 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:29.268 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'env', 'PROCESS_TAG=haproxy-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:23:29 compute-0 podman[273373]: 2025-12-13 04:23:29.339372026 +0000 UTC m=+0.050549505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:29 compute-0 podman[273373]: 2025-12-13 04:23:29.445713147 +0000 UTC m=+0.156890646 container create c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:23:29 compute-0 systemd[1]: Started libpod-conmon-c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4.scope.
Dec 13 04:23:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b979f70cbb711bfb33154953ef1564a7474a4f97219afe19ddaecd49768e6e16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b979f70cbb711bfb33154953ef1564a7474a4f97219afe19ddaecd49768e6e16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b979f70cbb711bfb33154953ef1564a7474a4f97219afe19ddaecd49768e6e16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b979f70cbb711bfb33154953ef1564a7474a4f97219afe19ddaecd49768e6e16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b979f70cbb711bfb33154953ef1564a7474a4f97219afe19ddaecd49768e6e16/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:29 compute-0 podman[273373]: 2025-12-13 04:23:29.594298926 +0000 UTC m=+0.305476395 container init c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_raman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:23:29 compute-0 podman[273373]: 2025-12-13 04:23:29.610726773 +0000 UTC m=+0.321904232 container start c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_raman, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:23:29 compute-0 podman[273373]: 2025-12-13 04:23:29.684470357 +0000 UTC m=+0.395647816 container attach c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_raman, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:23:29 compute-0 podman[273433]: 2025-12-13 04:23:29.767166946 +0000 UTC m=+0.103753212 container create 8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:29 compute-0 podman[273433]: 2025-12-13 04:23:29.699542548 +0000 UTC m=+0.036128844 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:23:29 compute-0 systemd[1]: Started libpod-conmon-8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607.scope.
Dec 13 04:23:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1586: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 36 KiB/s rd, 26 MiB/s wr, 60 op/s
Dec 13 04:23:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e402 do_prune osdmap full prune enabled
Dec 13 04:23:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e403 e403: 3 total, 3 up, 3 in
Dec 13 04:23:29 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e403: 3 total, 3 up, 3 in
Dec 13 04:23:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:23:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1edfcaa4f81ce7643993d1e46c2186cbe45021535d3994f32326ad81ef33cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e403 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:30 compute-0 nova_compute[243704]: 2025-12-13 04:23:30.149 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:30 compute-0 podman[273433]: 2025-12-13 04:23:30.181722735 +0000 UTC m=+0.518309011 container init 8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:23:30 compute-0 podman[273433]: 2025-12-13 04:23:30.187852972 +0000 UTC m=+0.524439228 container start 8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:30 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[273446]: [NOTICE]   (273457) : New worker (273461) forked
Dec 13 04:23:30 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[273446]: [NOTICE]   (273457) : Loading success.
Dec 13 04:23:30 compute-0 priceless_raman[273409]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:23:30 compute-0 priceless_raman[273409]: --> All data devices are unavailable
Dec 13 04:23:30 compute-0 systemd[1]: libpod-c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4.scope: Deactivated successfully.
Dec 13 04:23:30 compute-0 podman[273373]: 2025-12-13 04:23:30.413903757 +0000 UTC m=+1.125081216 container died c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_raman, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:23:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b979f70cbb711bfb33154953ef1564a7474a4f97219afe19ddaecd49768e6e16-merged.mount: Deactivated successfully.
Dec 13 04:23:30 compute-0 podman[273373]: 2025-12-13 04:23:30.65902636 +0000 UTC m=+1.370203819 container remove c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=priceless_raman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 04:23:30 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 04:23:30 compute-0 systemd[1]: libpod-conmon-c9143a964f71130db00141abd6e0b7317179fdccf80c2700db3ece44b3027bd4.scope: Deactivated successfully.
Dec 13 04:23:30 compute-0 sudo[273225]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:30 compute-0 sudo[273492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:23:30 compute-0 sudo[273492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:30 compute-0 sudo[273492]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:30 compute-0 ceph-mon[75071]: pgmap v1586: 305 pgs: 305 active+clean; 2.6 GiB data, 2.8 GiB used, 57 GiB / 60 GiB avail; 36 KiB/s rd, 26 MiB/s wr, 60 op/s
Dec 13 04:23:30 compute-0 ceph-mon[75071]: osdmap e403: 3 total, 3 up, 3 in
Dec 13 04:23:31 compute-0 sudo[273517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:23:31 compute-0 sudo[273517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:31 compute-0 nova_compute[243704]: 2025-12-13 04:23:31.192 243708 DEBUG nova.compute.manager [req-9155db42-2e1e-4492-9ffa-c42760bfa56e req-970dc623-6b11-4ce1-b593-2f53c71bad29 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:31 compute-0 nova_compute[243704]: 2025-12-13 04:23:31.192 243708 DEBUG oslo_concurrency.lockutils [req-9155db42-2e1e-4492-9ffa-c42760bfa56e req-970dc623-6b11-4ce1-b593-2f53c71bad29 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:31 compute-0 nova_compute[243704]: 2025-12-13 04:23:31.193 243708 DEBUG oslo_concurrency.lockutils [req-9155db42-2e1e-4492-9ffa-c42760bfa56e req-970dc623-6b11-4ce1-b593-2f53c71bad29 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:31 compute-0 nova_compute[243704]: 2025-12-13 04:23:31.193 243708 DEBUG oslo_concurrency.lockutils [req-9155db42-2e1e-4492-9ffa-c42760bfa56e req-970dc623-6b11-4ce1-b593-2f53c71bad29 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:31 compute-0 nova_compute[243704]: 2025-12-13 04:23:31.193 243708 DEBUG nova.compute.manager [req-9155db42-2e1e-4492-9ffa-c42760bfa56e req-970dc623-6b11-4ce1-b593-2f53c71bad29 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] No waiting events found dispatching network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:23:31 compute-0 nova_compute[243704]: 2025-12-13 04:23:31.193 243708 WARNING nova.compute.manager [req-9155db42-2e1e-4492-9ffa-c42760bfa56e req-970dc623-6b11-4ce1-b593-2f53c71bad29 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received unexpected event network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 for instance with vm_state building and task_state spawning.
Dec 13 04:23:31 compute-0 podman[273553]: 2025-12-13 04:23:31.441214803 +0000 UTC m=+0.114056401 container create 7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dijkstra, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:23:31 compute-0 podman[273553]: 2025-12-13 04:23:31.359008159 +0000 UTC m=+0.031849747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:31 compute-0 systemd[1]: Started libpod-conmon-7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0.scope.
Dec 13 04:23:31 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:23:31 compute-0 podman[273553]: 2025-12-13 04:23:31.557431443 +0000 UTC m=+0.230273031 container init 7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dijkstra, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:23:31 compute-0 podman[273553]: 2025-12-13 04:23:31.578017062 +0000 UTC m=+0.250858620 container start 7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:23:31 compute-0 podman[273553]: 2025-12-13 04:23:31.589440453 +0000 UTC m=+0.262282011 container attach 7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dijkstra, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:31 compute-0 sharp_dijkstra[273570]: 167 167
Dec 13 04:23:31 compute-0 podman[273553]: 2025-12-13 04:23:31.593422271 +0000 UTC m=+0.266263829 container died 7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:23:31 compute-0 systemd[1]: libpod-7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0.scope: Deactivated successfully.
Dec 13 04:23:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1588: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 91 KiB/s rd, 45 MiB/s wr, 149 op/s
Dec 13 04:23:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e403 do_prune osdmap full prune enabled
Dec 13 04:23:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-8029e25f51abc810c6626d96635c5abc1ffcc5787d96e5ab7cbce0287019f004-merged.mount: Deactivated successfully.
Dec 13 04:23:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e404 e404: 3 total, 3 up, 3 in
Dec 13 04:23:32 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e404: 3 total, 3 up, 3 in
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.199 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599812.1984494, 37541a77-deda-4940-b361-9e66c7baaf39 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.200 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] VM Started (Lifecycle Event)
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.203 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.207 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.210 243708 INFO nova.virt.libvirt.driver [-] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Instance spawned successfully.
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.210 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.216 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.219 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.228 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.228 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.229 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.229 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.229 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.230 243708 DEBUG nova.virt.libvirt.driver [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.233 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.233 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599812.199612, 37541a77-deda-4940-b361-9e66c7baaf39 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.234 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] VM Paused (Lifecycle Event)
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.253 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.256 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599812.206108, 37541a77-deda-4940-b361-9e66c7baaf39 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.256 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] VM Resumed (Lifecycle Event)
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.268 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.271 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.286 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.549 243708 INFO nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Took 13.72 seconds to spawn the instance on the hypervisor.
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.549 243708 DEBUG nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.643 243708 INFO nova.compute.manager [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Took 17.46 seconds to build instance.
Dec 13 04:23:32 compute-0 nova_compute[243704]: 2025-12-13 04:23:32.664 243708 DEBUG oslo_concurrency.lockutils [None req-2e20216d-879c-4308-9a58-a86f29dd6183 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:32 compute-0 podman[273553]: 2025-12-13 04:23:32.704631188 +0000 UTC m=+1.377472756 container remove 7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_dijkstra, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:23:32 compute-0 systemd[1]: libpod-conmon-7486b6d6be0c00401166bd4375e5dafec0876d5eeeb5c465aa0ac02889b2cfd0.scope: Deactivated successfully.
Dec 13 04:23:32 compute-0 podman[273601]: 2025-12-13 04:23:32.878256468 +0000 UTC m=+0.031659921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:33 compute-0 nova_compute[243704]: 2025-12-13 04:23:33.154 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:33 compute-0 podman[273601]: 2025-12-13 04:23:33.776368113 +0000 UTC m=+0.929771576 container create 520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mirzakhani, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:33 compute-0 ceph-mon[75071]: pgmap v1588: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 91 KiB/s rd, 45 MiB/s wr, 149 op/s
Dec 13 04:23:33 compute-0 ceph-mon[75071]: osdmap e404: 3 total, 3 up, 3 in
Dec 13 04:23:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1590: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 65 KiB/s rd, 31 MiB/s wr, 106 op/s
Dec 13 04:23:34 compute-0 systemd[1]: Started libpod-conmon-520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65.scope.
Dec 13 04:23:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb312cdfe1e79ae179f66e4e7597d726adb35fddedf76a210cfe23157e8a46fc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb312cdfe1e79ae179f66e4e7597d726adb35fddedf76a210cfe23157e8a46fc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb312cdfe1e79ae179f66e4e7597d726adb35fddedf76a210cfe23157e8a46fc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb312cdfe1e79ae179f66e4e7597d726adb35fddedf76a210cfe23157e8a46fc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:34 compute-0 podman[273601]: 2025-12-13 04:23:34.746992528 +0000 UTC m=+1.900395981 container init 520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:23:34 compute-0 podman[273601]: 2025-12-13 04:23:34.756457896 +0000 UTC m=+1.909861329 container start 520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:23:34 compute-0 podman[273601]: 2025-12-13 04:23:34.760851485 +0000 UTC m=+1.914254948 container attach 520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:23:34 compute-0 ceph-mon[75071]: pgmap v1590: 305 pgs: 305 active+clean; 2.7 GiB data, 2.9 GiB used, 57 GiB / 60 GiB avail; 65 KiB/s rd, 31 MiB/s wr, 106 op/s
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]: {
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:     "0": [
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:         {
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "devices": [
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "/dev/loop3"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             ],
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_name": "ceph_lv0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_size": "21470642176",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "name": "ceph_lv0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "tags": {
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cluster_name": "ceph",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.crush_device_class": "",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.encrypted": "0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.objectstore": "bluestore",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osd_id": "0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.type": "block",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.vdo": "0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.with_tpm": "0"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             },
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "type": "block",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "vg_name": "ceph_vg0"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:         }
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:     ],
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:     "1": [
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:         {
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "devices": [
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "/dev/loop4"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             ],
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_name": "ceph_lv1",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_size": "21470642176",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "name": "ceph_lv1",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "tags": {
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cluster_name": "ceph",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.crush_device_class": "",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.encrypted": "0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.objectstore": "bluestore",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osd_id": "1",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.type": "block",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.vdo": "0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.with_tpm": "0"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             },
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "type": "block",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "vg_name": "ceph_vg1"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:         }
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:     ],
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:     "2": [
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:         {
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "devices": [
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "/dev/loop5"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             ],
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_name": "ceph_lv2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_size": "21470642176",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "name": "ceph_lv2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "tags": {
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.cluster_name": "ceph",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.crush_device_class": "",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.encrypted": "0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.objectstore": "bluestore",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osd_id": "2",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.type": "block",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.vdo": "0",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:                 "ceph.with_tpm": "0"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             },
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "type": "block",
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:             "vg_name": "ceph_vg2"
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:         }
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]:     ]
Dec 13 04:23:35 compute-0 pensive_mirzakhani[273619]: }
Dec 13 04:23:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:35.099 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e404 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:35.101 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:35.102 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:35 compute-0 systemd[1]: libpod-520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65.scope: Deactivated successfully.
Dec 13 04:23:35 compute-0 podman[273601]: 2025-12-13 04:23:35.119617958 +0000 UTC m=+2.273021421 container died 520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mirzakhani, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:23:35 compute-0 nova_compute[243704]: 2025-12-13 04:23:35.194 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb312cdfe1e79ae179f66e4e7597d726adb35fddedf76a210cfe23157e8a46fc-merged.mount: Deactivated successfully.
Dec 13 04:23:35 compute-0 podman[273601]: 2025-12-13 04:23:35.751630929 +0000 UTC m=+2.905034372 container remove 520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_mirzakhani, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:35 compute-0 systemd[1]: libpod-conmon-520405660b25b6e970e4ac8896a6569b19d2e4981de6510887478887617fcb65.scope: Deactivated successfully.
Dec 13 04:23:35 compute-0 sudo[273517]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1591: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 3.0 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 59 MiB/s wr, 286 op/s
Dec 13 04:23:35 compute-0 sudo[273643]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:23:35 compute-0 sudo[273643]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:35 compute-0 sudo[273643]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:35 compute-0 sudo[273668]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:23:35 compute-0 sudo[273668]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e404 do_prune osdmap full prune enabled
Dec 13 04:23:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e405 e405: 3 total, 3 up, 3 in
Dec 13 04:23:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e405: 3 total, 3 up, 3 in
Dec 13 04:23:36 compute-0 podman[273704]: 2025-12-13 04:23:36.280740873 +0000 UTC m=+0.028543917 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:36 compute-0 podman[273704]: 2025-12-13 04:23:36.396357235 +0000 UTC m=+0.144160249 container create af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:23:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:36.459 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:23:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:36.462 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:23:36 compute-0 nova_compute[243704]: 2025-12-13 04:23:36.499 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:36 compute-0 systemd[1]: Started libpod-conmon-af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac.scope.
Dec 13 04:23:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:23:36 compute-0 podman[273704]: 2025-12-13 04:23:36.574633042 +0000 UTC m=+0.322436076 container init af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:23:36 compute-0 podman[273704]: 2025-12-13 04:23:36.582586448 +0000 UTC m=+0.330389462 container start af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:36 compute-0 bold_lalande[273720]: 167 167
Dec 13 04:23:36 compute-0 systemd[1]: libpod-af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac.scope: Deactivated successfully.
Dec 13 04:23:36 compute-0 conmon[273720]: conmon af6d2d0584bf28ccaed8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac.scope/container/memory.events
Dec 13 04:23:36 compute-0 podman[273704]: 2025-12-13 04:23:36.627156729 +0000 UTC m=+0.374959743 container attach af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:23:36 compute-0 podman[273704]: 2025-12-13 04:23:36.627486909 +0000 UTC m=+0.375289923 container died af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:23:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-19bbbc21320692d29ad40a32da3145eb8778b994154a0b3118efd0c45fb98a59-merged.mount: Deactivated successfully.
Dec 13 04:23:36 compute-0 podman[273704]: 2025-12-13 04:23:36.782602905 +0000 UTC m=+0.530405929 container remove af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_lalande, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:23:36 compute-0 systemd[1]: libpod-conmon-af6d2d0584bf28ccaed86a4c87408dcf8dc4c7423a5d7da296e51dc21b3af6ac.scope: Deactivated successfully.
Dec 13 04:23:37 compute-0 podman[273745]: 2025-12-13 04:23:37.013597814 +0000 UTC m=+0.052035905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:23:37 compute-0 podman[273745]: 2025-12-13 04:23:37.053312095 +0000 UTC m=+0.091750206 container create e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:23:37 compute-0 systemd[1]: Started libpod-conmon-e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c.scope.
Dec 13 04:23:37 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95e4893084c39a59e7fc9ae1b252f2d6c18b328321dc4938f014c3b081a7f89/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95e4893084c39a59e7fc9ae1b252f2d6c18b328321dc4938f014c3b081a7f89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95e4893084c39a59e7fc9ae1b252f2d6c18b328321dc4938f014c3b081a7f89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:37 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a95e4893084c39a59e7fc9ae1b252f2d6c18b328321dc4938f014c3b081a7f89/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:23:37 compute-0 podman[273745]: 2025-12-13 04:23:37.229138844 +0000 UTC m=+0.267576945 container init e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_vaughan, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:23:37 compute-0 podman[273745]: 2025-12-13 04:23:37.237936624 +0000 UTC m=+0.276374695 container start e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:23:37 compute-0 podman[273745]: 2025-12-13 04:23:37.254882014 +0000 UTC m=+0.293320085 container attach e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_vaughan, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:23:37 compute-0 ceph-mon[75071]: pgmap v1591: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 3.0 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 59 MiB/s wr, 286 op/s
Dec 13 04:23:37 compute-0 ceph-mon[75071]: osdmap e405: 3 total, 3 up, 3 in
Dec 13 04:23:37 compute-0 nova_compute[243704]: 2025-12-13 04:23:37.500 243708 DEBUG nova.compute.manager [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-changed-8e70c806-9e71-427e-bba7-1012c1cdd700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:37 compute-0 nova_compute[243704]: 2025-12-13 04:23:37.501 243708 DEBUG nova.compute.manager [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Refreshing instance network info cache due to event network-changed-8e70c806-9e71-427e-bba7-1012c1cdd700. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:23:37 compute-0 nova_compute[243704]: 2025-12-13 04:23:37.502 243708 DEBUG oslo_concurrency.lockutils [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:23:37 compute-0 nova_compute[243704]: 2025-12-13 04:23:37.502 243708 DEBUG oslo_concurrency.lockutils [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:23:37 compute-0 nova_compute[243704]: 2025-12-13 04:23:37.503 243708 DEBUG nova.network.neutron [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Refreshing network info cache for port 8e70c806-9e71-427e-bba7-1012c1cdd700 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:23:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1593: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 3.0 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 32 MiB/s wr, 194 op/s
Dec 13 04:23:38 compute-0 lvm[273839]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:23:38 compute-0 lvm[273839]: VG ceph_vg0 finished
Dec 13 04:23:38 compute-0 lvm[273840]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:23:38 compute-0 lvm[273840]: VG ceph_vg1 finished
Dec 13 04:23:38 compute-0 lvm[273842]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:23:38 compute-0 lvm[273842]: VG ceph_vg2 finished
Dec 13 04:23:38 compute-0 infallible_vaughan[273761]: {}
Dec 13 04:23:38 compute-0 systemd[1]: libpod-e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c.scope: Deactivated successfully.
Dec 13 04:23:38 compute-0 systemd[1]: libpod-e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c.scope: Consumed 1.379s CPU time.
Dec 13 04:23:38 compute-0 podman[273745]: 2025-12-13 04:23:38.141127316 +0000 UTC m=+1.179565397 container died e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:23:38 compute-0 nova_compute[243704]: 2025-12-13 04:23:38.156 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-a95e4893084c39a59e7fc9ae1b252f2d6c18b328321dc4938f014c3b081a7f89-merged.mount: Deactivated successfully.
Dec 13 04:23:38 compute-0 podman[273745]: 2025-12-13 04:23:38.22515202 +0000 UTC m=+1.263590101 container remove e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_vaughan, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:23:38 compute-0 systemd[1]: libpod-conmon-e1bec7a68d800d8458cd31bd387e62c548e58db4f884d6b213a17e7d6402e78c.scope: Deactivated successfully.
Dec 13 04:23:38 compute-0 sudo[273668]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:23:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e405 do_prune osdmap full prune enabled
Dec 13 04:23:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:23:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:23:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e406 e406: 3 total, 3 up, 3 in
Dec 13 04:23:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e406: 3 total, 3 up, 3 in
Dec 13 04:23:38 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:23:38 compute-0 sudo[273858]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:23:38 compute-0 nova_compute[243704]: 2025-12-13 04:23:38.619 243708 DEBUG nova.network.neutron [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Updated VIF entry in instance network info cache for port 8e70c806-9e71-427e-bba7-1012c1cdd700. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:23:38 compute-0 nova_compute[243704]: 2025-12-13 04:23:38.620 243708 DEBUG nova.network.neutron [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Updating instance_info_cache with network_info: [{"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:23:38 compute-0 sudo[273858]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:23:38 compute-0 sudo[273858]: pam_unix(sudo:session): session closed for user root
Dec 13 04:23:38 compute-0 nova_compute[243704]: 2025-12-13 04:23:38.665 243708 DEBUG oslo_concurrency.lockutils [req-015aad43-38c6-47f5-b34d-07badef2829f req-7ca551b1-7198-41b1-afce-e18bd1495abf 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-37541a77-deda-4940-b361-9e66c7baaf39" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:23:39 compute-0 ceph-mon[75071]: pgmap v1593: 305 pgs: 2 active+clean+snaptrim, 2 active+clean+snaptrim_wait, 301 active+clean; 3.0 GiB data, 3.2 GiB used, 57 GiB / 60 GiB avail; 3.0 MiB/s rd, 32 MiB/s wr, 194 op/s
Dec 13 04:23:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:23:39 compute-0 ceph-mon[75071]: osdmap e406: 3 total, 3 up, 3 in
Dec 13 04:23:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:23:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1595: 305 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 298 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 3.1 MiB/s rd, 62 MiB/s wr, 232 op/s
Dec 13 04:23:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e406 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.196 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.445 243708 DEBUG oslo_concurrency.lockutils [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.446 243708 DEBUG oslo_concurrency.lockutils [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.457 243708 INFO nova.compute.manager [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Detaching volume 79eb622a-1c5f-491e-a36b-9f26c7b645f6
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.571 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.591 243708 INFO nova.virt.block_device [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Attempting to driver detach volume 79eb622a-1c5f-491e-a36b-9f26c7b645f6 from mountpoint /dev/vdb
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.606 243708 DEBUG nova.virt.libvirt.driver [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Attempting to detach device vdb from instance 6f73a99d-0666-471f-b9c9-482c5570537a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:23:40 compute-0 nova_compute[243704]: 2025-12-13 04:23:40.607 243708 DEBUG nova.virt.libvirt.guest [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:23:40 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:23:40 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-79eb622a-1c5f-491e-a36b-9f26c7b645f6">
Dec 13 04:23:40 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:23:40 compute-0 nova_compute[243704]:   </source>
Dec 13 04:23:40 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:23:40 compute-0 nova_compute[243704]:   <serial>79eb622a-1c5f-491e-a36b-9f26c7b645f6</serial>
Dec 13 04:23:40 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:23:40 compute-0 nova_compute[243704]: </disk>
Dec 13 04:23:40 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:23:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:23:40
Dec 13 04:23:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:23:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:23:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'images', 'backups', 'vms', 'cephfs.cephfs.meta', '.mgr']
Dec 13 04:23:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:23:40 compute-0 podman[273883]: 2025-12-13 04:23:40.944957536 +0000 UTC m=+0.089721730 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:23:41 compute-0 nova_compute[243704]: 2025-12-13 04:23:41.021 243708 INFO nova.virt.libvirt.driver [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully detached device vdb from instance 6f73a99d-0666-471f-b9c9-482c5570537a from the persistent domain config.
Dec 13 04:23:41 compute-0 nova_compute[243704]: 2025-12-13 04:23:41.021 243708 DEBUG nova.virt.libvirt.driver [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 6f73a99d-0666-471f-b9c9-482c5570537a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:23:41 compute-0 nova_compute[243704]: 2025-12-13 04:23:41.022 243708 DEBUG nova.virt.libvirt.guest [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:23:41 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:23:41 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-79eb622a-1c5f-491e-a36b-9f26c7b645f6">
Dec 13 04:23:41 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:23:41 compute-0 nova_compute[243704]:   </source>
Dec 13 04:23:41 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:23:41 compute-0 nova_compute[243704]:   <serial>79eb622a-1c5f-491e-a36b-9f26c7b645f6</serial>
Dec 13 04:23:41 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:23:41 compute-0 nova_compute[243704]: </disk>
Dec 13 04:23:41 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:23:41 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:41.465 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:41 compute-0 nova_compute[243704]: 2025-12-13 04:23:41.503 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765599821.5029173, 6f73a99d-0666-471f-b9c9-482c5570537a => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:23:41 compute-0 nova_compute[243704]: 2025-12-13 04:23:41.505 243708 DEBUG nova.virt.libvirt.driver [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 6f73a99d-0666-471f-b9c9-482c5570537a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:23:41 compute-0 nova_compute[243704]: 2025-12-13 04:23:41.508 243708 INFO nova.virt.libvirt.driver [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully detached device vdb from instance 6f73a99d-0666-471f-b9c9-482c5570537a from the live domain config.
Dec 13 04:23:41 compute-0 ceph-mon[75071]: pgmap v1595: 305 pgs: 4 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 298 active+clean; 3.2 GiB data, 3.4 GiB used, 57 GiB / 60 GiB avail; 3.1 MiB/s rd, 62 MiB/s wr, 232 op/s
Dec 13 04:23:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1596: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 3.3 GiB data, 3.5 GiB used, 56 GiB / 60 GiB avail; 3.1 MiB/s rd, 82 MiB/s wr, 352 op/s
Dec 13 04:23:41 compute-0 nova_compute[243704]: 2025-12-13 04:23:41.930 243708 DEBUG nova.objects.instance [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'flavor' on Instance uuid 6f73a99d-0666-471f-b9c9-482c5570537a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.095 243708 DEBUG oslo_concurrency.lockutils [None req-fd96f9cc-f12d-4d44-80d0-d57c6515a53e 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 1.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.096 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 1.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.097 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.097 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.097 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.099 243708 INFO nova.compute.manager [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Terminating instance
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.100 243708 DEBUG nova.compute.manager [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:23:42 compute-0 kernel: tap0d5a15fc-d4 (unregistering): left promiscuous mode
Dec 13 04:23:42 compute-0 NetworkManager[48899]: <info>  [1765599822.1549] device (tap0d5a15fc-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:23:42 compute-0 ovn_controller[145204]: 2025-12-13T04:23:42Z|00222|binding|INFO|Releasing lport 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 from this chassis (sb_readonly=0)
Dec 13 04:23:42 compute-0 ovn_controller[145204]: 2025-12-13T04:23:42Z|00223|binding|INFO|Setting lport 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 down in Southbound
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.170 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:42 compute-0 ovn_controller[145204]: 2025-12-13T04:23:42Z|00224|binding|INFO|Removing iface tap0d5a15fc-d4 ovn-installed in OVS
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.175 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.188 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:a9:96 10.100.0.7'], port_security=['fa:16:3e:28:a9:96 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '6f73a99d-0666-471f-b9c9-482c5570537a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '75b261e8b1c44ab8b079f57244a812c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '73593bbb-4e2f-451d-b5a6-72524bf63628', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.238'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b5d2e886-04ee-44a8-8e42-fd2f33ff96d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.191 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 in datapath 0f93b436-b78f-4a08-8363-5ff70f1f85b9 unbound from our chassis
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.195 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0f93b436-b78f-4a08-8363-5ff70f1f85b9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.197 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.199 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[04247497-1d1e-4343-82b2-d67ca2f6d18f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.200 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 namespace which is not needed anymore
Dec 13 04:23:42 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Deactivated successfully.
Dec 13 04:23:42 compute-0 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000017.scope: Consumed 15.835s CPU time.
Dec 13 04:23:42 compute-0 systemd-machined[206767]: Machine qemu-23-instance-00000017 terminated.
Dec 13 04:23:42 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[272820]: [NOTICE]   (272825) : haproxy version is 2.8.14-c23fe91
Dec 13 04:23:42 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[272820]: [NOTICE]   (272825) : path to executable is /usr/sbin/haproxy
Dec 13 04:23:42 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[272820]: [WARNING]  (272825) : Exiting Master process...
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.340 243708 INFO nova.virt.libvirt.driver [-] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Instance destroyed successfully.
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.341 243708 DEBUG nova.objects.instance [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lazy-loading 'resources' on Instance uuid 6f73a99d-0666-471f-b9c9-482c5570537a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:23:42 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[272820]: [ALERT]    (272825) : Current worker (272827) exited with code 143 (Terminated)
Dec 13 04:23:42 compute-0 neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9[272820]: [WARNING]  (272825) : All workers exited. Exiting... (0)
Dec 13 04:23:42 compute-0 systemd[1]: libpod-dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8.scope: Deactivated successfully.
Dec 13 04:23:42 compute-0 conmon[272820]: conmon dd59b9e1504774cdb5f6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8.scope/container/memory.events
Dec 13 04:23:42 compute-0 podman[273935]: 2025-12-13 04:23:42.351742118 +0000 UTC m=+0.054488061 container died dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.352 243708 DEBUG nova.virt.libvirt.vif [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:22:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-VolumesSnapshotTestJSON-instance-1210779781',display_name='tempest-VolumesSnapshotTestJSON-instance-1210779781',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-volumessnapshottestjson-instance-1210779781',id=23,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGdWShGngG1T60fxtDCyZF/4g+yzZ0OcQNVbjGT5BNenMaU0rt/YT14rk4/InRXk6hLTctywg1ltUyOI+mrNuVvkteM3YlWW0l7NqxX4eJUcgQ1jtCfb6tS+4wG7D8mWrg==',key_name='tempest-keypair-110734835',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:22:51Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='75b261e8b1c44ab8b079f57244a812c7',ramdisk_id='',reservation_id='r-qykwdpy6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-VolumesSnapshotTestJSON-524347860',owner_user_name='tempest-VolumesSnapshotTestJSON-524347860-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:22:51Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='95b4d334bdca4149b6fe3499375d46e6',uuid=6f73a99d-0666-471f-b9c9-482c5570537a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.352 243708 DEBUG nova.network.os_vif_util [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converting VIF {"id": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "address": "fa:16:3e:28:a9:96", "network": {"id": "0f93b436-b78f-4a08-8363-5ff70f1f85b9", "bridge": "br-int", "label": "tempest-VolumesSnapshotTestJSON-1828414725-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.238", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "75b261e8b1c44ab8b079f57244a812c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0d5a15fc-d4", "ovs_interfaceid": "0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.353 243708 DEBUG nova.network.os_vif_util [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:a9:96,bridge_name='br-int',has_traffic_filtering=True,id=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d5a15fc-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.354 243708 DEBUG os_vif [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:a9:96,bridge_name='br-int',has_traffic_filtering=True,id=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d5a15fc-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.356 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.356 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0d5a15fc-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.363 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.367 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.369 243708 INFO os_vif [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:a9:96,bridge_name='br-int',has_traffic_filtering=True,id=0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5,network=Network(0f93b436-b78f-4a08-8363-5ff70f1f85b9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0d5a15fc-d4')
Dec 13 04:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8-userdata-shm.mount: Deactivated successfully.
Dec 13 04:23:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-1bb438e18c54d3420a4b6c8b1d1d062819862dd0b0bbd139dc66efd39e278f01-merged.mount: Deactivated successfully.
Dec 13 04:23:42 compute-0 podman[273935]: 2025-12-13 04:23:42.394411768 +0000 UTC m=+0.097157711 container cleanup dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.402 243708 DEBUG nova.compute.manager [req-43cef6ad-c0a7-4af8-8aac-c361e01572d5 req-cee8cde2-52e0-4139-9514-6adff5bb5920 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-vif-unplugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.402 243708 DEBUG oslo_concurrency.lockutils [req-43cef6ad-c0a7-4af8-8aac-c361e01572d5 req-cee8cde2-52e0-4139-9514-6adff5bb5920 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.403 243708 DEBUG oslo_concurrency.lockutils [req-43cef6ad-c0a7-4af8-8aac-c361e01572d5 req-cee8cde2-52e0-4139-9514-6adff5bb5920 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.403 243708 DEBUG oslo_concurrency.lockutils [req-43cef6ad-c0a7-4af8-8aac-c361e01572d5 req-cee8cde2-52e0-4139-9514-6adff5bb5920 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.403 243708 DEBUG nova.compute.manager [req-43cef6ad-c0a7-4af8-8aac-c361e01572d5 req-cee8cde2-52e0-4139-9514-6adff5bb5920 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] No waiting events found dispatching network-vif-unplugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.403 243708 DEBUG nova.compute.manager [req-43cef6ad-c0a7-4af8-8aac-c361e01572d5 req-cee8cde2-52e0-4139-9514-6adff5bb5920 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-vif-unplugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:23:42 compute-0 systemd[1]: libpod-conmon-dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8.scope: Deactivated successfully.
Dec 13 04:23:42 compute-0 podman[273989]: 2025-12-13 04:23:42.48533302 +0000 UTC m=+0.065682377 container remove dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.492 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[30db7006-ec59-4028-8831-c4c9296406e3]: (4, ('Sat Dec 13 04:23:42 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 (dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8)\ndd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8\nSat Dec 13 04:23:42 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 (dd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8)\ndd59b9e1504774cdb5f6c31597957be0cf99c73c8584716e0d236713b74a5dd8\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.494 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1f42d92f-d7fb-44cc-b3d9-bd0d5ce9bf59]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.496 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f93b436-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:42 compute-0 kernel: tap0f93b436-b0: left promiscuous mode
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.500 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.506 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8f4a63-0d0f-4d16-a692-e881b0b3b1fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.517 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.521 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[73507ec3-6a0c-4e05-b848-2b65d1aaa7ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.525 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9fed38-161d-4cc8-9ae2-c41b06c6ab46]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.547 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[33974651-a52e-4539-b022-c43063924578]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 450485, 'reachable_time': 22194, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274007, 'error': None, 'target': 'ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 systemd[1]: run-netns-ovnmeta\x2d0f93b436\x2db78f\x2d4a08\x2d8363\x2d5ff70f1f85b9.mount: Deactivated successfully.
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.556 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0f93b436-b78f-4a08-8363-5ff70f1f85b9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:23:42 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:42.558 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[56295167-5f6d-476c-843b-b3873fed41a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:23:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.972 243708 INFO nova.virt.libvirt.driver [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Deleting instance files /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a_del
Dec 13 04:23:42 compute-0 nova_compute[243704]: 2025-12-13 04:23:42.973 243708 INFO nova.virt.libvirt.driver [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Deletion of /var/lib/nova/instances/6f73a99d-0666-471f-b9c9-482c5570537a_del complete
Dec 13 04:23:43 compute-0 nova_compute[243704]: 2025-12-13 04:23:43.025 243708 INFO nova.compute.manager [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Took 0.93 seconds to destroy the instance on the hypervisor.
Dec 13 04:23:43 compute-0 nova_compute[243704]: 2025-12-13 04:23:43.026 243708 DEBUG oslo.service.loopingcall [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:23:43 compute-0 nova_compute[243704]: 2025-12-13 04:23:43.027 243708 DEBUG nova.compute.manager [-] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:23:43 compute-0 nova_compute[243704]: 2025-12-13 04:23:43.027 243708 DEBUG nova.network.neutron [-] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:23:43 compute-0 nova_compute[243704]: 2025-12-13 04:23:43.158 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e406 do_prune osdmap full prune enabled
Dec 13 04:23:43 compute-0 ceph-mon[75071]: pgmap v1596: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 3.3 GiB data, 3.5 GiB used, 56 GiB / 60 GiB avail; 3.1 MiB/s rd, 82 MiB/s wr, 352 op/s
Dec 13 04:23:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e407 e407: 3 total, 3 up, 3 in
Dec 13 04:23:43 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e407: 3 total, 3 up, 3 in
Dec 13 04:23:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1598: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 3.3 GiB data, 3.5 GiB used, 56 GiB / 60 GiB avail; 103 KiB/s rd, 54 MiB/s wr, 168 op/s
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.178 243708 DEBUG nova.network.neutron [-] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.193 243708 INFO nova.compute.manager [-] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Took 1.17 seconds to deallocate network for instance.
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.268 243708 DEBUG nova.compute.manager [req-64ad3d2b-aee9-4f50-91bd-3c24ee95d802 req-f74d6048-7601-43a4-8a5f-b673132026c4 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-vif-deleted-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.511 243708 WARNING nova.volume.cinder [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Attachment c69c3dde-08e2-4b4e-8bac-d06a33aa873b does not exist. Ignoring.: cinderclient.exceptions.NotFound: Volume attachment could not be found with filter: attachment_id = c69c3dde-08e2-4b4e-8bac-d06a33aa873b. (HTTP 404) (Request-ID: req-a2e85e71-adbf-43b9-823c-3f5f1f923bfe)
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.512 243708 INFO nova.compute.manager [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Took 0.32 seconds to detach 1 volumes for instance.
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.555 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.556 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.573 243708 DEBUG nova.compute.manager [req-98f1e6b0-d49e-444c-acfa-1bc9a8d2823f req-1ad7c199-6269-4b62-b8fc-7c6de09fbb18 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received event network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.573 243708 DEBUG oslo_concurrency.lockutils [req-98f1e6b0-d49e-444c-acfa-1bc9a8d2823f req-1ad7c199-6269-4b62-b8fc-7c6de09fbb18 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.574 243708 DEBUG oslo_concurrency.lockutils [req-98f1e6b0-d49e-444c-acfa-1bc9a8d2823f req-1ad7c199-6269-4b62-b8fc-7c6de09fbb18 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.574 243708 DEBUG oslo_concurrency.lockutils [req-98f1e6b0-d49e-444c-acfa-1bc9a8d2823f req-1ad7c199-6269-4b62-b8fc-7c6de09fbb18 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.575 243708 DEBUG nova.compute.manager [req-98f1e6b0-d49e-444c-acfa-1bc9a8d2823f req-1ad7c199-6269-4b62-b8fc-7c6de09fbb18 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] No waiting events found dispatching network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.575 243708 WARNING nova.compute.manager [req-98f1e6b0-d49e-444c-acfa-1bc9a8d2823f req-1ad7c199-6269-4b62-b8fc-7c6de09fbb18 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Received unexpected event network-vif-plugged-0d5a15fc-d416-4d9d-b7eb-133ea0fe62f5 for instance with vm_state deleted and task_state None.
Dec 13 04:23:44 compute-0 nova_compute[243704]: 2025-12-13 04:23:44.633 243708 DEBUG oslo_concurrency.processutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:44 compute-0 ceph-mon[75071]: osdmap e407: 3 total, 3 up, 3 in
Dec 13 04:23:44 compute-0 ceph-mon[75071]: pgmap v1598: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 3.3 GiB data, 3.5 GiB used, 56 GiB / 60 GiB avail; 103 KiB/s rd, 54 MiB/s wr, 168 op/s
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e407 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e407 do_prune osdmap full prune enabled
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e408 e408: 3 total, 3 up, 3 in
Dec 13 04:23:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e408: 3 total, 3 up, 3 in
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2318141290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:45 compute-0 nova_compute[243704]: 2025-12-13 04:23:45.243 243708 DEBUG oslo_concurrency.processutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:45 compute-0 nova_compute[243704]: 2025-12-13 04:23:45.250 243708 DEBUG nova.compute.provider_tree [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:23:45 compute-0 nova_compute[243704]: 2025-12-13 04:23:45.264 243708 DEBUG nova.scheduler.client.report [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:23:45 compute-0 nova_compute[243704]: 2025-12-13 04:23:45.285 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.728s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:45 compute-0 nova_compute[243704]: 2025-12-13 04:23:45.321 243708 INFO nova.scheduler.client.report [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Deleted allocations for instance 6f73a99d-0666-471f-b9c9-482c5570537a
Dec 13 04:23:45 compute-0 nova_compute[243704]: 2025-12-13 04:23:45.382 243708 DEBUG oslo_concurrency.lockutils [None req-65dc60b1-6597-4f3d-b40e-53bcccd495f3 95b4d334bdca4149b6fe3499375d46e6 75b261e8b1c44ab8b079f57244a812c7 - - default default] Lock "6f73a99d-0666-471f-b9c9-482c5570537a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713416804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713416804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:45 compute-0 ovn_controller[145204]: 2025-12-13T04:23:45Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:a4:af 10.100.0.3
Dec 13 04:23:45 compute-0 ovn_controller[145204]: 2025-12-13T04:23:45Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:a4:af 10.100.0.3
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2145758709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2145758709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1600: 305 pgs: 4 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 294 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 470 KiB/s rd, 58 MiB/s wr, 351 op/s
Dec 13 04:23:46 compute-0 ceph-mon[75071]: osdmap e408: 3 total, 3 up, 3 in
Dec 13 04:23:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2318141290' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3713416804' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3713416804' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2145758709' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2145758709' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:47 compute-0 nova_compute[243704]: 2025-12-13 04:23:47.365 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e408 do_prune osdmap full prune enabled
Dec 13 04:23:47 compute-0 ceph-mon[75071]: pgmap v1600: 305 pgs: 4 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 294 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 470 KiB/s rd, 58 MiB/s wr, 351 op/s
Dec 13 04:23:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e409 e409: 3 total, 3 up, 3 in
Dec 13 04:23:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e409: 3 total, 3 up, 3 in
Dec 13 04:23:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1602: 305 pgs: 4 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 294 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 452 KiB/s rd, 4.5 MiB/s wr, 223 op/s
Dec 13 04:23:48 compute-0 nova_compute[243704]: 2025-12-13 04:23:48.162 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e409 do_prune osdmap full prune enabled
Dec 13 04:23:48 compute-0 ceph-mon[75071]: osdmap e409: 3 total, 3 up, 3 in
Dec 13 04:23:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e410 e410: 3 total, 3 up, 3 in
Dec 13 04:23:48 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e410: 3 total, 3 up, 3 in
Dec 13 04:23:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/707068641' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/707068641' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:49 compute-0 ceph-mon[75071]: pgmap v1602: 305 pgs: 4 active+clean+snaptrim, 7 active+clean+snaptrim_wait, 294 active+clean; 2.3 GiB data, 2.5 GiB used, 57 GiB / 60 GiB avail; 452 KiB/s rd, 4.5 MiB/s wr, 223 op/s
Dec 13 04:23:49 compute-0 ceph-mon[75071]: osdmap e410: 3 total, 3 up, 3 in
Dec 13 04:23:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/707068641' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/707068641' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e410 do_prune osdmap full prune enabled
Dec 13 04:23:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e411 e411: 3 total, 3 up, 3 in
Dec 13 04:23:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e411: 3 total, 3 up, 3 in
Dec 13 04:23:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1605: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.0 MiB/s rd, 10 MiB/s wr, 260 op/s
Dec 13 04:23:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e411 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e411 do_prune osdmap full prune enabled
Dec 13 04:23:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e412 e412: 3 total, 3 up, 3 in
Dec 13 04:23:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e412: 3 total, 3 up, 3 in
Dec 13 04:23:50 compute-0 ceph-mon[75071]: osdmap e411: 3 total, 3 up, 3 in
Dec 13 04:23:50 compute-0 ceph-mon[75071]: pgmap v1605: 305 pgs: 2 active+clean+snaptrim, 3 active+clean+snaptrim_wait, 300 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.0 MiB/s rd, 10 MiB/s wr, 260 op/s
Dec 13 04:23:50 compute-0 ceph-mon[75071]: osdmap e412: 3 total, 3 up, 3 in
Dec 13 04:23:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e412 do_prune osdmap full prune enabled
Dec 13 04:23:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e413 e413: 3 total, 3 up, 3 in
Dec 13 04:23:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e413: 3 total, 3 up, 3 in
Dec 13 04:23:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/262974145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/262974145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1608: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 14 MiB/s wr, 264 op/s
Dec 13 04:23:51 compute-0 podman[274031]: 2025-12-13 04:23:51.952639121 +0000 UTC m=+0.084109307 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:23:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332843224' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2332843224' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:52 compute-0 ceph-mon[75071]: osdmap e413: 3 total, 3 up, 3 in
Dec 13 04:23:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/262974145' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/262974145' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2332843224' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2332843224' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:52 compute-0 nova_compute[243704]: 2025-12-13 04:23:52.368 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 8.460576004710927e-06 of space, bias 1.0, pg target 0.002538172801413278 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.036248836618557476 of space, bias 1.0, pg target 10.874650985567243 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.5503817205124163e-06 of space, bias 1.0, pg target 0.0007396106989486007 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006665233647581919 of space, bias 1.0, pg target 0.19329177577987564 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.0158211922376985e-06 of space, bias 4.0, pg target 0.0011783525829957302 quantized to 16 (current 16)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011064783160773588 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012171261476850949 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:23:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00014753044214364783 quantized to 32 (current 32)
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.121 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.121 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.122 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.122 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.122 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.123 243708 INFO nova.compute.manager [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Terminating instance
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.125 243708 DEBUG nova.compute.manager [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:23:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e413 do_prune osdmap full prune enabled
Dec 13 04:23:53 compute-0 ceph-mon[75071]: pgmap v1608: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 1.1 MiB/s rd, 14 MiB/s wr, 264 op/s
Dec 13 04:23:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e414 e414: 3 total, 3 up, 3 in
Dec 13 04:23:53 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e414: 3 total, 3 up, 3 in
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.164 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 kernel: tap8e70c806-9e (unregistering): left promiscuous mode
Dec 13 04:23:53 compute-0 NetworkManager[48899]: <info>  [1765599833.2299] device (tap8e70c806-9e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:23:53 compute-0 ovn_controller[145204]: 2025-12-13T04:23:53Z|00225|binding|INFO|Releasing lport 8e70c806-9e71-427e-bba7-1012c1cdd700 from this chassis (sb_readonly=0)
Dec 13 04:23:53 compute-0 ovn_controller[145204]: 2025-12-13T04:23:53Z|00226|binding|INFO|Setting lport 8e70c806-9e71-427e-bba7-1012c1cdd700 down in Southbound
Dec 13 04:23:53 compute-0 ovn_controller[145204]: 2025-12-13T04:23:53Z|00227|binding|INFO|Removing iface tap8e70c806-9e ovn-installed in OVS
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.239 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.241 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.247 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:a4:af 10.100.0.3'], port_security=['fa:16:3e:82:a4:af 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '37541a77-deda-4940-b361-9e66c7baaf39', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ad8ea73576b4cf9aad3a876effca617', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b35a742a-b386-4310-84f0-5826a0beab45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3450aaa3-6969-42ec-bd5e-da6d6d1d73eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=8e70c806-9e71-427e-bba7-1012c1cdd700) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.250 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 8e70c806-9e71-427e-bba7-1012c1cdd700 in datapath 87c0a2c3-5f67-431b-9b32-a688ddc2bc06 unbound from our chassis
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.254 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87c0a2c3-5f67-431b-9b32-a688ddc2bc06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.256 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8d210d-d489-4f8f-8ee7-695817b16e64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.257 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 namespace which is not needed anymore
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.276 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Deactivated successfully.
Dec 13 04:23:53 compute-0 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000018.scope: Consumed 16.984s CPU time.
Dec 13 04:23:53 compute-0 systemd-machined[206767]: Machine qemu-24-instance-00000018 terminated.
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.346 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.350 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.360 243708 INFO nova.virt.libvirt.driver [-] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Instance destroyed successfully.
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.360 243708 DEBUG nova.objects.instance [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'resources' on Instance uuid 37541a77-deda-4940-b361-9e66c7baaf39 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.376 243708 DEBUG nova.virt.libvirt.vif [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1495550951',display_name='tempest-TestEncryptedCinderVolumes-server-1495550951',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1495550951',id=24,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMTrppdqhZziQNVB9Yq1F80y48wl+jU8sk3cqpAQJlLHhl2ENknmHCD+TKy3c6EN4z48W8grnbaalYAFotzA564ZRGtO7sXcHNuoXeibeaRHuK7Hykbbohr7xM96Xy2QA==',key_name='tempest-TestEncryptedCinderVolumes-1174624010',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:23:32Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-mvvo3fjr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:23:32Z,user_data=None,user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=37541a77-deda-4940-b361-9e66c7baaf39,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.376 243708 DEBUG nova.network.os_vif_util [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "8e70c806-9e71-427e-bba7-1012c1cdd700", "address": "fa:16:3e:82:a4:af", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e70c806-9e", "ovs_interfaceid": "8e70c806-9e71-427e-bba7-1012c1cdd700", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.378 243708 DEBUG nova.network.os_vif_util [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:a4:af,bridge_name='br-int',has_traffic_filtering=True,id=8e70c806-9e71-427e-bba7-1012c1cdd700,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e70c806-9e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.378 243708 DEBUG os_vif [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:a4:af,bridge_name='br-int',has_traffic_filtering=True,id=8e70c806-9e71-427e-bba7-1012c1cdd700,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e70c806-9e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.380 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.381 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e70c806-9e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.382 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.383 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.386 243708 INFO os_vif [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:a4:af,bridge_name='br-int',has_traffic_filtering=True,id=8e70c806-9e71-427e-bba7-1012c1cdd700,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8e70c806-9e')
Dec 13 04:23:53 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[273446]: [NOTICE]   (273457) : haproxy version is 2.8.14-c23fe91
Dec 13 04:23:53 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[273446]: [NOTICE]   (273457) : path to executable is /usr/sbin/haproxy
Dec 13 04:23:53 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[273446]: [WARNING]  (273457) : Exiting Master process...
Dec 13 04:23:53 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[273446]: [ALERT]    (273457) : Current worker (273461) exited with code 143 (Terminated)
Dec 13 04:23:53 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[273446]: [WARNING]  (273457) : All workers exited. Exiting... (0)
Dec 13 04:23:53 compute-0 systemd[1]: libpod-8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607.scope: Deactivated successfully.
Dec 13 04:23:53 compute-0 podman[274078]: 2025-12-13 04:23:53.448372162 +0000 UTC m=+0.075742410 container died 8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.463 243708 DEBUG nova.compute.manager [req-3aa1a192-d11a-47f0-8ae3-71a4d75e0385 req-cb75cf59-ef7b-4072-b0a4-8d8de5f08701 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-vif-unplugged-8e70c806-9e71-427e-bba7-1012c1cdd700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.464 243708 DEBUG oslo_concurrency.lockutils [req-3aa1a192-d11a-47f0-8ae3-71a4d75e0385 req-cb75cf59-ef7b-4072-b0a4-8d8de5f08701 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.465 243708 DEBUG oslo_concurrency.lockutils [req-3aa1a192-d11a-47f0-8ae3-71a4d75e0385 req-cb75cf59-ef7b-4072-b0a4-8d8de5f08701 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.465 243708 DEBUG oslo_concurrency.lockutils [req-3aa1a192-d11a-47f0-8ae3-71a4d75e0385 req-cb75cf59-ef7b-4072-b0a4-8d8de5f08701 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.466 243708 DEBUG nova.compute.manager [req-3aa1a192-d11a-47f0-8ae3-71a4d75e0385 req-cb75cf59-ef7b-4072-b0a4-8d8de5f08701 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] No waiting events found dispatching network-vif-unplugged-8e70c806-9e71-427e-bba7-1012c1cdd700 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.466 243708 DEBUG nova.compute.manager [req-3aa1a192-d11a-47f0-8ae3-71a4d75e0385 req-cb75cf59-ef7b-4072-b0a4-8d8de5f08701 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-vif-unplugged-8e70c806-9e71-427e-bba7-1012c1cdd700 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607-userdata-shm.mount: Deactivated successfully.
Dec 13 04:23:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b1edfcaa4f81ce7643993d1e46c2186cbe45021535d3994f32326ad81ef33cb-merged.mount: Deactivated successfully.
Dec 13 04:23:53 compute-0 podman[274078]: 2025-12-13 04:23:53.501207539 +0000 UTC m=+0.128577777 container cleanup 8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:23:53 compute-0 systemd[1]: libpod-conmon-8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607.scope: Deactivated successfully.
Dec 13 04:23:53 compute-0 podman[274133]: 2025-12-13 04:23:53.574736657 +0000 UTC m=+0.046667909 container remove 8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.581 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a7cae2b0-b988-4cfa-a138-24c45d1902dd]: (4, ('Sat Dec 13 04:23:53 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 (8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607)\n8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607\nSat Dec 13 04:23:53 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 (8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607)\n8aab840eb029947ef33a71de62eb0d2d4f39a0b03197733c1ef5a5a141755607\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.585 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b73ef893-6586-410d-b58d-de4a12eb9cf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.586 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c0a2c3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.586 243708 INFO nova.virt.libvirt.driver [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Deleting instance files /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39_del
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.588 243708 INFO nova.virt.libvirt.driver [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Deletion of /var/lib/nova/instances/37541a77-deda-4940-b361-9e66c7baaf39_del complete
Dec 13 04:23:53 compute-0 kernel: tap87c0a2c3-50: left promiscuous mode
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.592 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.603 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.607 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0556ed23-d84c-4600-bb30-fb84d21cfe69]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.624 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2c334c57-cfb7-4163-b729-f9db00c7da8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.625 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[800c3775-6bf0-4c17-a2cf-989ec059eae6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.641 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[617a5fc7-ece1-49b2-a24e-609b66b0b6fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 454762, 'reachable_time': 33679, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274148, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 systemd[1]: run-netns-ovnmeta\x2d87c0a2c3\x2d5f67\x2d431b\x2d9b32\x2da688ddc2bc06.mount: Deactivated successfully.
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.644 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:23:53 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:23:53.644 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[4b943e27-6ea1-43bd-bafd-659ad0d7c618]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.655 243708 INFO nova.compute.manager [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Took 0.53 seconds to destroy the instance on the hypervisor.
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.655 243708 DEBUG oslo.service.loopingcall [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.656 243708 DEBUG nova.compute.manager [-] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:23:53 compute-0 nova_compute[243704]: 2025-12-13 04:23:53.656 243708 DEBUG nova.network.neutron [-] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:23:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/900754407' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/900754407' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1610: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 735 KiB/s rd, 8.8 MiB/s wr, 148 op/s
Dec 13 04:23:54 compute-0 ceph-mon[75071]: osdmap e414: 3 total, 3 up, 3 in
Dec 13 04:23:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/900754407' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/900754407' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:54 compute-0 nova_compute[243704]: 2025-12-13 04:23:54.617 243708 DEBUG nova.network.neutron [-] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:23:54 compute-0 nova_compute[243704]: 2025-12-13 04:23:54.648 243708 INFO nova.compute.manager [-] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Took 0.99 seconds to deallocate network for instance.
Dec 13 04:23:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:23:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3843241559' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:23:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3843241559' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:54 compute-0 nova_compute[243704]: 2025-12-13 04:23:54.755 243708 DEBUG nova.compute.manager [req-31f5860b-3e86-4b93-bb6f-12f372696f3e req-fb7bc16b-4a01-43ee-851f-d718f481ee6b 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-vif-deleted-8e70c806-9e71-427e-bba7-1012c1cdd700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:54 compute-0 nova_compute[243704]: 2025-12-13 04:23:54.839 243708 INFO nova.compute.manager [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Took 0.19 seconds to detach 1 volumes for instance.
Dec 13 04:23:54 compute-0 nova_compute[243704]: 2025-12-13 04:23:54.883 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:54 compute-0 nova_compute[243704]: 2025-12-13 04:23:54.884 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:54 compute-0 podman[274149]: 2025-12-13 04:23:54.916484332 +0000 UTC m=+0.060110505 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:23:54 compute-0 nova_compute[243704]: 2025-12-13 04:23:54.972 243708 DEBUG oslo_concurrency.processutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:23:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e414 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:23:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e414 do_prune osdmap full prune enabled
Dec 13 04:23:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e415 e415: 3 total, 3 up, 3 in
Dec 13 04:23:55 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e415: 3 total, 3 up, 3 in
Dec 13 04:23:55 compute-0 ceph-mon[75071]: pgmap v1610: 305 pgs: 4 active+clean+snaptrim, 10 active+clean+snaptrim_wait, 291 active+clean; 2.3 GiB data, 2.6 GiB used, 57 GiB / 60 GiB avail; 735 KiB/s rd, 8.8 MiB/s wr, 148 op/s
Dec 13 04:23:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3843241559' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:23:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3843241559' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:23:55 compute-0 ceph-mon[75071]: osdmap e415: 3 total, 3 up, 3 in
Dec 13 04:23:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:23:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2417705750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.561 243708 DEBUG nova.compute.manager [req-2151ab5d-a8d4-449b-a38d-ba9b693f34f6 req-f963805f-0d7e-432c-916a-95c69a5ee65e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received event network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.562 243708 DEBUG oslo_concurrency.lockutils [req-2151ab5d-a8d4-449b-a38d-ba9b693f34f6 req-f963805f-0d7e-432c-916a-95c69a5ee65e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "37541a77-deda-4940-b361-9e66c7baaf39-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.562 243708 DEBUG oslo_concurrency.lockutils [req-2151ab5d-a8d4-449b-a38d-ba9b693f34f6 req-f963805f-0d7e-432c-916a-95c69a5ee65e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.563 243708 DEBUG oslo_concurrency.lockutils [req-2151ab5d-a8d4-449b-a38d-ba9b693f34f6 req-f963805f-0d7e-432c-916a-95c69a5ee65e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.563 243708 DEBUG nova.compute.manager [req-2151ab5d-a8d4-449b-a38d-ba9b693f34f6 req-f963805f-0d7e-432c-916a-95c69a5ee65e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] No waiting events found dispatching network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.563 243708 WARNING nova.compute.manager [req-2151ab5d-a8d4-449b-a38d-ba9b693f34f6 req-f963805f-0d7e-432c-916a-95c69a5ee65e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Received unexpected event network-vif-plugged-8e70c806-9e71-427e-bba7-1012c1cdd700 for instance with vm_state deleted and task_state None.
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.564 243708 DEBUG oslo_concurrency.processutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.568 243708 DEBUG nova.compute.provider_tree [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.587 243708 DEBUG nova.scheduler.client.report [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.609 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.648 243708 INFO nova.scheduler.client.report [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Deleted allocations for instance 37541a77-deda-4940-b361-9e66c7baaf39
Dec 13 04:23:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1612: 305 pgs: 305 active+clean; 271 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 3.3 MiB/s wr, 378 op/s
Dec 13 04:23:55 compute-0 nova_compute[243704]: 2025-12-13 04:23:55.924 243708 DEBUG oslo_concurrency.lockutils [None req-54d9c3a3-002c-486d-a6f2-043b1ae1c362 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "37541a77-deda-4940-b361-9e66c7baaf39" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:23:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2417705750' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:23:57 compute-0 nova_compute[243704]: 2025-12-13 04:23:57.338 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599822.3365345, 6f73a99d-0666-471f-b9c9-482c5570537a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:23:57 compute-0 nova_compute[243704]: 2025-12-13 04:23:57.338 243708 INFO nova.compute.manager [-] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] VM Stopped (Lifecycle Event)
Dec 13 04:23:57 compute-0 nova_compute[243704]: 2025-12-13 04:23:57.364 243708 DEBUG nova.compute.manager [None req-9952818d-492f-47d3-a0b6-cdd4a1926949 - - - - - -] [instance: 6f73a99d-0666-471f-b9c9-482c5570537a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:23:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e415 do_prune osdmap full prune enabled
Dec 13 04:23:57 compute-0 ceph-mon[75071]: pgmap v1612: 305 pgs: 305 active+clean; 271 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 374 KiB/s rd, 3.3 MiB/s wr, 378 op/s
Dec 13 04:23:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e416 e416: 3 total, 3 up, 3 in
Dec 13 04:23:57 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e416: 3 total, 3 up, 3 in
Dec 13 04:23:57 compute-0 nova_compute[243704]: 2025-12-13 04:23:57.562 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:57 compute-0 nova_compute[243704]: 2025-12-13 04:23:57.828 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1614: 305 pgs: 305 active+clean; 271 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 39 KiB/s wr, 304 op/s
Dec 13 04:23:58 compute-0 nova_compute[243704]: 2025-12-13 04:23:58.167 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:23:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1210476011' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:58 compute-0 nova_compute[243704]: 2025-12-13 04:23:58.430 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:23:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e416 do_prune osdmap full prune enabled
Dec 13 04:23:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e417 e417: 3 total, 3 up, 3 in
Dec 13 04:23:58 compute-0 ceph-mon[75071]: osdmap e416: 3 total, 3 up, 3 in
Dec 13 04:23:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1210476011' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:23:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e417: 3 total, 3 up, 3 in
Dec 13 04:23:59 compute-0 ceph-mon[75071]: pgmap v1614: 305 pgs: 305 active+clean; 271 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 166 KiB/s rd, 39 KiB/s wr, 304 op/s
Dec 13 04:23:59 compute-0 ceph-mon[75071]: osdmap e417: 3 total, 3 up, 3 in
Dec 13 04:23:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1616: 305 pgs: 305 active+clean; 271 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 40 KiB/s wr, 335 op/s
Dec 13 04:24:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e417 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e417 do_prune osdmap full prune enabled
Dec 13 04:24:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e418 e418: 3 total, 3 up, 3 in
Dec 13 04:24:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e418: 3 total, 3 up, 3 in
Dec 13 04:24:01 compute-0 ceph-mon[75071]: pgmap v1616: 305 pgs: 305 active+clean; 271 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 189 KiB/s rd, 40 KiB/s wr, 335 op/s
Dec 13 04:24:01 compute-0 ceph-mon[75071]: osdmap e418: 3 total, 3 up, 3 in
Dec 13 04:24:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1618: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 5.5 KiB/s wr, 103 op/s
Dec 13 04:24:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523371849' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3523371849' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3523371849' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3523371849' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.442 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.443 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.479 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.544 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.545 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.550 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.551 243708 INFO nova.compute.claims [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:24:02 compute-0 nova_compute[243704]: 2025-12-13 04:24:02.653 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:03 compute-0 ceph-mon[75071]: pgmap v1618: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 5.5 KiB/s wr, 103 op/s
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.168 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:24:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/984272437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.233 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.580s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.238 243708 DEBUG nova.compute.provider_tree [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.249 243708 DEBUG nova.scheduler.client.report [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.417 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.419 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.432 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.494 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.494 243708 DEBUG nova.network.neutron [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.510 243708 INFO nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.536 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.581 243708 INFO nova.virt.block_device [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Booting with volume ce31c911-c79c-42c8-8c73-3fa2bd9f8007 at /dev/vda
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.688 243708 DEBUG os_brick.utils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.692 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.711 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.020s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.712 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2b094b-fa66-4f25-8b97-154aa3a04437]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.713 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.724 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.725 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[adb31486-baec-4c54-9bf8-b5bd65287487]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.727 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.740 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.741 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[df49463a-fda3-4511-a7b9-f3c5e2c3bda6]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.742 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[475ffb3c-e431-491a-bf46-6290032a112f]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.743 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.767 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "nvme version" returned: 0 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.770 243708 DEBUG os_brick.initiator.connectors.lightos [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.770 243708 DEBUG os_brick.initiator.connectors.lightos [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.771 243708 DEBUG os_brick.initiator.connectors.lightos [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.771 243708 DEBUG os_brick.utils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] <== get_connector_properties: return (81ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.771 243708 DEBUG nova.virt.block_device [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updating existing volume attachment record: cad8d66c-9ceb-4de9-a46b-fd49ce6061ac _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:24:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1619: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 5.2 KiB/s wr, 97 op/s
Dec 13 04:24:03 compute-0 nova_compute[243704]: 2025-12-13 04:24:03.939 243708 DEBUG nova.policy [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '439e16bdacdd484cbdfe5b2ff762e327', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3ad8ea73576b4cf9aad3a876effca617', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:24:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e418 do_prune osdmap full prune enabled
Dec 13 04:24:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/984272437' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e419 e419: 3 total, 3 up, 3 in
Dec 13 04:24:04 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e419: 3 total, 3 up, 3 in
Dec 13 04:24:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2943708497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:04 compute-0 nova_compute[243704]: 2025-12-13 04:24:04.708 243708 DEBUG nova.network.neutron [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Successfully created port: 65ae7c69-37db-4deb-9754-9061175558c0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:24:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e419 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:05 compute-0 ceph-mon[75071]: pgmap v1619: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 75 KiB/s rd, 5.2 KiB/s wr, 97 op/s
Dec 13 04:24:05 compute-0 ceph-mon[75071]: osdmap e419: 3 total, 3 up, 3 in
Dec 13 04:24:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2943708497' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1621: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 7.7 KiB/s wr, 150 op/s
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.024 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.028 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.029 243708 INFO nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Creating image(s)
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.030 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.031 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Ensure instance console log exists: /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.032 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.032 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:06 compute-0 nova_compute[243704]: 2025-12-13 04:24:06.033 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e419 do_prune osdmap full prune enabled
Dec 13 04:24:06 compute-0 ceph-mon[75071]: pgmap v1621: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 7.7 KiB/s wr, 150 op/s
Dec 13 04:24:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e420 e420: 3 total, 3 up, 3 in
Dec 13 04:24:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e420: 3 total, 3 up, 3 in
Dec 13 04:24:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/513733453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/513733453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:07 compute-0 nova_compute[243704]: 2025-12-13 04:24:07.759 243708 DEBUG nova.network.neutron [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Successfully updated port: 65ae7c69-37db-4deb-9754-9061175558c0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:24:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1623: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 6.6 KiB/s wr, 119 op/s
Dec 13 04:24:07 compute-0 nova_compute[243704]: 2025-12-13 04:24:07.883 243708 DEBUG nova.compute.manager [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-changed-65ae7c69-37db-4deb-9754-9061175558c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:07 compute-0 nova_compute[243704]: 2025-12-13 04:24:07.884 243708 DEBUG nova.compute.manager [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Refreshing instance network info cache due to event network-changed-65ae7c69-37db-4deb-9754-9061175558c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:24:07 compute-0 nova_compute[243704]: 2025-12-13 04:24:07.884 243708 DEBUG oslo_concurrency.lockutils [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:24:07 compute-0 nova_compute[243704]: 2025-12-13 04:24:07.884 243708 DEBUG oslo_concurrency.lockutils [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:24:07 compute-0 nova_compute[243704]: 2025-12-13 04:24:07.885 243708 DEBUG nova.network.neutron [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Refreshing network info cache for port 65ae7c69-37db-4deb-9754-9061175558c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:24:07 compute-0 ceph-mon[75071]: osdmap e420: 3 total, 3 up, 3 in
Dec 13 04:24:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/513733453' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/513733453' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.042 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.122 243708 DEBUG nova.network.neutron [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.172 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.358 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599833.356628, 37541a77-deda-4940-b361-9e66c7baaf39 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.359 243708 INFO nova.compute.manager [-] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] VM Stopped (Lifecycle Event)
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.374 243708 DEBUG nova.compute.manager [None req-6882e9fc-cf08-4ab7-803e-3c0254106f10 - - - - - -] [instance: 37541a77-deda-4940-b361-9e66c7baaf39] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.421 243708 DEBUG nova.network.neutron [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.434 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.441 243708 DEBUG oslo_concurrency.lockutils [req-7830c551-c595-4002-aaf2-827ec439e2d3 req-f4709d79-6152-4d8f-9688-cc4009c056e8 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.442 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquired lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.442 243708 DEBUG nova.network.neutron [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:24:08 compute-0 nova_compute[243704]: 2025-12-13 04:24:08.575 243708 DEBUG nova.network.neutron [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:24:09 compute-0 ceph-mon[75071]: pgmap v1623: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 91 KiB/s rd, 6.6 KiB/s wr, 119 op/s
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.434 243708 DEBUG nova.network.neutron [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updating instance_info_cache with network_info: [{"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.451 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Releasing lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.452 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Instance network_info: |[{"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.455 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Start _get_guest_xml network_info=[{"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ce31c911-c79c-42c8-8c73-3fa2bd9f8007', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ce31c911-c79c-42c8-8c73-3fa2bd9f8007', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '13d79491-8168-41b3-9d61-0763591f79a4', 'attached_at': '', 'detached_at': '', 'volume_id': 'ce31c911-c79c-42c8-8c73-3fa2bd9f8007', 'serial': 'ce31c911-c79c-42c8-8c73-3fa2bd9f8007'}, 'disk_bus': 'virtio', 'attachment_id': 'cad8d66c-9ceb-4de9-a46b-fd49ce6061ac', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.459 243708 WARNING nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.464 243708 DEBUG nova.virt.libvirt.host [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.465 243708 DEBUG nova.virt.libvirt.host [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.466 243708 DEBUG nova.virt.libvirt.host [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.467 243708 DEBUG nova.virt.libvirt.host [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.467 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.467 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.468 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.468 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.468 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.468 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.468 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.469 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.469 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.469 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.469 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.469 243708 DEBUG nova.virt.hardware [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:24:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2146651800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2146651800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.491 243708 DEBUG nova.storage.rbd_utils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 13d79491-8168-41b3-9d61-0763591f79a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:24:09 compute-0 nova_compute[243704]: 2025-12-13 04:24:09.494 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836057009' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1836057009' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1624: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.1 KiB/s wr, 70 op/s
Dec 13 04:24:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3615923496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.039 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.150 243708 DEBUG os_brick.encryptors [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Using volume encryption metadata '{'encryption_key_id': '30c71df5-96c1-40f5-99b6-a2b5c1159921', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-ce31c911-c79c-42c8-8c73-3fa2bd9f8007', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': 'ce31c911-c79c-42c8-8c73-3fa2bd9f8007', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '13d79491-8168-41b3-9d61-0763591f79a4', 'attached_at': '', 'detached_at': '', 'volume_id': 'ce31c911-c79c-42c8-8c73-3fa2bd9f8007', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.154 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.170 243708 DEBUG barbicanclient.v1.secrets [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.171 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.194 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.195 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.213 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.214 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.236 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.237 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.256 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.256 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.276 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.277 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.297 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.298 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.316 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.317 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2146651800' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2146651800' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1836057009' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1836057009' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3615923496' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.340 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.342 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.365 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.366 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.392 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.393 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.411 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.412 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.435 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.436 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.534 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.535 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.557 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.558 243708 INFO barbicanclient.base [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Calculated Secrets uuid ref: secrets/30c71df5-96c1-40f5-99b6-a2b5c1159921
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.583 243708 DEBUG barbicanclient.client [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.584 243708 DEBUG nova.virt.libvirt.host [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <volume>ce31c911-c79c-42c8-8c73-3fa2bd9f8007</volume>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:24:10 compute-0 nova_compute[243704]: </secret>
Dec 13 04:24:10 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.690 243708 DEBUG nova.virt.libvirt.vif [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:24:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1423405409',display_name='tempest-TestEncryptedCinderVolumes-server-1423405409',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1423405409',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMTrppdqhZziQNVB9Yq1F80y48wl+jU8sk3cqpAQJlLHhl2ENknmHCD+TKy3c6EN4z48W8grnbaalYAFotzA564ZRGtO7sXcHNuoXeibeaRHuK7Hykbbohr7xM96Xy2QA==',key_name='tempest-TestEncryptedCinderVolumes-1174624010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-drvoszv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:24:03Z,user_data=None,user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=13d79491-8168-41b3-9d61-0763591f79a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.692 243708 DEBUG nova.network.os_vif_util [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.694 243708 DEBUG nova.network.os_vif_util [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:29:8f,bridge_name='br-int',has_traffic_filtering=True,id=65ae7c69-37db-4deb-9754-9061175558c0,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65ae7c69-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.697 243708 DEBUG nova.objects.instance [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'pci_devices' on Instance uuid 13d79491-8168-41b3-9d61-0763591f79a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.713 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <uuid>13d79491-8168-41b3-9d61-0763591f79a4</uuid>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <name>instance-00000019</name>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <nova:name>tempest-TestEncryptedCinderVolumes-server-1423405409</nova:name>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:24:09</nova:creationTime>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:user uuid="439e16bdacdd484cbdfe5b2ff762e327">tempest-TestEncryptedCinderVolumes-1691115809-project-member</nova:user>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:project uuid="3ad8ea73576b4cf9aad3a876effca617">tempest-TestEncryptedCinderVolumes-1691115809</nova:project>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <nova:port uuid="65ae7c69-37db-4deb-9754-9061175558c0">
Dec 13 04:24:10 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <system>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <entry name="serial">13d79491-8168-41b3-9d61-0763591f79a4</entry>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <entry name="uuid">13d79491-8168-41b3-9d61-0763591f79a4</entry>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </system>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <os>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </os>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <features>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </features>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/13d79491-8168-41b3-9d61-0763591f79a4_disk.config">
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </source>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-ce31c911-c79c-42c8-8c73-3fa2bd9f8007">
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </source>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <serial>ce31c911-c79c-42c8-8c73-3fa2bd9f8007</serial>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <encryption format="luks">
Dec 13 04:24:10 compute-0 nova_compute[243704]:         <secret type="passphrase" uuid="d32a764d-dca4-4132-874b-4c34bb5f395c"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       </encryption>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:e5:29:8f"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <target dev="tap65ae7c69-37"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/console.log" append="off"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <video>
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </video>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:24:10 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:24:10 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:24:10 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:24:10 compute-0 nova_compute[243704]: </domain>
Dec 13 04:24:10 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.717 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Preparing to wait for external event network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.717 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.718 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.718 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.720 243708 DEBUG nova.virt.libvirt.vif [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:24:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1423405409',display_name='tempest-TestEncryptedCinderVolumes-server-1423405409',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1423405409',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMTrppdqhZziQNVB9Yq1F80y48wl+jU8sk3cqpAQJlLHhl2ENknmHCD+TKy3c6EN4z48W8grnbaalYAFotzA564ZRGtO7sXcHNuoXeibeaRHuK7Hykbbohr7xM96Xy2QA==',key_name='tempest-TestEncryptedCinderVolumes-1174624010',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-drvoszv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:24:03Z,user_data=None,user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=13d79491-8168-41b3-9d61-0763591f79a4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.721 243708 DEBUG nova.network.os_vif_util [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.722 243708 DEBUG nova.network.os_vif_util [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e5:29:8f,bridge_name='br-int',has_traffic_filtering=True,id=65ae7c69-37db-4deb-9754-9061175558c0,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65ae7c69-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.723 243708 DEBUG os_vif [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:29:8f,bridge_name='br-int',has_traffic_filtering=True,id=65ae7c69-37db-4deb-9754-9061175558c0,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65ae7c69-37') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.724 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.725 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.726 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.730 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.731 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap65ae7c69-37, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.731 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap65ae7c69-37, col_values=(('external_ids', {'iface-id': '65ae7c69-37db-4deb-9754-9061175558c0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e5:29:8f', 'vm-uuid': '13d79491-8168-41b3-9d61-0763591f79a4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.733 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:10 compute-0 NetworkManager[48899]: <info>  [1765599850.7350] manager: (tap65ae7c69-37): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/124)
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.735 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.740 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.742 243708 INFO os_vif [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e5:29:8f,bridge_name='br-int',has_traffic_filtering=True,id=65ae7c69-37db-4deb-9754-9061175558c0,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65ae7c69-37')
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.803 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.804 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.804 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] No VIF found with MAC fa:16:3e:e5:29:8f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.804 243708 INFO nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Using config drive
Dec 13 04:24:10 compute-0 nova_compute[243704]: 2025-12-13 04:24:10.830 243708 DEBUG nova.storage.rbd_utils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 13d79491-8168-41b3-9d61-0763591f79a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:24:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1431428376' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1431428376' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.222 243708 INFO nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Creating config drive at /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/disk.config
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.232 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpypjic101 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:11 compute-0 ceph-mon[75071]: pgmap v1624: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.1 KiB/s wr, 70 op/s
Dec 13 04:24:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1431428376' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1431428376' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.372 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpypjic101" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.407 243708 DEBUG nova.storage.rbd_utils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] rbd image 13d79491-8168-41b3-9d61-0763591f79a4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.411 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/disk.config 13d79491-8168-41b3-9d61-0763591f79a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.523 243708 DEBUG oslo_concurrency.processutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/disk.config 13d79491-8168-41b3-9d61-0763591f79a4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.112s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.523 243708 INFO nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Deleting local config drive /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4/disk.config because it was imported into RBD.
Dec 13 04:24:11 compute-0 kernel: tap65ae7c69-37: entered promiscuous mode
Dec 13 04:24:11 compute-0 NetworkManager[48899]: <info>  [1765599851.5788] manager: (tap65ae7c69-37): new Tun device (/org/freedesktop/NetworkManager/Devices/125)
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.580 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 ovn_controller[145204]: 2025-12-13T04:24:11Z|00228|binding|INFO|Claiming lport 65ae7c69-37db-4deb-9754-9061175558c0 for this chassis.
Dec 13 04:24:11 compute-0 ovn_controller[145204]: 2025-12-13T04:24:11Z|00229|binding|INFO|65ae7c69-37db-4deb-9754-9061175558c0: Claiming fa:16:3e:e5:29:8f 10.100.0.14
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.591 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:29:8f 10.100.0.14'], port_security=['fa:16:3e:e5:29:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '13d79491-8168-41b3-9d61-0763591f79a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ad8ea73576b4cf9aad3a876effca617', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b35a742a-b386-4310-84f0-5826a0beab45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3450aaa3-6969-42ec-bd5e-da6d6d1d73eb, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=65ae7c69-37db-4deb-9754-9061175558c0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.594 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 65ae7c69-37db-4deb-9754-9061175558c0 in datapath 87c0a2c3-5f67-431b-9b32-a688ddc2bc06 bound to our chassis
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.597 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:24:11 compute-0 systemd-udevd[274345]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.610 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b14d55-6cb0-44c8-bf8f-394cee1cc8ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.611 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap87c0a2c3-51 in ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.615 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap87c0a2c3-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.616 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a6881f95-d17a-4adf-be4d-4a976bf5f82e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 NetworkManager[48899]: <info>  [1765599851.6168] device (tap65ae7c69-37): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:24:11 compute-0 NetworkManager[48899]: <info>  [1765599851.6173] device (tap65ae7c69-37): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.617 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb36b50-7489-4934-a766-753a82370c6b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 systemd-machined[206767]: New machine qemu-25-instance-00000019.
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.632 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[18d743cc-0dc6-4d14-9ec0-7b5bba65844e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 systemd[1]: Started Virtual Machine qemu-25-instance-00000019.
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.658 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f88c97ba-947c-48d3-aa00-aa63cfd3b72d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.682 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 podman[274322]: 2025-12-13 04:24:11.684734805 +0000 UTC m=+0.119002907 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.692 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 ovn_controller[145204]: 2025-12-13T04:24:11Z|00230|binding|INFO|Setting lport 65ae7c69-37db-4deb-9754-9061175558c0 ovn-installed in OVS
Dec 13 04:24:11 compute-0 ovn_controller[145204]: 2025-12-13T04:24:11Z|00231|binding|INFO|Setting lport 65ae7c69-37db-4deb-9754-9061175558c0 up in Southbound
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.697 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.697 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2371eb-edbe-4983-8fa8-9d9af7136ffc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 NetworkManager[48899]: <info>  [1765599851.7044] manager: (tap87c0a2c3-50): new Veth device (/org/freedesktop/NetworkManager/Devices/126)
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.704 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a7697a75-630a-4e8f-88bf-3a8891d8c825]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.732 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b1284ace-7139-48ff-863c-a756cd2df533]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.735 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[055e8a5d-131d-4644-832b-0ce16554be06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 NetworkManager[48899]: <info>  [1765599851.7546] device (tap87c0a2c3-50): carrier: link connected
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.760 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[0f179c23-62d3-4c08-b55e-d7f3e9283fb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.776 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2e33884b-0b37-4796-9ced-fb68a9116cac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c0a2c3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:9a:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459043, 'reachable_time': 32140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 274392, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.790 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5ac0d869-9a23-47ae-916a-bfc17ba9d3db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:9abe'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 459043, 'tstamp': 459043}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 274393, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.802 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6bb2cebf-5631-4fb0-91d1-8c5b96870ace]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap87c0a2c3-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:9a:be'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 79], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459043, 'reachable_time': 32140, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 274394, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.830 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4940b693-7967-4fca-9597-726ee548b178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1625: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.878 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[835005cb-aa2b-4697-8faa-b75ea7f5ee44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.883 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c0a2c3-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.884 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.884 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87c0a2c3-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.886 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 NetworkManager[48899]: <info>  [1765599851.8874] manager: (tap87c0a2c3-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/127)
Dec 13 04:24:11 compute-0 kernel: tap87c0a2c3-50: entered promiscuous mode
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.889 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.890 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap87c0a2c3-50, col_values=(('external_ids', {'iface-id': '4a1239ec-278e-40d8-aa2f-d801913596a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.891 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 ovn_controller[145204]: 2025-12-13T04:24:11Z|00232|binding|INFO|Releasing lport 4a1239ec-278e-40d8-aa2f-d801913596a6 from this chassis (sb_readonly=0)
Dec 13 04:24:11 compute-0 nova_compute[243704]: 2025-12-13 04:24:11.908 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.909 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.910 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[183ef564-9dd2-4af3-952c-35658d2ffe39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.910 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.pid.haproxy
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 87c0a2c3-5f67-431b-9b32-a688ddc2bc06
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:24:11 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:11.911 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'env', 'PROCESS_TAG=haproxy-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/87c0a2c3-5f67-431b-9b32-a688ddc2bc06.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:24:12 compute-0 podman[274444]: 2025-12-13 04:24:12.239540957 +0000 UTC m=+0.026687737 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:24:12 compute-0 podman[274444]: 2025-12-13 04:24:12.352700782 +0000 UTC m=+0.139847542 container create 5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 04:24:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:12 compute-0 systemd[1]: Started libpod-conmon-5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67.scope.
Dec 13 04:24:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/536745ca821d8ddeb17dd61872f6d766a7f6321621995163ecf7512b2fbab6f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:12 compute-0 podman[274444]: 2025-12-13 04:24:12.458339754 +0000 UTC m=+0.245486564 container init 5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:24:12 compute-0 podman[274444]: 2025-12-13 04:24:12.464634865 +0000 UTC m=+0.251781625 container start 5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:24:12 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[274478]: [NOTICE]   (274482) : New worker (274484) forked
Dec 13 04:24:12 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[274478]: [NOTICE]   (274482) : Loading success.
Dec 13 04:24:12 compute-0 nova_compute[243704]: 2025-12-13 04:24:12.538 243708 DEBUG nova.compute.manager [req-16c78cda-064e-4421-8db4-fe9544378385 req-b4279f5e-32cb-4b3e-a44d-3b7e53ebbec3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:12 compute-0 nova_compute[243704]: 2025-12-13 04:24:12.539 243708 DEBUG oslo_concurrency.lockutils [req-16c78cda-064e-4421-8db4-fe9544378385 req-b4279f5e-32cb-4b3e-a44d-3b7e53ebbec3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:12 compute-0 nova_compute[243704]: 2025-12-13 04:24:12.540 243708 DEBUG oslo_concurrency.lockutils [req-16c78cda-064e-4421-8db4-fe9544378385 req-b4279f5e-32cb-4b3e-a44d-3b7e53ebbec3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:12 compute-0 nova_compute[243704]: 2025-12-13 04:24:12.541 243708 DEBUG oslo_concurrency.lockutils [req-16c78cda-064e-4421-8db4-fe9544378385 req-b4279f5e-32cb-4b3e-a44d-3b7e53ebbec3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:12 compute-0 nova_compute[243704]: 2025-12-13 04:24:12.541 243708 DEBUG nova.compute.manager [req-16c78cda-064e-4421-8db4-fe9544378385 req-b4279f5e-32cb-4b3e-a44d-3b7e53ebbec3 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Processing event network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:24:13 compute-0 nova_compute[243704]: 2025-12-13 04:24:13.197 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:13 compute-0 ceph-mon[75071]: pgmap v1625: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 5.0 KiB/s wr, 115 op/s
Dec 13 04:24:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1626: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 3.7 KiB/s wr, 88 op/s
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.596 243708 DEBUG nova.compute.manager [req-6608136b-eca6-44f9-8b83-f39c66a9819c req-adde3b1f-4618-49e9-93fc-516c352fb283 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.596 243708 DEBUG oslo_concurrency.lockutils [req-6608136b-eca6-44f9-8b83-f39c66a9819c req-adde3b1f-4618-49e9-93fc-516c352fb283 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.596 243708 DEBUG oslo_concurrency.lockutils [req-6608136b-eca6-44f9-8b83-f39c66a9819c req-adde3b1f-4618-49e9-93fc-516c352fb283 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.597 243708 DEBUG oslo_concurrency.lockutils [req-6608136b-eca6-44f9-8b83-f39c66a9819c req-adde3b1f-4618-49e9-93fc-516c352fb283 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.597 243708 DEBUG nova.compute.manager [req-6608136b-eca6-44f9-8b83-f39c66a9819c req-adde3b1f-4618-49e9-93fc-516c352fb283 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] No waiting events found dispatching network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.597 243708 WARNING nova.compute.manager [req-6608136b-eca6-44f9-8b83-f39c66a9819c req-adde3b1f-4618-49e9-93fc-516c352fb283 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received unexpected event network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 for instance with vm_state building and task_state spawning.
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.828 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599854.827802, 13d79491-8168-41b3-9d61-0763591f79a4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.829 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] VM Started (Lifecycle Event)
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.832 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.835 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.840 243708 INFO nova.virt.libvirt.driver [-] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Instance spawned successfully.
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.841 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.848 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.858 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.866 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.867 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.868 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.869 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.869 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.870 243708 DEBUG nova.virt.libvirt.driver [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.877 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.878 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599854.8283908, 13d79491-8168-41b3-9d61-0763591f79a4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.878 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] VM Paused (Lifecycle Event)
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.900 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.904 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765599854.8353996, 13d79491-8168-41b3-9d61-0763591f79a4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.905 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] VM Resumed (Lifecycle Event)
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.924 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.928 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.931 243708 INFO nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Took 8.91 seconds to spawn the instance on the hypervisor.
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.931 243708 DEBUG nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.951 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.981 243708 INFO nova.compute.manager [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Took 12.46 seconds to build instance.
Dec 13 04:24:14 compute-0 nova_compute[243704]: 2025-12-13 04:24:14.995 243708 DEBUG oslo_concurrency.lockutils [None req-cf359cb3-9480-4eb4-87c1-cdd0f92aa4ad 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e420 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e420 do_prune osdmap full prune enabled
Dec 13 04:24:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e421 e421: 3 total, 3 up, 3 in
Dec 13 04:24:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e421: 3 total, 3 up, 3 in
Dec 13 04:24:15 compute-0 ceph-mon[75071]: pgmap v1626: 305 pgs: 305 active+clean; 271 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 3.7 KiB/s wr, 88 op/s
Dec 13 04:24:15 compute-0 ceph-mon[75071]: osdmap e421: 3 total, 3 up, 3 in
Dec 13 04:24:15 compute-0 nova_compute[243704]: 2025-12-13 04:24:15.736 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1628: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 19 KiB/s wr, 95 op/s
Dec 13 04:24:17 compute-0 ceph-mon[75071]: pgmap v1628: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 19 KiB/s wr, 95 op/s
Dec 13 04:24:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1629: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 17 KiB/s wr, 86 op/s
Dec 13 04:24:18 compute-0 nova_compute[243704]: 2025-12-13 04:24:18.199 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1630: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 17 KiB/s wr, 114 op/s
Dec 13 04:24:19 compute-0 nova_compute[243704]: 2025-12-13 04:24:19.871 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:20 compute-0 ceph-mon[75071]: pgmap v1629: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 17 KiB/s wr, 86 op/s
Dec 13 04:24:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e421 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:20 compute-0 nova_compute[243704]: 2025-12-13 04:24:20.784 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:20 compute-0 nova_compute[243704]: 2025-12-13 04:24:20.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:20 compute-0 nova_compute[243704]: 2025-12-13 04:24:20.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:24:20 compute-0 nova_compute[243704]: 2025-12-13 04:24:20.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.228 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.228 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.229 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.229 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 13d79491-8168-41b3-9d61-0763591f79a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:24:21 compute-0 ceph-mon[75071]: pgmap v1630: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 17 KiB/s wr, 114 op/s
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.392 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:21 compute-0 NetworkManager[48899]: <info>  [1765599861.4090] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/128)
Dec 13 04:24:21 compute-0 NetworkManager[48899]: <info>  [1765599861.4100] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/129)
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.482 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:21 compute-0 ovn_controller[145204]: 2025-12-13T04:24:21Z|00233|binding|INFO|Releasing lport 4a1239ec-278e-40d8-aa2f-d801913596a6 from this chassis (sb_readonly=0)
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.492 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.705 243708 DEBUG nova.compute.manager [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-changed-65ae7c69-37db-4deb-9754-9061175558c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.706 243708 DEBUG nova.compute.manager [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Refreshing instance network info cache due to event network-changed-65ae7c69-37db-4deb-9754-9061175558c0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:24:21 compute-0 nova_compute[243704]: 2025-12-13 04:24:21.706 243708 DEBUG oslo_concurrency.lockutils [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:24:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1195869491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1631: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 16 KiB/s wr, 121 op/s
Dec 13 04:24:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e421 do_prune osdmap full prune enabled
Dec 13 04:24:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1195869491' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e422 e422: 3 total, 3 up, 3 in
Dec 13 04:24:22 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e422: 3 total, 3 up, 3 in
Dec 13 04:24:22 compute-0 podman[274501]: 2025-12-13 04:24:22.92376059 +0000 UTC m=+0.068787892 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.132 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updating instance_info_cache with network_info: [{"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.226 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.267 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.267 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.268 243708 DEBUG oslo_concurrency.lockutils [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.268 243708 DEBUG nova.network.neutron [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Refreshing network info cache for port 65ae7c69-37db-4deb-9754-9061175558c0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.269 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.270 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.270 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.270 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.299 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.300 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.301 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.301 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.302 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e422 do_prune osdmap full prune enabled
Dec 13 04:24:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e423 e423: 3 total, 3 up, 3 in
Dec 13 04:24:23 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e423: 3 total, 3 up, 3 in
Dec 13 04:24:23 compute-0 ceph-mon[75071]: pgmap v1631: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 16 KiB/s wr, 121 op/s
Dec 13 04:24:23 compute-0 ceph-mon[75071]: osdmap e422: 3 total, 3 up, 3 in
Dec 13 04:24:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:24:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/743936282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1634: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 511 B/s wr, 96 op/s
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.854 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.932 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:24:23 compute-0 nova_compute[243704]: 2025-12-13 04:24:23.932 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-00000019 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.094 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.095 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4158MB free_disk=59.987794645130634GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.095 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.096 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.191 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 13d79491-8168-41b3-9d61-0763591f79a4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.192 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.192 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.209 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing inventories for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.226 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating ProviderTree inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.226 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.242 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing aggregate associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.265 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing trait associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.295 243708 DEBUG nova.network.neutron [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updated VIF entry in instance network info cache for port 65ae7c69-37db-4deb-9754-9061175558c0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.296 243708 DEBUG nova.network.neutron [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updating instance_info_cache with network_info: [{"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.313 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.337 243708 DEBUG oslo_concurrency.lockutils [req-44beb708-a7e1-4624-bc53-40844959a2e6 req-818f5063-98ce-426b-ac6a-8af0ee54faa0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-13d79491-8168-41b3-9d61-0763591f79a4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:24:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e423 do_prune osdmap full prune enabled
Dec 13 04:24:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e424 e424: 3 total, 3 up, 3 in
Dec 13 04:24:24 compute-0 ceph-mon[75071]: osdmap e423: 3 total, 3 up, 3 in
Dec 13 04:24:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/743936282' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:24 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e424: 3 total, 3 up, 3 in
Dec 13 04:24:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3130156284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:24:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/598303607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.898 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.906 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:24:24 compute-0 nova_compute[243704]: 2025-12-13 04:24:24.925 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:24:25 compute-0 nova_compute[243704]: 2025-12-13 04:24:25.038 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:24:25 compute-0 nova_compute[243704]: 2025-12-13 04:24:25.039 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e424 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e424 do_prune osdmap full prune enabled
Dec 13 04:24:25 compute-0 ceph-mon[75071]: pgmap v1634: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 511 B/s wr, 96 op/s
Dec 13 04:24:25 compute-0 ceph-mon[75071]: osdmap e424: 3 total, 3 up, 3 in
Dec 13 04:24:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3130156284' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/598303607' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e425 e425: 3 total, 3 up, 3 in
Dec 13 04:24:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e425: 3 total, 3 up, 3 in
Dec 13 04:24:25 compute-0 nova_compute[243704]: 2025-12-13 04:24:25.645 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:25 compute-0 nova_compute[243704]: 2025-12-13 04:24:25.835 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1637: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 7.7 KiB/s wr, 83 op/s
Dec 13 04:24:25 compute-0 nova_compute[243704]: 2025-12-13 04:24:25.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:25 compute-0 podman[274564]: 2025-12-13 04:24:25.926958359 +0000 UTC m=+0.059993952 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:24:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e425 do_prune osdmap full prune enabled
Dec 13 04:24:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e426 e426: 3 total, 3 up, 3 in
Dec 13 04:24:26 compute-0 ceph-mon[75071]: osdmap e425: 3 total, 3 up, 3 in
Dec 13 04:24:26 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e426: 3 total, 3 up, 3 in
Dec 13 04:24:27 compute-0 ceph-mon[75071]: pgmap v1637: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 7.7 KiB/s wr, 83 op/s
Dec 13 04:24:27 compute-0 ceph-mon[75071]: osdmap e426: 3 total, 3 up, 3 in
Dec 13 04:24:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1639: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 6.9 KiB/s wr, 75 op/s
Dec 13 04:24:28 compute-0 nova_compute[243704]: 2025-12-13 04:24:28.231 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e426 do_prune osdmap full prune enabled
Dec 13 04:24:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e427 e427: 3 total, 3 up, 3 in
Dec 13 04:24:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e427: 3 total, 3 up, 3 in
Dec 13 04:24:28 compute-0 nova_compute[243704]: 2025-12-13 04:24:28.871 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:28 compute-0 ovn_controller[145204]: 2025-12-13T04:24:28Z|00056|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.14
Dec 13 04:24:28 compute-0 ovn_controller[145204]: 2025-12-13T04:24:28Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e5:29:8f 10.100.0.14
Dec 13 04:24:28 compute-0 nova_compute[243704]: 2025-12-13 04:24:28.954 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:24:28 compute-0 nova_compute[243704]: 2025-12-13 04:24:28.955 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:24:29 compute-0 ceph-mon[75071]: pgmap v1639: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 6.9 KiB/s wr, 75 op/s
Dec 13 04:24:29 compute-0 ceph-mon[75071]: osdmap e427: 3 total, 3 up, 3 in
Dec 13 04:24:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2080149999' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2080149999' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1641: 305 pgs: 305 active+clean; 275 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 761 KiB/s wr, 145 op/s
Dec 13 04:24:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2820149077' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e427 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e427 do_prune osdmap full prune enabled
Dec 13 04:24:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2080149999' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2080149999' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2820149077' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:30 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e428 e428: 3 total, 3 up, 3 in
Dec 13 04:24:30 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e428: 3 total, 3 up, 3 in
Dec 13 04:24:30 compute-0 nova_compute[243704]: 2025-12-13 04:24:30.889 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e428 do_prune osdmap full prune enabled
Dec 13 04:24:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e429 e429: 3 total, 3 up, 3 in
Dec 13 04:24:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e429: 3 total, 3 up, 3 in
Dec 13 04:24:31 compute-0 ceph-mon[75071]: pgmap v1641: 305 pgs: 305 active+clean; 275 MiB data, 564 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 761 KiB/s wr, 145 op/s
Dec 13 04:24:31 compute-0 ceph-mon[75071]: osdmap e428: 3 total, 3 up, 3 in
Dec 13 04:24:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1644: 305 pgs: 305 active+clean; 283 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 249 op/s
Dec 13 04:24:32 compute-0 ceph-mon[75071]: osdmap e429: 3 total, 3 up, 3 in
Dec 13 04:24:33 compute-0 ovn_controller[145204]: 2025-12-13T04:24:33Z|00058|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.3 does not match offer 10.100.0.14
Dec 13 04:24:33 compute-0 ovn_controller[145204]: 2025-12-13T04:24:33Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:e5:29:8f 10.100.0.14
Dec 13 04:24:33 compute-0 nova_compute[243704]: 2025-12-13 04:24:33.287 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e429 do_prune osdmap full prune enabled
Dec 13 04:24:33 compute-0 ceph-mon[75071]: pgmap v1644: 305 pgs: 305 active+clean; 283 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 249 op/s
Dec 13 04:24:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e430 e430: 3 total, 3 up, 3 in
Dec 13 04:24:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e430: 3 total, 3 up, 3 in
Dec 13 04:24:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1646: 305 pgs: 305 active+clean; 283 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 249 op/s
Dec 13 04:24:33 compute-0 ovn_controller[145204]: 2025-12-13T04:24:33Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e5:29:8f 10.100.0.14
Dec 13 04:24:33 compute-0 ovn_controller[145204]: 2025-12-13T04:24:33Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e5:29:8f 10.100.0.14
Dec 13 04:24:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1263055856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1263055856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:34 compute-0 ceph-mon[75071]: osdmap e430: 3 total, 3 up, 3 in
Dec 13 04:24:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1263055856' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1263055856' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:35.101 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:35.102 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:35.103 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e430 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e430 do_prune osdmap full prune enabled
Dec 13 04:24:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e431 e431: 3 total, 3 up, 3 in
Dec 13 04:24:35 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e431: 3 total, 3 up, 3 in
Dec 13 04:24:35 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:35 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1650541600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:35 compute-0 ceph-mon[75071]: pgmap v1646: 305 pgs: 305 active+clean; 283 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.3 MiB/s wr, 249 op/s
Dec 13 04:24:35 compute-0 ceph-mon[75071]: osdmap e431: 3 total, 3 up, 3 in
Dec 13 04:24:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1650541600' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1648: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 811 KiB/s wr, 194 op/s
Dec 13 04:24:35 compute-0 nova_compute[243704]: 2025-12-13 04:24:35.947 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e431 do_prune osdmap full prune enabled
Dec 13 04:24:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e432 e432: 3 total, 3 up, 3 in
Dec 13 04:24:36 compute-0 ceph-mon[75071]: pgmap v1648: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 811 KiB/s wr, 194 op/s
Dec 13 04:24:36 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e432: 3 total, 3 up, 3 in
Dec 13 04:24:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e432 do_prune osdmap full prune enabled
Dec 13 04:24:37 compute-0 ceph-mon[75071]: osdmap e432: 3 total, 3 up, 3 in
Dec 13 04:24:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e433 e433: 3 total, 3 up, 3 in
Dec 13 04:24:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e433: 3 total, 3 up, 3 in
Dec 13 04:24:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1651: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1010 KiB/s wr, 241 op/s
Dec 13 04:24:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576497087' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:38 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3576497087' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:24:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 110K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.69 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 44.92 MB, 0.07 MB/s
                                           Interval WAL: 16K writes, 6708 syncs, 2.40 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:24:38 compute-0 nova_compute[243704]: 2025-12-13 04:24:38.289 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:38 compute-0 ceph-mon[75071]: osdmap e433: 3 total, 3 up, 3 in
Dec 13 04:24:38 compute-0 ceph-mon[75071]: pgmap v1651: 305 pgs: 2 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 301 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1010 KiB/s wr, 241 op/s
Dec 13 04:24:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3576497087' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:38 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3576497087' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:38 compute-0 sudo[274584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:24:38 compute-0 sudo[274584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:38 compute-0 sudo[274584]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:38 compute-0 sudo[274609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:24:38 compute-0 sudo[274609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1610257300' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1610257300' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:39 compute-0 sudo[274609]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:24:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:24:39 compute-0 sudo[274666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:24:39 compute-0 sudo[274666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:39 compute-0 sudo[274666]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1610257300' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1610257300' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:24:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:24:39 compute-0 sudo[274691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:24:39 compute-0 sudo[274691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1652: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 720 KiB/s wr, 225 op/s
Dec 13 04:24:40 compute-0 podman[274729]: 2025-12-13 04:24:40.087179976 +0000 UTC m=+0.071282600 container create 66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_almeida, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:24:40 compute-0 systemd[1]: Started libpod-conmon-66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3.scope.
Dec 13 04:24:40 compute-0 podman[274729]: 2025-12-13 04:24:40.056844421 +0000 UTC m=+0.040947085 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:24:40 compute-0 podman[274729]: 2025-12-13 04:24:40.21426933 +0000 UTC m=+0.198372034 container init 66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_almeida, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:24:40 compute-0 podman[274729]: 2025-12-13 04:24:40.228118057 +0000 UTC m=+0.212220691 container start 66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_almeida, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Dec 13 04:24:40 compute-0 podman[274729]: 2025-12-13 04:24:40.233544434 +0000 UTC m=+0.217647128 container attach 66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_almeida, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 04:24:40 compute-0 kind_almeida[274745]: 167 167
Dec 13 04:24:40 compute-0 systemd[1]: libpod-66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3.scope: Deactivated successfully.
Dec 13 04:24:40 compute-0 podman[274729]: 2025-12-13 04:24:40.237224984 +0000 UTC m=+0.221327618 container died 66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Dec 13 04:24:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-0f66a32c340697ca9d6ccb32a8693a7dc70205710e76c04065ce9ef52eea3553-merged.mount: Deactivated successfully.
Dec 13 04:24:40 compute-0 podman[274729]: 2025-12-13 04:24:40.295117598 +0000 UTC m=+0.279220212 container remove 66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_almeida, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:24:40 compute-0 systemd[1]: libpod-conmon-66eefe618a83ee089ae0de740f27076d0b12d32b93ab1dfe72d6a25cd9fd66d3.scope: Deactivated successfully.
Dec 13 04:24:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e433 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e433 do_prune osdmap full prune enabled
Dec 13 04:24:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e434 e434: 3 total, 3 up, 3 in
Dec 13 04:24:40 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e434: 3 total, 3 up, 3 in
Dec 13 04:24:40 compute-0 podman[274769]: 2025-12-13 04:24:40.566184447 +0000 UTC m=+0.077601491 container create b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_gould, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:24:40 compute-0 systemd[1]: Started libpod-conmon-b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0.scope.
Dec 13 04:24:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:24:40
Dec 13 04:24:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:24:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:24:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'backups', 'images', 'default.rgw.control', 'vms']
Dec 13 04:24:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:24:40 compute-0 podman[274769]: 2025-12-13 04:24:40.539683876 +0000 UTC m=+0.051100960 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490673ce0ef778efad68bc859923efbe150c5b2eb59171d5e9839e7963383737/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490673ce0ef778efad68bc859923efbe150c5b2eb59171d5e9839e7963383737/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490673ce0ef778efad68bc859923efbe150c5b2eb59171d5e9839e7963383737/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490673ce0ef778efad68bc859923efbe150c5b2eb59171d5e9839e7963383737/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/490673ce0ef778efad68bc859923efbe150c5b2eb59171d5e9839e7963383737/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:40 compute-0 podman[274769]: 2025-12-13 04:24:40.664800878 +0000 UTC m=+0.176217942 container init b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_gould, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:24:40 compute-0 podman[274769]: 2025-12-13 04:24:40.672183178 +0000 UTC m=+0.183600232 container start b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:24:40 compute-0 podman[274769]: 2025-12-13 04:24:40.676209678 +0000 UTC m=+0.187626722 container attach b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:24:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899621048' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/899621048' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:40 compute-0 nova_compute[243704]: 2025-12-13 04:24:40.950 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:41 compute-0 nostalgic_gould[274786]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:24:41 compute-0 nostalgic_gould[274786]: --> All data devices are unavailable
Dec 13 04:24:41 compute-0 systemd[1]: libpod-b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0.scope: Deactivated successfully.
Dec 13 04:24:41 compute-0 podman[274769]: 2025-12-13 04:24:41.239221243 +0000 UTC m=+0.750638287 container died b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_gould, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:24:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-490673ce0ef778efad68bc859923efbe150c5b2eb59171d5e9839e7963383737-merged.mount: Deactivated successfully.
Dec 13 04:24:41 compute-0 podman[274769]: 2025-12-13 04:24:41.293520459 +0000 UTC m=+0.804937533 container remove b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:24:41 compute-0 systemd[1]: libpod-conmon-b434af60594d846cf46e518dfcbfe02268236da8078b36ab869495ea4d0f9db0.scope: Deactivated successfully.
Dec 13 04:24:41 compute-0 sudo[274691]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:41 compute-0 ceph-mon[75071]: pgmap v1652: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 846 KiB/s rd, 720 KiB/s wr, 225 op/s
Dec 13 04:24:41 compute-0 ceph-mon[75071]: osdmap e434: 3 total, 3 up, 3 in
Dec 13 04:24:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/899621048' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/899621048' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:41 compute-0 sudo[274818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:24:41 compute-0 sudo[274818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:41 compute-0 sudo[274818]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:41 compute-0 sudo[274843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:24:41 compute-0 sudo[274843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e434 do_prune osdmap full prune enabled
Dec 13 04:24:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e435 e435: 3 total, 3 up, 3 in
Dec 13 04:24:41 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e435: 3 total, 3 up, 3 in
Dec 13 04:24:41 compute-0 podman[274880]: 2025-12-13 04:24:41.840032455 +0000 UTC m=+0.081064295 container create 18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_knuth, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:24:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1655: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 15 KiB/s wr, 109 op/s
Dec 13 04:24:41 compute-0 systemd[1]: Started libpod-conmon-18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359.scope.
Dec 13 04:24:41 compute-0 podman[274880]: 2025-12-13 04:24:41.792619976 +0000 UTC m=+0.033651846 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:24:41 compute-0 podman[274880]: 2025-12-13 04:24:41.940989349 +0000 UTC m=+0.182021219 container init 18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_knuth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:24:41 compute-0 podman[274880]: 2025-12-13 04:24:41.950024966 +0000 UTC m=+0.191056836 container start 18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_knuth, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:24:41 compute-0 podman[274880]: 2025-12-13 04:24:41.95460955 +0000 UTC m=+0.195641390 container attach 18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Dec 13 04:24:41 compute-0 keen_knuth[274907]: 167 167
Dec 13 04:24:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532921479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:41 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/532921479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:41 compute-0 systemd[1]: libpod-18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359.scope: Deactivated successfully.
Dec 13 04:24:41 compute-0 podman[274880]: 2025-12-13 04:24:41.982914329 +0000 UTC m=+0.223946189 container died 18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:24:42 compute-0 podman[274894]: 2025-12-13 04:24:42.005434861 +0000 UTC m=+0.144212631 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec 13 04:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-754278798e28ce78a8f36d1f40b27b3fc141b32b49885194c49e97c598cba9db-merged.mount: Deactivated successfully.
Dec 13 04:24:42 compute-0 podman[274880]: 2025-12-13 04:24:42.046363604 +0000 UTC m=+0.287395444 container remove 18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_knuth, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Dec 13 04:24:42 compute-0 systemd[1]: libpod-conmon-18bdeb0adf664fae8e44f23ee00699f8d031d99a0dcced443c2d6a7062c33359.scope: Deactivated successfully.
Dec 13 04:24:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1252743244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1252743244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:42 compute-0 podman[274945]: 2025-12-13 04:24:42.22647017 +0000 UTC m=+0.045532618 container create 18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_meninsky, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:24:42 compute-0 systemd[1]: Started libpod-conmon-18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf.scope.
Dec 13 04:24:42 compute-0 podman[274945]: 2025-12-13 04:24:42.20734335 +0000 UTC m=+0.026405798 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:42 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2740831a5499fa43a6379d9b48aa756353daa4a4bb2cfce41428127121627a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2740831a5499fa43a6379d9b48aa756353daa4a4bb2cfce41428127121627a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2740831a5499fa43a6379d9b48aa756353daa4a4bb2cfce41428127121627a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2740831a5499fa43a6379d9b48aa756353daa4a4bb2cfce41428127121627a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:42 compute-0 podman[274945]: 2025-12-13 04:24:42.33646515 +0000 UTC m=+0.155527578 container init 18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_meninsky, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:24:42 compute-0 podman[274945]: 2025-12-13 04:24:42.348542859 +0000 UTC m=+0.167605287 container start 18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_meninsky, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:24:42 compute-0 podman[274945]: 2025-12-13 04:24:42.352338512 +0000 UTC m=+0.171400940 container attach 18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_meninsky, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]: {
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:     "0": [
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:         {
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "devices": [
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "/dev/loop3"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             ],
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_name": "ceph_lv0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_size": "21470642176",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "name": "ceph_lv0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "tags": {
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cluster_name": "ceph",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.crush_device_class": "",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.encrypted": "0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.objectstore": "bluestore",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osd_id": "0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.type": "block",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.vdo": "0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.with_tpm": "0"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             },
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "type": "block",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "vg_name": "ceph_vg0"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:         }
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:     ],
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:     "1": [
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:         {
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "devices": [
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "/dev/loop4"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             ],
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_name": "ceph_lv1",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_size": "21470642176",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "name": "ceph_lv1",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "tags": {
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cluster_name": "ceph",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.crush_device_class": "",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.encrypted": "0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.objectstore": "bluestore",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osd_id": "1",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.type": "block",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.vdo": "0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.with_tpm": "0"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             },
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "type": "block",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "vg_name": "ceph_vg1"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:         }
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:     ],
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:     "2": [
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:         {
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "devices": [
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "/dev/loop5"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             ],
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_name": "ceph_lv2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_size": "21470642176",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "name": "ceph_lv2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "tags": {
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.cluster_name": "ceph",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.crush_device_class": "",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.encrypted": "0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.objectstore": "bluestore",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osd_id": "2",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.type": "block",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.vdo": "0",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:                 "ceph.with_tpm": "0"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             },
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "type": "block",
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:             "vg_name": "ceph_vg2"
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:         }
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]:     ]
Dec 13 04:24:42 compute-0 adoring_meninsky[274961]: }
Dec 13 04:24:42 compute-0 ceph-mon[75071]: osdmap e435: 3 total, 3 up, 3 in
Dec 13 04:24:42 compute-0 ceph-mon[75071]: pgmap v1655: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 78 KiB/s rd, 15 KiB/s wr, 109 op/s
Dec 13 04:24:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/532921479' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/532921479' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1252743244' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:42 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1252743244' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:42 compute-0 systemd[1]: libpod-18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf.scope: Deactivated successfully.
Dec 13 04:24:42 compute-0 podman[274945]: 2025-12-13 04:24:42.680975445 +0000 UTC m=+0.500037863 container died 18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:24:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b2740831a5499fa43a6379d9b48aa756353daa4a4bb2cfce41428127121627a-merged.mount: Deactivated successfully.
Dec 13 04:24:42 compute-0 podman[274945]: 2025-12-13 04:24:42.732070515 +0000 UTC m=+0.551132933 container remove 18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_meninsky, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:24:42 compute-0 systemd[1]: libpod-conmon-18d7c05bbd463fc50a00935ce941ee604b5206d45726ce99fde32ad1ac9bdebf.scope: Deactivated successfully.
Dec 13 04:24:42 compute-0 sudo[274843]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:42 compute-0 sudo[274980]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:24:42 compute-0 sudo[274980]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:42 compute-0 sudo[274980]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:24:42 compute-0 sudo[275005]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:24:42 compute-0 sudo[275005]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:24:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:24:43 compute-0 podman[275042]: 2025-12-13 04:24:43.173099414 +0000 UTC m=+0.041845969 container create bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:24:43 compute-0 systemd[1]: Started libpod-conmon-bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa.scope.
Dec 13 04:24:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:24:43 compute-0 podman[275042]: 2025-12-13 04:24:43.155166497 +0000 UTC m=+0.023913082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:43 compute-0 nova_compute[243704]: 2025-12-13 04:24:43.292 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:43 compute-0 podman[275042]: 2025-12-13 04:24:43.732989824 +0000 UTC m=+0.601736459 container init bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:24:43 compute-0 podman[275042]: 2025-12-13 04:24:43.745015151 +0000 UTC m=+0.613761736 container start bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:24:43 compute-0 musing_kilby[275058]: 167 167
Dec 13 04:24:43 compute-0 systemd[1]: libpod-bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa.scope: Deactivated successfully.
Dec 13 04:24:43 compute-0 conmon[275058]: conmon bb006891831ea470757f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa.scope/container/memory.events
Dec 13 04:24:43 compute-0 podman[275042]: 2025-12-13 04:24:43.842436819 +0000 UTC m=+0.711183434 container attach bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Dec 13 04:24:43 compute-0 podman[275042]: 2025-12-13 04:24:43.843669353 +0000 UTC m=+0.712415938 container died bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:24:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2040134084' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2040134084' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1656: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 13 KiB/s wr, 91 op/s
Dec 13 04:24:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2040134084' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2040134084' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-c866f093016dc030db39b386de214f35789b86b72606889cffc42601bb5495f9-merged.mount: Deactivated successfully.
Dec 13 04:24:43 compute-0 podman[275042]: 2025-12-13 04:24:43.971260941 +0000 UTC m=+0.840007496 container remove bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_kilby, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:24:43 compute-0 systemd[1]: libpod-conmon-bb006891831ea470757faf1ddfd17811003b012f230b003556ecece9f9774dfa.scope: Deactivated successfully.
Dec 13 04:24:44 compute-0 podman[275085]: 2025-12-13 04:24:44.167951938 +0000 UTC m=+0.056519807 container create 25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:24:44 compute-0 podman[275085]: 2025-12-13 04:24:44.140932224 +0000 UTC m=+0.029500173 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:24:44 compute-0 systemd[1]: Started libpod-conmon-25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6.scope.
Dec 13 04:24:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7957592b8264d39940c79e1cd135de20e296e47a5ac48cf3c155efe005f051fb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7957592b8264d39940c79e1cd135de20e296e47a5ac48cf3c155efe005f051fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7957592b8264d39940c79e1cd135de20e296e47a5ac48cf3c155efe005f051fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7957592b8264d39940c79e1cd135de20e296e47a5ac48cf3c155efe005f051fb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:24:44 compute-0 podman[275085]: 2025-12-13 04:24:44.31993399 +0000 UTC m=+0.208501869 container init 25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_golick, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:24:44 compute-0 podman[275085]: 2025-12-13 04:24:44.329161241 +0000 UTC m=+0.217729100 container start 25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:24:44 compute-0 podman[275085]: 2025-12-13 04:24:44.333504999 +0000 UTC m=+0.222072878 container attach 25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_golick, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:24:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:24:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.6 total, 600.0 interval
                                           Cumulative writes: 27K writes, 107K keys, 27K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 27K writes, 9948 syncs, 2.76 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 57K keys, 13K commit groups, 1.0 writes per commit group, ingest: 28.01 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5800 syncs, 2.33 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:24:44 compute-0 ceph-mon[75071]: pgmap v1656: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 13 KiB/s wr, 91 op/s
Dec 13 04:24:45 compute-0 lvm[275181]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:24:45 compute-0 lvm[275180]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:24:45 compute-0 lvm[275180]: VG ceph_vg0 finished
Dec 13 04:24:45 compute-0 lvm[275181]: VG ceph_vg1 finished
Dec 13 04:24:45 compute-0 lvm[275183]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:24:45 compute-0 lvm[275183]: VG ceph_vg2 finished
Dec 13 04:24:45 compute-0 jolly_golick[275102]: {}
Dec 13 04:24:45 compute-0 systemd[1]: libpod-25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6.scope: Deactivated successfully.
Dec 13 04:24:45 compute-0 podman[275085]: 2025-12-13 04:24:45.199531102 +0000 UTC m=+1.088098961 container died 25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:24:45 compute-0 systemd[1]: libpod-25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6.scope: Consumed 1.407s CPU time.
Dec 13 04:24:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e435 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e435 do_prune osdmap full prune enabled
Dec 13 04:24:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e436 e436: 3 total, 3 up, 3 in
Dec 13 04:24:45 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e436: 3 total, 3 up, 3 in
Dec 13 04:24:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-7957592b8264d39940c79e1cd135de20e296e47a5ac48cf3c155efe005f051fb-merged.mount: Deactivated successfully.
Dec 13 04:24:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141811796' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141811796' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1658: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 13 KiB/s wr, 164 op/s
Dec 13 04:24:45 compute-0 podman[275085]: 2025-12-13 04:24:45.858081304 +0000 UTC m=+1.746649193 container remove 25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_golick, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Dec 13 04:24:45 compute-0 systemd[1]: libpod-conmon-25db5630aded53948073037d3c3105117a16a811a53abbcfbeb44dc7c2ee6ef6.scope: Deactivated successfully.
Dec 13 04:24:45 compute-0 sudo[275005]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:24:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:24:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:24:45 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:24:45 compute-0 nova_compute[243704]: 2025-12-13 04:24:45.955 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:45.983 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:24:45 compute-0 nova_compute[243704]: 2025-12-13 04:24:45.984 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:45 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:45.986 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:24:46 compute-0 sudo[275199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:24:46 compute-0 sudo[275199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:24:46 compute-0 sudo[275199]: pam_unix(sudo:session): session closed for user root
Dec 13 04:24:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:46 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2019809614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:46 compute-0 ceph-mon[75071]: osdmap e436: 3 total, 3 up, 3 in
Dec 13 04:24:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2141811796' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2141811796' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:46 compute-0 ceph-mon[75071]: pgmap v1658: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 13 KiB/s wr, 164 op/s
Dec 13 04:24:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:24:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:24:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2019809614' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e436 do_prune osdmap full prune enabled
Dec 13 04:24:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e437 e437: 3 total, 3 up, 3 in
Dec 13 04:24:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e437: 3 total, 3 up, 3 in
Dec 13 04:24:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1660: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 3.4 KiB/s wr, 119 op/s
Dec 13 04:24:47 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:47.989 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:48 compute-0 nova_compute[243704]: 2025-12-13 04:24:48.295 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e437 do_prune osdmap full prune enabled
Dec 13 04:24:49 compute-0 ceph-mon[75071]: osdmap e437: 3 total, 3 up, 3 in
Dec 13 04:24:49 compute-0 ceph-mon[75071]: pgmap v1660: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 87 KiB/s rd, 3.4 KiB/s wr, 119 op/s
Dec 13 04:24:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e438 e438: 3 total, 3 up, 3 in
Dec 13 04:24:49 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e438: 3 total, 3 up, 3 in
Dec 13 04:24:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1662: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 5.5 KiB/s wr, 147 op/s
Dec 13 04:24:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:24:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.6 total, 600.0 interval
                                           Cumulative writes: 20K writes, 84K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 7076 syncs, 2.88 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 29.45 MB, 0.05 MB/s
                                           Interval WAL: 10K writes, 4189 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:24:50 compute-0 ceph-mon[75071]: osdmap e438: 3 total, 3 up, 3 in
Dec 13 04:24:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e438 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e438 do_prune osdmap full prune enabled
Dec 13 04:24:50 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e439 e439: 3 total, 3 up, 3 in
Dec 13 04:24:50 compute-0 nova_compute[243704]: 2025-12-13 04:24:50.978 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:50 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e439: 3 total, 3 up, 3 in
Dec 13 04:24:51 compute-0 ceph-mon[75071]: pgmap v1662: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 5.5 KiB/s wr, 147 op/s
Dec 13 04:24:51 compute-0 ceph-mon[75071]: osdmap e439: 3 total, 3 up, 3 in
Dec 13 04:24:51 compute-0 ovn_controller[145204]: 2025-12-13T04:24:51Z|00234|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec 13 04:24:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1664: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Dec 13 04:24:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e439 do_prune osdmap full prune enabled
Dec 13 04:24:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e440 e440: 3 total, 3 up, 3 in
Dec 13 04:24:51 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e440: 3 total, 3 up, 3 in
Dec 13 04:24:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/641526681' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/641526681' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.725136529050969e-06 of space, bias 1.0, pg target 0.0020175409587152907 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0031705665085366474 of space, bias 1.0, pg target 0.9511699525609942 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 3.380771415140617e-06 of space, bias 1.0, pg target 0.001014231424542185 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666557315489336 of space, bias 1.0, pg target 0.19999671946468006 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 8.619055971857424e-07 of space, bias 4.0, pg target 0.001034286716622891 quantized to 16 (current 16)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:24:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:24:52 compute-0 ceph-mon[75071]: pgmap v1664: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Dec 13 04:24:52 compute-0 ceph-mon[75071]: osdmap e440: 3 total, 3 up, 3 in
Dec 13 04:24:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/641526681' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/641526681' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:53 compute-0 nova_compute[243704]: 2025-12-13 04:24:53.343 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1666: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Dec 13 04:24:53 compute-0 podman[275224]: 2025-12-13 04:24:53.934632729 +0000 UTC m=+0.071690650 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.210 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.210 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.211 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.211 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.211 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.213 243708 INFO nova.compute.manager [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Terminating instance
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.214 243708 DEBUG nova.compute.manager [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:24:54 compute-0 kernel: tap65ae7c69-37 (unregistering): left promiscuous mode
Dec 13 04:24:54 compute-0 NetworkManager[48899]: <info>  [1765599894.2665] device (tap65ae7c69-37): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:24:54 compute-0 ovn_controller[145204]: 2025-12-13T04:24:54Z|00235|binding|INFO|Releasing lport 65ae7c69-37db-4deb-9754-9061175558c0 from this chassis (sb_readonly=0)
Dec 13 04:24:54 compute-0 ovn_controller[145204]: 2025-12-13T04:24:54Z|00236|binding|INFO|Setting lport 65ae7c69-37db-4deb-9754-9061175558c0 down in Southbound
Dec 13 04:24:54 compute-0 ovn_controller[145204]: 2025-12-13T04:24:54Z|00237|binding|INFO|Removing iface tap65ae7c69-37 ovn-installed in OVS
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.282 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.290 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e5:29:8f 10.100.0.14'], port_security=['fa:16:3e:e5:29:8f 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '13d79491-8168-41b3-9d61-0763591f79a4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3ad8ea73576b4cf9aad3a876effca617', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b35a742a-b386-4310-84f0-5826a0beab45', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3450aaa3-6969-42ec-bd5e-da6d6d1d73eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=65ae7c69-37db-4deb-9754-9061175558c0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.292 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 65ae7c69-37db-4deb-9754-9061175558c0 in datapath 87c0a2c3-5f67-431b-9b32-a688ddc2bc06 unbound from our chassis
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.295 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 87c0a2c3-5f67-431b-9b32-a688ddc2bc06, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.296 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8ed39da8-1ea0-4bb3-90fb-97650668b222]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.297 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 namespace which is not needed anymore
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.307 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:54 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Deactivated successfully.
Dec 13 04:24:54 compute-0 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000019.scope: Consumed 17.295s CPU time.
Dec 13 04:24:54 compute-0 systemd-machined[206767]: Machine qemu-25-instance-00000019 terminated.
Dec 13 04:24:54 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[274478]: [NOTICE]   (274482) : haproxy version is 2.8.14-c23fe91
Dec 13 04:24:54 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[274478]: [NOTICE]   (274482) : path to executable is /usr/sbin/haproxy
Dec 13 04:24:54 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[274478]: [WARNING]  (274482) : Exiting Master process...
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.467 243708 INFO nova.virt.libvirt.driver [-] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Instance destroyed successfully.
Dec 13 04:24:54 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[274478]: [ALERT]    (274482) : Current worker (274484) exited with code 143 (Terminated)
Dec 13 04:24:54 compute-0 neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06[274478]: [WARNING]  (274482) : All workers exited. Exiting... (0)
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.468 243708 DEBUG nova.objects.instance [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lazy-loading 'resources' on Instance uuid 13d79491-8168-41b3-9d61-0763591f79a4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:24:54 compute-0 systemd[1]: libpod-5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67.scope: Deactivated successfully.
Dec 13 04:24:54 compute-0 podman[275268]: 2025-12-13 04:24:54.477502876 +0000 UTC m=+0.059866478 container died 5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.478 243708 DEBUG nova.virt.libvirt.vif [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:24:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestEncryptedCinderVolumes-server-1423405409',display_name='tempest-TestEncryptedCinderVolumes-server-1423405409',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testencryptedcindervolumes-server-1423405409',id=25,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAMTrppdqhZziQNVB9Yq1F80y48wl+jU8sk3cqpAQJlLHhl2ENknmHCD+TKy3c6EN4z48W8grnbaalYAFotzA564ZRGtO7sXcHNuoXeibeaRHuK7Hykbbohr7xM96Xy2QA==',key_name='tempest-TestEncryptedCinderVolumes-1174624010',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:24:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3ad8ea73576b4cf9aad3a876effca617',ramdisk_id='',reservation_id='r-drvoszv0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TestEncryptedCinderVolumes-1691115809',owner_user_name='tempest-TestEncryptedCinderVolumes-1691115809-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:24:14Z,user_data=None,user_id='439e16bdacdd484cbdfe5b2ff762e327',uuid=13d79491-8168-41b3-9d61-0763591f79a4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.479 243708 DEBUG nova.network.os_vif_util [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converting VIF {"id": "65ae7c69-37db-4deb-9754-9061175558c0", "address": "fa:16:3e:e5:29:8f", "network": {"id": "87c0a2c3-5f67-431b-9b32-a688ddc2bc06", "bridge": "br-int", "label": "tempest-TestEncryptedCinderVolumes-1275533184-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3ad8ea73576b4cf9aad3a876effca617", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap65ae7c69-37", "ovs_interfaceid": "65ae7c69-37db-4deb-9754-9061175558c0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.480 243708 DEBUG nova.network.os_vif_util [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e5:29:8f,bridge_name='br-int',has_traffic_filtering=True,id=65ae7c69-37db-4deb-9754-9061175558c0,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65ae7c69-37') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.480 243708 DEBUG os_vif [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:29:8f,bridge_name='br-int',has_traffic_filtering=True,id=65ae7c69-37db-4deb-9754-9061175558c0,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65ae7c69-37') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.482 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.482 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap65ae7c69-37, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.484 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.490 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.492 243708 INFO os_vif [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e5:29:8f,bridge_name='br-int',has_traffic_filtering=True,id=65ae7c69-37db-4deb-9754-9061175558c0,network=Network(87c0a2c3-5f67-431b-9b32-a688ddc2bc06),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap65ae7c69-37')
Dec 13 04:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67-userdata-shm.mount: Deactivated successfully.
Dec 13 04:24:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-536745ca821d8ddeb17dd61872f6d766a7f6321621995163ecf7512b2fbab6f3-merged.mount: Deactivated successfully.
Dec 13 04:24:54 compute-0 podman[275268]: 2025-12-13 04:24:54.527266939 +0000 UTC m=+0.109630551 container cleanup 5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:24:54 compute-0 systemd[1]: libpod-conmon-5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67.scope: Deactivated successfully.
Dec 13 04:24:54 compute-0 podman[275324]: 2025-12-13 04:24:54.605105675 +0000 UTC m=+0.050798011 container remove 5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.614 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2281f9f9-2920-4c1b-babb-e7c75d9e5bd6]: (4, ('Sat Dec 13 04:24:54 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 (5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67)\n5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67\nSat Dec 13 04:24:54 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 (5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67)\n5d7423bbae245560f1bd94e1bd57599fa60dc07d3a9dfe6ba21455035be49e67\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.616 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdea596-4d9a-473e-8d73-567c993bd33a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.617 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87c0a2c3-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.618 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:54 compute-0 kernel: tap87c0a2c3-50: left promiscuous mode
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.623 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5b37f32e-5d3e-4a7e-b235-4c2dbfd4a42f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.638 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.642 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b0509756-1b03-474e-a054-82d47c50339b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.643 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[647f8326-6fb9-40ad-99ec-59e2a86f5dd2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.659 243708 INFO nova.virt.libvirt.driver [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Deleting instance files /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4_del
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.660 243708 INFO nova.virt.libvirt.driver [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Deletion of /var/lib/nova/instances/13d79491-8168-41b3-9d61-0763591f79a4_del complete
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.659 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[757fb071-ef8f-4226-9ded-40a000e49bd5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 459036, 'reachable_time': 16444, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275342, 'error': None, 'target': 'ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d87c0a2c3\x2d5f67\x2d431b\x2d9b32\x2da688ddc2bc06.mount: Deactivated successfully.
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.663 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-87c0a2c3-5f67-431b-9b32-a688ddc2bc06 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:24:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:24:54.664 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[04f70b8b-e5c4-424d-aaa3-9fb10a5e10e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.719 243708 INFO nova.compute.manager [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Took 0.50 seconds to destroy the instance on the hypervisor.
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.720 243708 DEBUG oslo.service.loopingcall [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.721 243708 DEBUG nova.compute.manager [-] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.721 243708 DEBUG nova.network.neutron [-] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.755 243708 DEBUG nova.compute.manager [req-77006f9a-41a0-4b07-b17d-651eb970a67f req-e934dcaf-765a-4182-b153-a740d35875e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-vif-unplugged-65ae7c69-37db-4deb-9754-9061175558c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.755 243708 DEBUG oslo_concurrency.lockutils [req-77006f9a-41a0-4b07-b17d-651eb970a67f req-e934dcaf-765a-4182-b153-a740d35875e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.756 243708 DEBUG oslo_concurrency.lockutils [req-77006f9a-41a0-4b07-b17d-651eb970a67f req-e934dcaf-765a-4182-b153-a740d35875e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.756 243708 DEBUG oslo_concurrency.lockutils [req-77006f9a-41a0-4b07-b17d-651eb970a67f req-e934dcaf-765a-4182-b153-a740d35875e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.756 243708 DEBUG nova.compute.manager [req-77006f9a-41a0-4b07-b17d-651eb970a67f req-e934dcaf-765a-4182-b153-a740d35875e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] No waiting events found dispatching network-vif-unplugged-65ae7c69-37db-4deb-9754-9061175558c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:24:54 compute-0 nova_compute[243704]: 2025-12-13 04:24:54.757 243708 DEBUG nova.compute.manager [req-77006f9a-41a0-4b07-b17d-651eb970a67f req-e934dcaf-765a-4182-b153-a740d35875e2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-vif-unplugged-65ae7c69-37db-4deb-9754-9061175558c0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:24:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e440 do_prune osdmap full prune enabled
Dec 13 04:24:54 compute-0 ceph-mon[75071]: pgmap v1666: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 4.5 KiB/s wr, 27 op/s
Dec 13 04:24:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e441 e441: 3 total, 3 up, 3 in
Dec 13 04:24:55 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e441: 3 total, 3 up, 3 in
Dec 13 04:24:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:24:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1980504322' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:55 compute-0 nova_compute[243704]: 2025-12-13 04:24:55.685 243708 DEBUG nova.network.neutron [-] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:24:55 compute-0 nova_compute[243704]: 2025-12-13 04:24:55.712 243708 INFO nova.compute.manager [-] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Took 0.99 seconds to deallocate network for instance.
Dec 13 04:24:55 compute-0 nova_compute[243704]: 2025-12-13 04:24:55.771 243708 DEBUG nova.compute.manager [req-27f5be22-9130-4e68-a0ea-7cb324a6f0e9 req-189c3bb5-778d-4ac3-aa72-6c5200e29e7c 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-vif-deleted-65ae7c69-37db-4deb-9754-9061175558c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1668: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 512 KiB/s rd, 8.7 KiB/s wr, 136 op/s
Dec 13 04:24:55 compute-0 nova_compute[243704]: 2025-12-13 04:24:55.880 243708 INFO nova.compute.manager [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Took 0.17 seconds to detach 1 volumes for instance.
Dec 13 04:24:55 compute-0 nova_compute[243704]: 2025-12-13 04:24:55.932 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:55 compute-0 nova_compute[243704]: 2025-12-13 04:24:55.933 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e441 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:24:55 compute-0 nova_compute[243704]: 2025-12-13 04:24:55.995 243708 DEBUG oslo_concurrency.processutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:24:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e441 do_prune osdmap full prune enabled
Dec 13 04:24:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e442 e442: 3 total, 3 up, 3 in
Dec 13 04:24:56 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e442: 3 total, 3 up, 3 in
Dec 13 04:24:56 compute-0 ceph-mon[75071]: osdmap e441: 3 total, 3 up, 3 in
Dec 13 04:24:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1980504322' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:24:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:24:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2731453645' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.588 243708 DEBUG oslo_concurrency.processutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.593 243708 DEBUG nova.compute.provider_tree [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.605 243708 DEBUG nova.scheduler.client.report [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.619 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.639 243708 INFO nova.scheduler.client.report [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Deleted allocations for instance 13d79491-8168-41b3-9d61-0763591f79a4
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.705 243708 DEBUG oslo_concurrency.lockutils [None req-22c94025-904c-4854-bd45-561eae94b034 439e16bdacdd484cbdfe5b2ff762e327 3ad8ea73576b4cf9aad3a876effca617 - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.866 243708 DEBUG nova.compute.manager [req-4dd85483-b892-4b43-96fb-49cc6d602104 req-7134420f-813d-44c5-b216-7518cd662044 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received event network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.866 243708 DEBUG oslo_concurrency.lockutils [req-4dd85483-b892-4b43-96fb-49cc6d602104 req-7134420f-813d-44c5-b216-7518cd662044 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "13d79491-8168-41b3-9d61-0763591f79a4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.867 243708 DEBUG oslo_concurrency.lockutils [req-4dd85483-b892-4b43-96fb-49cc6d602104 req-7134420f-813d-44c5-b216-7518cd662044 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.867 243708 DEBUG oslo_concurrency.lockutils [req-4dd85483-b892-4b43-96fb-49cc6d602104 req-7134420f-813d-44c5-b216-7518cd662044 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "13d79491-8168-41b3-9d61-0763591f79a4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.867 243708 DEBUG nova.compute.manager [req-4dd85483-b892-4b43-96fb-49cc6d602104 req-7134420f-813d-44c5-b216-7518cd662044 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] No waiting events found dispatching network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:24:56 compute-0 nova_compute[243704]: 2025-12-13 04:24:56.868 243708 WARNING nova.compute.manager [req-4dd85483-b892-4b43-96fb-49cc6d602104 req-7134420f-813d-44c5-b216-7518cd662044 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Received unexpected event network-vif-plugged-65ae7c69-37db-4deb-9754-9061175558c0 for instance with vm_state deleted and task_state None.
Dec 13 04:24:56 compute-0 podman[275366]: 2025-12-13 04:24:56.923847779 +0000 UTC m=+0.073044486 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:24:57 compute-0 ceph-mon[75071]: pgmap v1668: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 512 KiB/s rd, 8.7 KiB/s wr, 136 op/s
Dec 13 04:24:57 compute-0 ceph-mon[75071]: osdmap e442: 3 total, 3 up, 3 in
Dec 13 04:24:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2731453645' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:24:57 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Check health
Dec 13 04:24:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1670: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.2 KiB/s wr, 132 op/s
Dec 13 04:24:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e442 do_prune osdmap full prune enabled
Dec 13 04:24:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e443 e443: 3 total, 3 up, 3 in
Dec 13 04:24:58 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e443: 3 total, 3 up, 3 in
Dec 13 04:24:58 compute-0 nova_compute[243704]: 2025-12-13 04:24:58.386 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e443 do_prune osdmap full prune enabled
Dec 13 04:24:59 compute-0 ceph-mon[75071]: pgmap v1670: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 6.2 KiB/s wr, 132 op/s
Dec 13 04:24:59 compute-0 ceph-mon[75071]: osdmap e443: 3 total, 3 up, 3 in
Dec 13 04:24:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e444 e444: 3 total, 3 up, 3 in
Dec 13 04:24:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e444: 3 total, 3 up, 3 in
Dec 13 04:24:59 compute-0 nova_compute[243704]: 2025-12-13 04:24:59.485 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:24:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558051939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1558051939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:24:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957069618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:24:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:24:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2957069618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:24:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1673: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 447 KiB/s rd, 6.6 KiB/s wr, 112 op/s
Dec 13 04:25:00 compute-0 ceph-mon[75071]: osdmap e444: 3 total, 3 up, 3 in
Dec 13 04:25:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1558051939' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1558051939' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2957069618' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2957069618' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2434608904' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2434608904' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e444 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e444 do_prune osdmap full prune enabled
Dec 13 04:25:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e445 e445: 3 total, 3 up, 3 in
Dec 13 04:25:00 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e445: 3 total, 3 up, 3 in
Dec 13 04:25:01 compute-0 ceph-mon[75071]: pgmap v1673: 305 pgs: 305 active+clean; 287 MiB data, 576 MiB used, 59 GiB / 60 GiB avail; 447 KiB/s rd, 6.6 KiB/s wr, 112 op/s
Dec 13 04:25:01 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2434608904' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:01 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2434608904' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:01 compute-0 ceph-mon[75071]: osdmap e445: 3 total, 3 up, 3 in
Dec 13 04:25:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1675: 305 pgs: 305 active+clean; 279 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 8.4 KiB/s wr, 231 op/s
Dec 13 04:25:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e445 do_prune osdmap full prune enabled
Dec 13 04:25:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e446 e446: 3 total, 3 up, 3 in
Dec 13 04:25:02 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e446: 3 total, 3 up, 3 in
Dec 13 04:25:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e446 do_prune osdmap full prune enabled
Dec 13 04:25:03 compute-0 ceph-mon[75071]: pgmap v1675: 305 pgs: 305 active+clean; 279 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 182 KiB/s rd, 8.4 KiB/s wr, 231 op/s
Dec 13 04:25:03 compute-0 ceph-mon[75071]: osdmap e446: 3 total, 3 up, 3 in
Dec 13 04:25:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e447 e447: 3 total, 3 up, 3 in
Dec 13 04:25:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e447: 3 total, 3 up, 3 in
Dec 13 04:25:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2671585098' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2671585098' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:03 compute-0 nova_compute[243704]: 2025-12-13 04:25:03.432 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1678: 305 pgs: 305 active+clean; 279 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 8.9 KiB/s wr, 263 op/s
Dec 13 04:25:04 compute-0 ceph-mon[75071]: osdmap e447: 3 total, 3 up, 3 in
Dec 13 04:25:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2671585098' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2671585098' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:04 compute-0 nova_compute[243704]: 2025-12-13 04:25:04.221 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:04 compute-0 nova_compute[243704]: 2025-12-13 04:25:04.412 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:04 compute-0 nova_compute[243704]: 2025-12-13 04:25:04.488 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e447 do_prune osdmap full prune enabled
Dec 13 04:25:05 compute-0 ceph-mon[75071]: pgmap v1678: 305 pgs: 305 active+clean; 279 MiB data, 572 MiB used, 59 GiB / 60 GiB avail; 210 KiB/s rd, 8.9 KiB/s wr, 263 op/s
Dec 13 04:25:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e448 e448: 3 total, 3 up, 3 in
Dec 13 04:25:05 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e448: 3 total, 3 up, 3 in
Dec 13 04:25:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122233336' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1122233336' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1680: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 6.8 KiB/s wr, 308 op/s
Dec 13 04:25:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e448 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e448 do_prune osdmap full prune enabled
Dec 13 04:25:06 compute-0 ceph-mon[75071]: osdmap e448: 3 total, 3 up, 3 in
Dec 13 04:25:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1122233336' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1122233336' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e449 e449: 3 total, 3 up, 3 in
Dec 13 04:25:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e449: 3 total, 3 up, 3 in
Dec 13 04:25:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3979043290' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3979043290' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e449 do_prune osdmap full prune enabled
Dec 13 04:25:07 compute-0 ceph-mon[75071]: pgmap v1680: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 237 KiB/s rd, 6.8 KiB/s wr, 308 op/s
Dec 13 04:25:07 compute-0 ceph-mon[75071]: osdmap e449: 3 total, 3 up, 3 in
Dec 13 04:25:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3979043290' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3979043290' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e450 e450: 3 total, 3 up, 3 in
Dec 13 04:25:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e450: 3 total, 3 up, 3 in
Dec 13 04:25:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1882569640' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1882569640' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1683: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 6.9 KiB/s wr, 316 op/s
Dec 13 04:25:08 compute-0 nova_compute[243704]: 2025-12-13 04:25:08.470 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:08 compute-0 ceph-mon[75071]: osdmap e450: 3 total, 3 up, 3 in
Dec 13 04:25:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1882569640' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1882569640' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302485569' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302485569' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:09 compute-0 nova_compute[243704]: 2025-12-13 04:25:09.465 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765599894.4637907, 13d79491-8168-41b3-9d61-0763591f79a4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:25:09 compute-0 nova_compute[243704]: 2025-12-13 04:25:09.466 243708 INFO nova.compute.manager [-] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] VM Stopped (Lifecycle Event)
Dec 13 04:25:09 compute-0 nova_compute[243704]: 2025-12-13 04:25:09.482 243708 DEBUG nova.compute.manager [None req-8b5f2c5c-0e91-411f-9712-2f3733d37487 - - - - - -] [instance: 13d79491-8168-41b3-9d61-0763591f79a4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:25:09 compute-0 ceph-mon[75071]: pgmap v1683: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 243 KiB/s rd, 6.9 KiB/s wr, 316 op/s
Dec 13 04:25:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2302485569' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2302485569' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:09 compute-0 nova_compute[243704]: 2025-12-13 04:25:09.491 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1684: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 6.8 KiB/s wr, 312 op/s
Dec 13 04:25:10 compute-0 ceph-mon[75071]: pgmap v1684: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 238 KiB/s rd, 6.8 KiB/s wr, 312 op/s
Dec 13 04:25:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e450 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e450 do_prune osdmap full prune enabled
Dec 13 04:25:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e451 e451: 3 total, 3 up, 3 in
Dec 13 04:25:10 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e451: 3 total, 3 up, 3 in
Dec 13 04:25:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1686: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 5.2 KiB/s wr, 178 op/s
Dec 13 04:25:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e451 do_prune osdmap full prune enabled
Dec 13 04:25:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e452 e452: 3 total, 3 up, 3 in
Dec 13 04:25:11 compute-0 ceph-mon[75071]: osdmap e451: 3 total, 3 up, 3 in
Dec 13 04:25:11 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e452: 3 total, 3 up, 3 in
Dec 13 04:25:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:12 compute-0 ceph-mon[75071]: pgmap v1686: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 136 KiB/s rd, 5.2 KiB/s wr, 178 op/s
Dec 13 04:25:12 compute-0 ceph-mon[75071]: osdmap e452: 3 total, 3 up, 3 in
Dec 13 04:25:13 compute-0 podman[275388]: 2025-12-13 04:25:13.032072511 +0000 UTC m=+0.168415309 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:25:13 compute-0 nova_compute[243704]: 2025-12-13 04:25:13.473 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:25:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4060746452' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:25:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1688: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 4.9 KiB/s wr, 167 op/s
Dec 13 04:25:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4060746452' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:25:14 compute-0 nova_compute[243704]: 2025-12-13 04:25:14.493 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e452 do_prune osdmap full prune enabled
Dec 13 04:25:15 compute-0 ceph-mon[75071]: pgmap v1688: 305 pgs: 305 active+clean; 271 MiB data, 560 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 4.9 KiB/s wr, 167 op/s
Dec 13 04:25:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e453 e453: 3 total, 3 up, 3 in
Dec 13 04:25:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e453: 3 total, 3 up, 3 in
Dec 13 04:25:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1690: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 8.3 KiB/s wr, 170 op/s
Dec 13 04:25:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e453 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e453 do_prune osdmap full prune enabled
Dec 13 04:25:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e454 e454: 3 total, 3 up, 3 in
Dec 13 04:25:15 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e454: 3 total, 3 up, 3 in
Dec 13 04:25:16 compute-0 ceph-mon[75071]: osdmap e453: 3 total, 3 up, 3 in
Dec 13 04:25:16 compute-0 ceph-mon[75071]: osdmap e454: 3 total, 3 up, 3 in
Dec 13 04:25:17 compute-0 ceph-mon[75071]: pgmap v1690: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 127 KiB/s rd, 8.3 KiB/s wr, 170 op/s
Dec 13 04:25:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1692: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 4.5 KiB/s wr, 52 op/s
Dec 13 04:25:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e454 do_prune osdmap full prune enabled
Dec 13 04:25:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e455 e455: 3 total, 3 up, 3 in
Dec 13 04:25:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e455: 3 total, 3 up, 3 in
Dec 13 04:25:18 compute-0 nova_compute[243704]: 2025-12-13 04:25:18.475 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:19 compute-0 ceph-mon[75071]: pgmap v1692: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 4.5 KiB/s wr, 52 op/s
Dec 13 04:25:19 compute-0 ceph-mon[75071]: osdmap e455: 3 total, 3 up, 3 in
Dec 13 04:25:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e455 do_prune osdmap full prune enabled
Dec 13 04:25:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e456 e456: 3 total, 3 up, 3 in
Dec 13 04:25:19 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e456: 3 total, 3 up, 3 in
Dec 13 04:25:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/304382556' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:19 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/304382556' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:19 compute-0 nova_compute[243704]: 2025-12-13 04:25:19.495 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1695: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 6.8 KiB/s wr, 195 op/s
Dec 13 04:25:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e456 do_prune osdmap full prune enabled
Dec 13 04:25:20 compute-0 ceph-mon[75071]: osdmap e456: 3 total, 3 up, 3 in
Dec 13 04:25:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/304382556' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/304382556' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e457 e457: 3 total, 3 up, 3 in
Dec 13 04:25:20 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e457: 3 total, 3 up, 3 in
Dec 13 04:25:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1341544052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1341544052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e457 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e457 do_prune osdmap full prune enabled
Dec 13 04:25:21 compute-0 ceph-mon[75071]: pgmap v1695: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 6.8 KiB/s wr, 195 op/s
Dec 13 04:25:21 compute-0 ceph-mon[75071]: osdmap e457: 3 total, 3 up, 3 in
Dec 13 04:25:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1341544052' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1341544052' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e458 e458: 3 total, 3 up, 3 in
Dec 13 04:25:21 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e458: 3 total, 3 up, 3 in
Dec 13 04:25:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3779776502' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3779776502' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1698: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 11 KiB/s wr, 283 op/s
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.890 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.890 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.910 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.910 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.910 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.911 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:25:21 compute-0 nova_compute[243704]: 2025-12-13 04:25:21.911 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:25:22 compute-0 ceph-mon[75071]: osdmap e458: 3 total, 3 up, 3 in
Dec 13 04:25:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3779776502' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3779776502' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:25:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/456521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.477 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:25:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490207036' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490207036' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.633 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.634 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4324MB free_disk=59.98804045561701GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.635 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.635 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.697 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.698 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:25:22 compute-0 nova_compute[243704]: 2025-12-13 04:25:22.716 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:25:23 compute-0 ceph-mon[75071]: pgmap v1698: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 216 KiB/s rd, 11 KiB/s wr, 283 op/s
Dec 13 04:25:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/456521' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1490207036' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1490207036' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:25:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3655634515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:23 compute-0 nova_compute[243704]: 2025-12-13 04:25:23.245 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:25:23 compute-0 nova_compute[243704]: 2025-12-13 04:25:23.250 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:25:23 compute-0 nova_compute[243704]: 2025-12-13 04:25:23.262 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:25:23 compute-0 nova_compute[243704]: 2025-12-13 04:25:23.504 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1699: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 7.9 KiB/s wr, 194 op/s
Dec 13 04:25:24 compute-0 nova_compute[243704]: 2025-12-13 04:25:24.106 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:25:24 compute-0 nova_compute[243704]: 2025-12-13 04:25:24.107 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.472s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:25:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3655634515' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:25:24 compute-0 nova_compute[243704]: 2025-12-13 04:25:24.498 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:24 compute-0 podman[275459]: 2025-12-13 04:25:24.901922675 +0000 UTC m=+0.052000105 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:25:25 compute-0 nova_compute[243704]: 2025-12-13 04:25:25.094 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:25 compute-0 nova_compute[243704]: 2025-12-13 04:25:25.095 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:25 compute-0 nova_compute[243704]: 2025-12-13 04:25:25.095 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:25 compute-0 nova_compute[243704]: 2025-12-13 04:25:25.096 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:25 compute-0 ceph-mon[75071]: pgmap v1699: 305 pgs: 2 active+clean+snaptrim, 1 active+clean+snaptrim_wait, 302 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 148 KiB/s rd, 7.9 KiB/s wr, 194 op/s
Dec 13 04:25:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1700: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 177 KiB/s rd, 6.7 KiB/s wr, 233 op/s
Dec 13 04:25:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e458 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e458 do_prune osdmap full prune enabled
Dec 13 04:25:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e459 e459: 3 total, 3 up, 3 in
Dec 13 04:25:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e459: 3 total, 3 up, 3 in
Dec 13 04:25:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:25:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396660278' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:25:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e459 do_prune osdmap full prune enabled
Dec 13 04:25:26 compute-0 ceph-mon[75071]: pgmap v1700: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 177 KiB/s rd, 6.7 KiB/s wr, 233 op/s
Dec 13 04:25:26 compute-0 ceph-mon[75071]: osdmap e459: 3 total, 3 up, 3 in
Dec 13 04:25:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/396660278' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:25:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e460 e460: 3 total, 3 up, 3 in
Dec 13 04:25:27 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e460: 3 total, 3 up, 3 in
Dec 13 04:25:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1703: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 2.8 KiB/s wr, 146 op/s
Dec 13 04:25:27 compute-0 nova_compute[243704]: 2025-12-13 04:25:27.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:27 compute-0 podman[275481]: 2025-12-13 04:25:27.925400986 +0000 UTC m=+0.065759808 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible)
Dec 13 04:25:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e460 do_prune osdmap full prune enabled
Dec 13 04:25:28 compute-0 ceph-mon[75071]: osdmap e460: 3 total, 3 up, 3 in
Dec 13 04:25:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e461 e461: 3 total, 3 up, 3 in
Dec 13 04:25:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e461: 3 total, 3 up, 3 in
Dec 13 04:25:28 compute-0 nova_compute[243704]: 2025-12-13 04:25:28.542 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e461 do_prune osdmap full prune enabled
Dec 13 04:25:29 compute-0 ceph-mon[75071]: pgmap v1703: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 114 KiB/s rd, 2.8 KiB/s wr, 146 op/s
Dec 13 04:25:29 compute-0 ceph-mon[75071]: osdmap e461: 3 total, 3 up, 3 in
Dec 13 04:25:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e462 e462: 3 total, 3 up, 3 in
Dec 13 04:25:29 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e462: 3 total, 3 up, 3 in
Dec 13 04:25:29 compute-0 nova_compute[243704]: 2025-12-13 04:25:29.501 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1706: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.7 KiB/s wr, 23 op/s
Dec 13 04:25:30 compute-0 ceph-mon[75071]: osdmap e462: 3 total, 3 up, 3 in
Dec 13 04:25:30 compute-0 nova_compute[243704]: 2025-12-13 04:25:30.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:25:30 compute-0 nova_compute[243704]: 2025-12-13 04:25:30.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:25:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e462 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e462 do_prune osdmap full prune enabled
Dec 13 04:25:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e463 e463: 3 total, 3 up, 3 in
Dec 13 04:25:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e463: 3 total, 3 up, 3 in
Dec 13 04:25:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1708: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.0 KiB/s wr, 77 op/s
Dec 13 04:25:32 compute-0 ceph-mon[75071]: pgmap v1706: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.7 KiB/s wr, 23 op/s
Dec 13 04:25:32 compute-0 ceph-mon[75071]: osdmap e463: 3 total, 3 up, 3 in
Dec 13 04:25:33 compute-0 ceph-mon[75071]: pgmap v1708: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 6.0 KiB/s wr, 77 op/s
Dec 13 04:25:33 compute-0 nova_compute[243704]: 2025-12-13 04:25:33.545 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1709: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 4.8 KiB/s wr, 62 op/s
Dec 13 04:25:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e463 do_prune osdmap full prune enabled
Dec 13 04:25:34 compute-0 nova_compute[243704]: 2025-12-13 04:25:34.503 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e464 e464: 3 total, 3 up, 3 in
Dec 13 04:25:34 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e464: 3 total, 3 up, 3 in
Dec 13 04:25:34 compute-0 ceph-mon[75071]: pgmap v1709: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 4.8 KiB/s wr, 62 op/s
Dec 13 04:25:34 compute-0 ceph-mon[75071]: osdmap e464: 3 total, 3 up, 3 in
Dec 13 04:25:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:25:35.101 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:25:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:25:35.102 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:25:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:25:35.102 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:25:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1711: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 5.0 KiB/s wr, 81 op/s
Dec 13 04:25:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e464 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:37 compute-0 ceph-mon[75071]: pgmap v1711: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 5.0 KiB/s wr, 81 op/s
Dec 13 04:25:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1712: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 61 op/s
Dec 13 04:25:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e464 do_prune osdmap full prune enabled
Dec 13 04:25:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e465 e465: 3 total, 3 up, 3 in
Dec 13 04:25:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e465: 3 total, 3 up, 3 in
Dec 13 04:25:38 compute-0 nova_compute[243704]: 2025-12-13 04:25:38.581 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:39 compute-0 ceph-mon[75071]: pgmap v1712: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.0 KiB/s wr, 61 op/s
Dec 13 04:25:39 compute-0 ceph-mon[75071]: osdmap e465: 3 total, 3 up, 3 in
Dec 13 04:25:39 compute-0 ovn_controller[145204]: 2025-12-13T04:25:39Z|00238|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Dec 13 04:25:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141236019' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2141236019' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:39 compute-0 nova_compute[243704]: 2025-12-13 04:25:39.504 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1714: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Dec 13 04:25:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2376871148' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2376871148' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2141236019' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2141236019' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2376871148' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:40 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2376871148' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:25:40
Dec 13 04:25:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:25:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:25:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', 'volumes']
Dec 13 04:25:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:25:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e465 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:41 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e465 do_prune osdmap full prune enabled
Dec 13 04:25:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1715: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 3.4 KiB/s wr, 122 op/s
Dec 13 04:25:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e466 e466: 3 total, 3 up, 3 in
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:42 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e466: 3 total, 3 up, 3 in
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:25:42 compute-0 ceph-mon[75071]: pgmap v1714: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.4 KiB/s wr, 49 op/s
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:25:42 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:25:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:25:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:25:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:25:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:25:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:25:43 compute-0 ceph-mon[75071]: pgmap v1715: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 3.4 KiB/s wr, 122 op/s
Dec 13 04:25:43 compute-0 ceph-mon[75071]: osdmap e466: 3 total, 3 up, 3 in
Dec 13 04:25:43 compute-0 nova_compute[243704]: 2025-12-13 04:25:43.629 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1717: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 2.1 KiB/s wr, 96 op/s
Dec 13 04:25:43 compute-0 podman[275503]: 2025-12-13 04:25:43.944570767 +0000 UTC m=+0.083001418 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:25:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e466 do_prune osdmap full prune enabled
Dec 13 04:25:44 compute-0 nova_compute[243704]: 2025-12-13 04:25:44.506 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e467 e467: 3 total, 3 up, 3 in
Dec 13 04:25:44 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e467: 3 total, 3 up, 3 in
Dec 13 04:25:45 compute-0 ceph-mon[75071]: pgmap v1717: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 2.1 KiB/s wr, 96 op/s
Dec 13 04:25:45 compute-0 ceph-mon[75071]: osdmap e467: 3 total, 3 up, 3 in
Dec 13 04:25:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/912978867' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/912978867' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1719: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 3.2 KiB/s wr, 111 op/s
Dec 13 04:25:46 compute-0 sudo[275527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:25:46 compute-0 sudo[275527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:46 compute-0 sudo[275527]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:46 compute-0 sudo[275552]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 04:25:46 compute-0 sudo[275552]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:46 compute-0 sudo[275552]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:25:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:46 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:25:46 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:46 compute-0 sudo[275598]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:25:46 compute-0 sudo[275598]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:46 compute-0 sudo[275598]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:46 compute-0 sudo[275623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:25:46 compute-0 sudo[275623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/912978867' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/912978867' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:46 compute-0 ceph-mon[75071]: pgmap v1719: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 3.2 KiB/s wr, 111 op/s
Dec 13 04:25:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:46 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3124989759' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3124989759' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e467 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e467 do_prune osdmap full prune enabled
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e468 e468: 3 total, 3 up, 3 in
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e468: 3 total, 3 up, 3 in
Dec 13 04:25:47 compute-0 sudo[275623]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:25:47 compute-0 sudo[275679]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:25:47 compute-0 sudo[275679]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:47 compute-0 sudo[275679]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:47 compute-0 sudo[275704]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:25:47 compute-0 sudo[275704]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:47 compute-0 podman[275743]: 2025-12-13 04:25:47.83038764 +0000 UTC m=+0.074596819 container create 380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ardinghelli, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:25:47 compute-0 systemd[1]: Started libpod-conmon-380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4.scope.
Dec 13 04:25:47 compute-0 podman[275743]: 2025-12-13 04:25:47.793496696 +0000 UTC m=+0.037705965 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1721: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 1.3 KiB/s wr, 14 op/s
Dec 13 04:25:47 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:25:47 compute-0 podman[275743]: 2025-12-13 04:25:47.920752206 +0000 UTC m=+0.164961415 container init 380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3124989759' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3124989759' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: osdmap e468: 3 total, 3 up, 3 in
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:25:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:25:47 compute-0 podman[275743]: 2025-12-13 04:25:47.928855857 +0000 UTC m=+0.173065036 container start 380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ardinghelli, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:25:47 compute-0 podman[275743]: 2025-12-13 04:25:47.932831094 +0000 UTC m=+0.177040303 container attach 380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:25:47 compute-0 gallant_ardinghelli[275759]: 167 167
Dec 13 04:25:47 compute-0 systemd[1]: libpod-380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4.scope: Deactivated successfully.
Dec 13 04:25:47 compute-0 podman[275743]: 2025-12-13 04:25:47.939116375 +0000 UTC m=+0.183325594 container died 380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:25:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d0ec58c94f93112f727ef7472e047f805c76a42729ec61c8fb5700b8b5b97d2-merged.mount: Deactivated successfully.
Dec 13 04:25:47 compute-0 podman[275743]: 2025-12-13 04:25:47.978117395 +0000 UTC m=+0.222326574 container remove 380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:25:47 compute-0 systemd[1]: libpod-conmon-380eaa01e88160008c53615b0aab1fe8a697bba50481ae675ad2a61aaea06af4.scope: Deactivated successfully.
Dec 13 04:25:48 compute-0 podman[275783]: 2025-12-13 04:25:48.173782435 +0000 UTC m=+0.050322309 container create 90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_carson, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:25:48 compute-0 systemd[1]: Started libpod-conmon-90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953.scope.
Dec 13 04:25:48 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:25:48 compute-0 podman[275783]: 2025-12-13 04:25:48.154003387 +0000 UTC m=+0.030543281 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120641085e0ae9b6de794219bb7c470935c3ae522e51ca81cc12a80c2d010f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120641085e0ae9b6de794219bb7c470935c3ae522e51ca81cc12a80c2d010f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120641085e0ae9b6de794219bb7c470935c3ae522e51ca81cc12a80c2d010f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120641085e0ae9b6de794219bb7c470935c3ae522e51ca81cc12a80c2d010f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/120641085e0ae9b6de794219bb7c470935c3ae522e51ca81cc12a80c2d010f39/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:48 compute-0 podman[275783]: 2025-12-13 04:25:48.265145159 +0000 UTC m=+0.141685043 container init 90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:25:48 compute-0 podman[275783]: 2025-12-13 04:25:48.280185527 +0000 UTC m=+0.156725401 container start 90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_carson, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:25:48 compute-0 podman[275783]: 2025-12-13 04:25:48.284755742 +0000 UTC m=+0.161295646 container attach 90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:25:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1111531141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1111531141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:48 compute-0 nova_compute[243704]: 2025-12-13 04:25:48.633 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:48 compute-0 jovial_carson[275800]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:25:48 compute-0 jovial_carson[275800]: --> All data devices are unavailable
Dec 13 04:25:48 compute-0 systemd[1]: libpod-90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953.scope: Deactivated successfully.
Dec 13 04:25:48 compute-0 podman[275820]: 2025-12-13 04:25:48.835110303 +0000 UTC m=+0.024543308 container died 90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_carson, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:25:49 compute-0 nova_compute[243704]: 2025-12-13 04:25:49.508 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:49 compute-0 ceph-mon[75071]: pgmap v1721: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 9.3 KiB/s rd, 1.3 KiB/s wr, 14 op/s
Dec 13 04:25:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1111531141' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1111531141' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-120641085e0ae9b6de794219bb7c470935c3ae522e51ca81cc12a80c2d010f39-merged.mount: Deactivated successfully.
Dec 13 04:25:49 compute-0 podman[275820]: 2025-12-13 04:25:49.862223794 +0000 UTC m=+1.051656799 container remove 90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_carson, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:25:49 compute-0 systemd[1]: libpod-conmon-90aee18c295a932a3b59d3cd3b35d5339c01c606d4d0696a0698b06072255953.scope: Deactivated successfully.
Dec 13 04:25:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1722: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 3.1 KiB/s wr, 124 op/s
Dec 13 04:25:49 compute-0 sudo[275704]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:49 compute-0 sudo[275836]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:25:49 compute-0 sudo[275836]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:49 compute-0 sudo[275836]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:50 compute-0 sudo[275861]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:25:50 compute-0 sudo[275861]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:50 compute-0 podman[275898]: 2025-12-13 04:25:50.388184942 +0000 UTC m=+0.104221265 container create c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williams, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:25:50 compute-0 podman[275898]: 2025-12-13 04:25:50.31200139 +0000 UTC m=+0.028037804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:50 compute-0 systemd[1]: Started libpod-conmon-c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc.scope.
Dec 13 04:25:50 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:25:50 compute-0 podman[275898]: 2025-12-13 04:25:50.747172841 +0000 UTC m=+0.463209264 container init c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williams, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3)
Dec 13 04:25:50 compute-0 podman[275898]: 2025-12-13 04:25:50.760850433 +0000 UTC m=+0.476886806 container start c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williams, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:25:50 compute-0 relaxed_williams[275913]: 167 167
Dec 13 04:25:50 compute-0 systemd[1]: libpod-c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc.scope: Deactivated successfully.
Dec 13 04:25:50 compute-0 podman[275898]: 2025-12-13 04:25:50.767537775 +0000 UTC m=+0.483574138 container attach c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:50 compute-0 podman[275898]: 2025-12-13 04:25:50.770130995 +0000 UTC m=+0.486167358 container died c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:25:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-09111d52df08a1dbf1de471ca869d807530a1ccd6c617df4d181e8151024ad16-merged.mount: Deactivated successfully.
Dec 13 04:25:50 compute-0 ceph-mon[75071]: pgmap v1722: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 86 KiB/s rd, 3.1 KiB/s wr, 124 op/s
Dec 13 04:25:50 compute-0 podman[275898]: 2025-12-13 04:25:50.936726494 +0000 UTC m=+0.652762847 container remove c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:25:50 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:25:50.938 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=20, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=19) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:25:50 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:25:50.942 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:25:50 compute-0 nova_compute[243704]: 2025-12-13 04:25:50.969 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:50 compute-0 systemd[1]: libpod-conmon-c461b8bc8234442ce79bfc9c8bc109d76280289066a0732dc45430e06d2673dc.scope: Deactivated successfully.
Dec 13 04:25:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:25:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/691990329' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:25:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/691990329' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:51 compute-0 podman[275938]: 2025-12-13 04:25:51.135413175 +0000 UTC m=+0.036875713 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:51 compute-0 podman[275938]: 2025-12-13 04:25:51.42069666 +0000 UTC m=+0.322159188 container create 9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:51 compute-0 systemd[1]: Started libpod-conmon-9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851.scope.
Dec 13 04:25:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca570a4ec1eca3dc2cf85387fca6d3c1da89525afafee0f9979b79a399a7fd8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca570a4ec1eca3dc2cf85387fca6d3c1da89525afafee0f9979b79a399a7fd8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca570a4ec1eca3dc2cf85387fca6d3c1da89525afafee0f9979b79a399a7fd8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ca570a4ec1eca3dc2cf85387fca6d3c1da89525afafee0f9979b79a399a7fd8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:51 compute-0 podman[275938]: 2025-12-13 04:25:51.701432222 +0000 UTC m=+0.602894770 container init 9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Dec 13 04:25:51 compute-0 podman[275938]: 2025-12-13 04:25:51.712185364 +0000 UTC m=+0.613647892 container start 9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:25:51 compute-0 podman[275938]: 2025-12-13 04:25:51.720600753 +0000 UTC m=+0.622063311 container attach 9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:25:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1723: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 3.2 KiB/s wr, 155 op/s
Dec 13 04:25:52 compute-0 goofy_haslett[275954]: {
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:     "0": [
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:         {
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "devices": [
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "/dev/loop3"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             ],
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_name": "ceph_lv0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_size": "21470642176",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "name": "ceph_lv0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "tags": {
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cluster_name": "ceph",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.crush_device_class": "",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.encrypted": "0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.objectstore": "bluestore",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osd_id": "0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.type": "block",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.vdo": "0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.with_tpm": "0"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             },
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "type": "block",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "vg_name": "ceph_vg0"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:         }
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:     ],
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:     "1": [
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:         {
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "devices": [
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "/dev/loop4"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             ],
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_name": "ceph_lv1",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_size": "21470642176",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "name": "ceph_lv1",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "tags": {
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cluster_name": "ceph",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.crush_device_class": "",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.encrypted": "0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.objectstore": "bluestore",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osd_id": "1",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.type": "block",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.vdo": "0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.with_tpm": "0"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             },
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "type": "block",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "vg_name": "ceph_vg1"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:         }
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:     ],
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:     "2": [
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:         {
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "devices": [
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "/dev/loop5"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             ],
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_name": "ceph_lv2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_size": "21470642176",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "name": "ceph_lv2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "tags": {
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.cluster_name": "ceph",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.crush_device_class": "",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.encrypted": "0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.objectstore": "bluestore",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osd_id": "2",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.type": "block",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.vdo": "0",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:                 "ceph.with_tpm": "0"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             },
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "type": "block",
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:             "vg_name": "ceph_vg2"
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:         }
Dec 13 04:25:52 compute-0 goofy_haslett[275954]:     ]
Dec 13 04:25:52 compute-0 goofy_haslett[275954]: }
Dec 13 04:25:52 compute-0 systemd[1]: libpod-9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851.scope: Deactivated successfully.
Dec 13 04:25:52 compute-0 podman[275938]: 2025-12-13 04:25:52.044854128 +0000 UTC m=+0.946316666 container died 9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:25:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/691990329' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:25:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/691990329' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:25:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ca570a4ec1eca3dc2cf85387fca6d3c1da89525afafee0f9979b79a399a7fd8-merged.mount: Deactivated successfully.
Dec 13 04:25:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:52 compute-0 podman[275938]: 2025-12-13 04:25:52.51323134 +0000 UTC m=+1.414693868 container remove 9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_haslett, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Dec 13 04:25:52 compute-0 sudo[275861]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 4.014008180419937e-06 of space, bias 1.0, pg target 0.0012042024541259813 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002910249003629988 of space, bias 1.0, pg target 0.8730747010889964 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.486775487616727e-06 of space, bias 1.0, pg target 0.0007460326462850181 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666971834995881 of space, bias 1.0, pg target 0.20000915504987643 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3092606376761717e-06 of space, bias 4.0, pg target 0.0015711127652114061 quantized to 16 (current 16)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:25:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:25:52 compute-0 systemd[1]: libpod-conmon-9855424621bc397c1f3f667be9c840f6767453f89a17d4a60bba1d0e34c0a851.scope: Deactivated successfully.
Dec 13 04:25:52 compute-0 sudo[275975]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:25:52 compute-0 sudo[275975]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:52 compute-0 sudo[275975]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:52 compute-0 sudo[276000]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:25:52 compute-0 sudo[276000]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:53 compute-0 podman[276039]: 2025-12-13 04:25:52.971112448 +0000 UTC m=+0.030649514 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:53 compute-0 podman[276039]: 2025-12-13 04:25:53.219995064 +0000 UTC m=+0.279532130 container create a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:25:53 compute-0 ceph-mon[75071]: pgmap v1723: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 3.2 KiB/s wr, 155 op/s
Dec 13 04:25:53 compute-0 systemd[1]: Started libpod-conmon-a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f.scope.
Dec 13 04:25:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:25:53 compute-0 podman[276039]: 2025-12-13 04:25:53.314306597 +0000 UTC m=+0.373843653 container init a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:25:53 compute-0 podman[276039]: 2025-12-13 04:25:53.320192467 +0000 UTC m=+0.379729503 container start a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_banzai, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:25:53 compute-0 intelligent_banzai[276055]: 167 167
Dec 13 04:25:53 compute-0 systemd[1]: libpod-a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f.scope: Deactivated successfully.
Dec 13 04:25:53 compute-0 podman[276039]: 2025-12-13 04:25:53.335198525 +0000 UTC m=+0.394735581 container attach a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_banzai, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Dec 13 04:25:53 compute-0 podman[276039]: 2025-12-13 04:25:53.335874923 +0000 UTC m=+0.395411959 container died a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_banzai, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:25:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7a54e4571afff7bc7a9e204c7f429e06a7bb16dcc6137283c7869669cf65e4a-merged.mount: Deactivated successfully.
Dec 13 04:25:53 compute-0 podman[276039]: 2025-12-13 04:25:53.372246732 +0000 UTC m=+0.431783778 container remove a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:25:53 compute-0 systemd[1]: libpod-conmon-a9abf1dfa7d18c0d1d784de4e4d19c1a3db86dc7d4f1a1f690dc18a2d275a59f.scope: Deactivated successfully.
Dec 13 04:25:53 compute-0 podman[276080]: 2025-12-13 04:25:53.554853286 +0000 UTC m=+0.038954360 container create b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:25:53 compute-0 systemd[1]: Started libpod-conmon-b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe.scope.
Dec 13 04:25:53 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69c717565f41ad1051c50a0d50c7c3f45be4290f6c57b527d4bf752bce7cf06/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69c717565f41ad1051c50a0d50c7c3f45be4290f6c57b527d4bf752bce7cf06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69c717565f41ad1051c50a0d50c7c3f45be4290f6c57b527d4bf752bce7cf06/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f69c717565f41ad1051c50a0d50c7c3f45be4290f6c57b527d4bf752bce7cf06/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:25:53 compute-0 podman[276080]: 2025-12-13 04:25:53.616769269 +0000 UTC m=+0.100870363 container init b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:25:53 compute-0 podman[276080]: 2025-12-13 04:25:53.624292464 +0000 UTC m=+0.108393538 container start b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:25:53 compute-0 podman[276080]: 2025-12-13 04:25:53.627891511 +0000 UTC m=+0.111992585 container attach b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:25:53 compute-0 podman[276080]: 2025-12-13 04:25:53.536765535 +0000 UTC m=+0.020866639 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:25:53 compute-0 nova_compute[243704]: 2025-12-13 04:25:53.634 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1724: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 2.6 KiB/s wr, 132 op/s
Dec 13 04:25:54 compute-0 lvm[276172]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:25:54 compute-0 lvm[276175]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:25:54 compute-0 lvm[276175]: VG ceph_vg1 finished
Dec 13 04:25:54 compute-0 lvm[276172]: VG ceph_vg0 finished
Dec 13 04:25:54 compute-0 lvm[276177]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:25:54 compute-0 lvm[276177]: VG ceph_vg2 finished
Dec 13 04:25:54 compute-0 great_leakey[276096]: {}
Dec 13 04:25:54 compute-0 systemd[1]: libpod-b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe.scope: Deactivated successfully.
Dec 13 04:25:54 compute-0 podman[276080]: 2025-12-13 04:25:54.495934749 +0000 UTC m=+0.980035823 container died b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:25:54 compute-0 systemd[1]: libpod-b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe.scope: Consumed 1.351s CPU time.
Dec 13 04:25:54 compute-0 nova_compute[243704]: 2025-12-13 04:25:54.511 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f69c717565f41ad1051c50a0d50c7c3f45be4290f6c57b527d4bf752bce7cf06-merged.mount: Deactivated successfully.
Dec 13 04:25:54 compute-0 podman[276080]: 2025-12-13 04:25:54.549880345 +0000 UTC m=+1.033981419 container remove b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_leakey, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:25:54 compute-0 systemd[1]: libpod-conmon-b7296c54213d410a51ab4ae36ed19dba5b83f3e4508a8abfb5ff14a695e1f5fe.scope: Deactivated successfully.
Dec 13 04:25:54 compute-0 sudo[276000]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:25:54 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:25:54 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:54 compute-0 sudo[276192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:25:54 compute-0 sudo[276192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:25:54 compute-0 sudo[276192]: pam_unix(sudo:session): session closed for user root
Dec 13 04:25:55 compute-0 ceph-mon[75071]: pgmap v1724: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 88 KiB/s rd, 2.6 KiB/s wr, 132 op/s
Dec 13 04:25:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:55 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:25:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1725: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 2.1 KiB/s wr, 131 op/s
Dec 13 04:25:55 compute-0 podman[276217]: 2025-12-13 04:25:55.920929967 +0000 UTC m=+0.060507656 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.626391) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599956626507, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2477, "num_deletes": 266, "total_data_size": 3645109, "memory_usage": 3711776, "flush_reason": "Manual Compaction"}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599956652348, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 3551774, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31827, "largest_seqno": 34303, "table_properties": {"data_size": 3539878, "index_size": 7878, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2949, "raw_key_size": 25085, "raw_average_key_size": 21, "raw_value_size": 3516138, "raw_average_value_size": 3028, "num_data_blocks": 339, "num_entries": 1161, "num_filter_entries": 1161, "num_deletions": 266, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599804, "oldest_key_time": 1765599804, "file_creation_time": 1765599956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 26010 microseconds, and 10915 cpu microseconds.
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.652408) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 3551774 bytes OK
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.652435) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.653954) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.653970) EVENT_LOG_v1 {"time_micros": 1765599956653965, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.653989) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 3634332, prev total WAL file size 3634332, number of live WAL files 2.
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.655148) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(3468KB)], [65(10MB)]
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599956655223, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 14778582, "oldest_snapshot_seqno": -1}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6822 keys, 12938059 bytes, temperature: kUnknown
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599956772016, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 12938059, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12883413, "index_size": 36486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17093, "raw_key_size": 170657, "raw_average_key_size": 25, "raw_value_size": 12752033, "raw_average_value_size": 1869, "num_data_blocks": 1467, "num_entries": 6822, "num_filter_entries": 6822, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599956, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.772554) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 12938059 bytes
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.774351) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.2 rd, 110.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 10.7 +0.0 blob) out(12.3 +0.0 blob), read-write-amplify(7.8) write-amplify(3.6) OK, records in: 7359, records dropped: 537 output_compression: NoCompression
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.774367) EVENT_LOG_v1 {"time_micros": 1765599956774359, "job": 36, "event": "compaction_finished", "compaction_time_micros": 117123, "compaction_time_cpu_micros": 37990, "output_level": 6, "num_output_files": 1, "total_output_size": 12938059, "num_input_records": 7359, "num_output_records": 6822, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599956775645, "job": 36, "event": "table_file_deletion", "file_number": 67}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599956778147, "job": 36, "event": "table_file_deletion", "file_number": 65}
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.655009) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.778374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.778385) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.778388) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.778392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:25:56 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:25:56.778395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:25:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e468 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:25:57 compute-0 ceph-mon[75071]: pgmap v1725: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 2.1 KiB/s wr, 131 op/s
Dec 13 04:25:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1726: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 2.0 KiB/s wr, 123 op/s
Dec 13 04:25:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:25:57.944 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '20'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:25:58 compute-0 nova_compute[243704]: 2025-12-13 04:25:58.637 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:58 compute-0 ceph-mon[75071]: pgmap v1726: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 84 KiB/s rd, 2.0 KiB/s wr, 123 op/s
Dec 13 04:25:58 compute-0 podman[276237]: 2025-12-13 04:25:58.904666857 +0000 UTC m=+0.055423148 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec 13 04:25:59 compute-0 nova_compute[243704]: 2025-12-13 04:25:59.513 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:25:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e468 do_prune osdmap full prune enabled
Dec 13 04:25:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e469 e469: 3 total, 3 up, 3 in
Dec 13 04:25:59 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e469: 3 total, 3 up, 3 in
Dec 13 04:25:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1728: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 921 B/s wr, 48 op/s
Dec 13 04:26:01 compute-0 ceph-mon[75071]: osdmap e469: 3 total, 3 up, 3 in
Dec 13 04:26:01 compute-0 ceph-mon[75071]: pgmap v1728: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 921 B/s wr, 48 op/s
Dec 13 04:26:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1729: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Dec 13 04:26:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e469 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e469 do_prune osdmap full prune enabled
Dec 13 04:26:02 compute-0 ceph-mon[75071]: pgmap v1729: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Dec 13 04:26:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e470 e470: 3 total, 3 up, 3 in
Dec 13 04:26:02 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e470: 3 total, 3 up, 3 in
Dec 13 04:26:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328703561' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2328703561' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:03 compute-0 nova_compute[243704]: 2025-12-13 04:26:03.640 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:03 compute-0 ceph-mon[75071]: osdmap e470: 3 total, 3 up, 3 in
Dec 13 04:26:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2328703561' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2328703561' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1731: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 1023 B/s wr, 10 op/s
Dec 13 04:26:04 compute-0 nova_compute[243704]: 2025-12-13 04:26:04.516 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:04 compute-0 ceph-mon[75071]: pgmap v1731: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 7.4 KiB/s rd, 1023 B/s wr, 10 op/s
Dec 13 04:26:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1732: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Dec 13 04:26:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e470 do_prune osdmap full prune enabled
Dec 13 04:26:06 compute-0 ceph-mon[75071]: pgmap v1732: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Dec 13 04:26:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e471 e471: 3 total, 3 up, 3 in
Dec 13 04:26:06 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e471: 3 total, 3 up, 3 in
Dec 13 04:26:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e471 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e471 do_prune osdmap full prune enabled
Dec 13 04:26:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e472 e472: 3 total, 3 up, 3 in
Dec 13 04:26:07 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e472: 3 total, 3 up, 3 in
Dec 13 04:26:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1735: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Dec 13 04:26:07 compute-0 ceph-mon[75071]: osdmap e471: 3 total, 3 up, 3 in
Dec 13 04:26:07 compute-0 ceph-mon[75071]: osdmap e472: 3 total, 3 up, 3 in
Dec 13 04:26:08 compute-0 nova_compute[243704]: 2025-12-13 04:26:08.642 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e472 do_prune osdmap full prune enabled
Dec 13 04:26:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e473 e473: 3 total, 3 up, 3 in
Dec 13 04:26:09 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e473: 3 total, 3 up, 3 in
Dec 13 04:26:09 compute-0 nova_compute[243704]: 2025-12-13 04:26:09.517 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:09 compute-0 ceph-mon[75071]: pgmap v1735: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 1.5 KiB/s wr, 45 op/s
Dec 13 04:26:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1737: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Dec 13 04:26:11 compute-0 ceph-mon[75071]: osdmap e473: 3 total, 3 up, 3 in
Dec 13 04:26:11 compute-0 ceph-mon[75071]: pgmap v1737: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Dec 13 04:26:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1738: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Dec 13 04:26:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.336691) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599972336749, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 427, "num_deletes": 250, "total_data_size": 298636, "memory_usage": 306352, "flush_reason": "Manual Compaction"}
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Dec 13 04:26:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599972526596, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 280437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34304, "largest_seqno": 34730, "table_properties": {"data_size": 277943, "index_size": 594, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6700, "raw_average_key_size": 20, "raw_value_size": 272858, "raw_average_value_size": 836, "num_data_blocks": 26, "num_entries": 326, "num_filter_entries": 326, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599957, "oldest_key_time": 1765599957, "file_creation_time": 1765599972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 189970 microseconds, and 2806 cpu microseconds.
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.526656) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 280437 bytes OK
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.526682) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.680806) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.680864) EVENT_LOG_v1 {"time_micros": 1765599972680851, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.680896) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 295985, prev total WAL file size 323436, number of live WAL files 2.
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.681622) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303033' seq:72057594037927935, type:22 .. '6D6772737461740031323534' seq:0, type:0; will stop at (end)
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(273KB)], [68(12MB)]
Dec 13 04:26:12 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599972681717, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 13218496, "oldest_snapshot_seqno": -1}
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6638 keys, 9879394 bytes, temperature: kUnknown
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599973127809, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 9879394, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9830558, "index_size": 31124, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 167084, "raw_average_key_size": 25, "raw_value_size": 9706864, "raw_average_value_size": 1462, "num_data_blocks": 1243, "num_entries": 6638, "num_filter_entries": 6638, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765599972, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.128365) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 9879394 bytes
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.230835) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 29.6 rd, 22.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 12.3 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(82.4) write-amplify(35.2) OK, records in: 7148, records dropped: 510 output_compression: NoCompression
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.230891) EVENT_LOG_v1 {"time_micros": 1765599973230868, "job": 38, "event": "compaction_finished", "compaction_time_micros": 446264, "compaction_time_cpu_micros": 55178, "output_level": 6, "num_output_files": 1, "total_output_size": 9879394, "num_input_records": 7148, "num_output_records": 6638, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599973231517, "job": 38, "event": "table_file_deletion", "file_number": 70}
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765599973236541, "job": 38, "event": "table_file_deletion", "file_number": 68}
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:12.681480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.236667) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.236676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.236679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.236682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:13 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:26:13.236686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:26:13 compute-0 nova_compute[243704]: 2025-12-13 04:26:13.643 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1739: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 27 op/s
Dec 13 04:26:14 compute-0 ceph-mon[75071]: pgmap v1738: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Dec 13 04:26:14 compute-0 nova_compute[243704]: 2025-12-13 04:26:14.520 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:15 compute-0 podman[276258]: 2025-12-13 04:26:15.030027345 +0000 UTC m=+0.161625845 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec 13 04:26:15 compute-0 ceph-mon[75071]: pgmap v1739: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.9 KiB/s wr, 27 op/s
Dec 13 04:26:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1740: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Dec 13 04:26:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:16 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2226072230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:16 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2226072230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:16 compute-0 ceph-mon[75071]: pgmap v1740: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.6 KiB/s wr, 34 op/s
Dec 13 04:26:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2226072230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2226072230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e473 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:17 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e473 do_prune osdmap full prune enabled
Dec 13 04:26:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1741: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Dec 13 04:26:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e474 e474: 3 total, 3 up, 3 in
Dec 13 04:26:18 compute-0 nova_compute[243704]: 2025-12-13 04:26:18.647 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:18 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e474: 3 total, 3 up, 3 in
Dec 13 04:26:19 compute-0 nova_compute[243704]: 2025-12-13 04:26:19.523 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1743: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 204 B/s wr, 16 op/s
Dec 13 04:26:20 compute-0 ceph-mon[75071]: pgmap v1741: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 29 op/s
Dec 13 04:26:20 compute-0 ceph-mon[75071]: osdmap e474: 3 total, 3 up, 3 in
Dec 13 04:26:21 compute-0 ceph-mon[75071]: pgmap v1743: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 204 B/s wr, 16 op/s
Dec 13 04:26:21 compute-0 nova_compute[243704]: 2025-12-13 04:26:21.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 409 B/s wr, 17 op/s
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.891 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.918 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.919 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.920 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.920 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:26:22 compute-0 nova_compute[243704]: 2025-12-13 04:26:22.920 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:26:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e474 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e474 do_prune osdmap full prune enabled
Dec 13 04:26:23 compute-0 ceph-mon[75071]: pgmap v1744: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 409 B/s wr, 17 op/s
Dec 13 04:26:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:26:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3990993448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:26:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e475 e475: 3 total, 3 up, 3 in
Dec 13 04:26:23 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e475: 3 total, 3 up, 3 in
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.513 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.592s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.650 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.745 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.747 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4345MB free_disk=59.988054814748466GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.747 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.748 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.829 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.830 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:26:23 compute-0 nova_compute[243704]: 2025-12-13 04:26:23.849 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:26:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1746: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 383 B/s wr, 8 op/s
Dec 13 04:26:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3990993448' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:26:24 compute-0 ceph-mon[75071]: osdmap e475: 3 total, 3 up, 3 in
Dec 13 04:26:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:26:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2896906495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:26:24 compute-0 nova_compute[243704]: 2025-12-13 04:26:24.526 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:24 compute-0 nova_compute[243704]: 2025-12-13 04:26:24.550 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.701s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:26:24 compute-0 nova_compute[243704]: 2025-12-13 04:26:24.557 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:26:24 compute-0 nova_compute[243704]: 2025-12-13 04:26:24.569 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:26:24 compute-0 nova_compute[243704]: 2025-12-13 04:26:24.571 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:26:24 compute-0 nova_compute[243704]: 2025-12-13 04:26:24.571 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:26:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e475 do_prune osdmap full prune enabled
Dec 13 04:26:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e476 e476: 3 total, 3 up, 3 in
Dec 13 04:26:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e476: 3 total, 3 up, 3 in
Dec 13 04:26:25 compute-0 ceph-mon[75071]: pgmap v1746: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 5.6 KiB/s rd, 383 B/s wr, 8 op/s
Dec 13 04:26:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2896906495' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:26:25 compute-0 nova_compute[243704]: 2025-12-13 04:26:25.557 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:25 compute-0 nova_compute[243704]: 2025-12-13 04:26:25.558 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:25 compute-0 nova_compute[243704]: 2025-12-13 04:26:25.558 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:25 compute-0 nova_compute[243704]: 2025-12-13 04:26:25.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1748: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 24 op/s
Dec 13 04:26:26 compute-0 ceph-mon[75071]: osdmap e476: 3 total, 3 up, 3 in
Dec 13 04:26:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2566950840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2566950840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:26 compute-0 podman[276329]: 2025-12-13 04:26:26.954840933 +0000 UTC m=+0.091715765 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:26:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Dec 13 04:26:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e476 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:28 compute-0 ceph-mon[75071]: pgmap v1748: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.5 KiB/s wr, 24 op/s
Dec 13 04:26:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2566950840' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2566950840' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:28 compute-0 nova_compute[243704]: 2025-12-13 04:26:28.678 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:28 compute-0 nova_compute[243704]: 2025-12-13 04:26:28.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:29 compute-0 nova_compute[243704]: 2025-12-13 04:26:29.530 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:29 compute-0 ceph-mon[75071]: pgmap v1749: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 9.2 KiB/s rd, 1.4 KiB/s wr, 14 op/s
Dec 13 04:26:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 13 04:26:29 compute-0 podman[276349]: 2025-12-13 04:26:29.947890647 +0000 UTC m=+0.095782785 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:26:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e476 do_prune osdmap full prune enabled
Dec 13 04:26:31 compute-0 ceph-mon[75071]: pgmap v1750: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.7 KiB/s wr, 40 op/s
Dec 13 04:26:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e477 e477: 3 total, 3 up, 3 in
Dec 13 04:26:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e477: 3 total, 3 up, 3 in
Dec 13 04:26:31 compute-0 nova_compute[243704]: 2025-12-13 04:26:31.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:31 compute-0 nova_compute[243704]: 2025-12-13 04:26:31.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:26:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1752: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Dec 13 04:26:32 compute-0 ceph-mon[75071]: osdmap e477: 3 total, 3 up, 3 in
Dec 13 04:26:32 compute-0 nova_compute[243704]: 2025-12-13 04:26:32.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:26:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e477 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e477 do_prune osdmap full prune enabled
Dec 13 04:26:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e478 e478: 3 total, 3 up, 3 in
Dec 13 04:26:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e478: 3 total, 3 up, 3 in
Dec 13 04:26:33 compute-0 ceph-mon[75071]: pgmap v1752: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 2.1 KiB/s wr, 45 op/s
Dec 13 04:26:33 compute-0 ceph-mon[75071]: osdmap e478: 3 total, 3 up, 3 in
Dec 13 04:26:33 compute-0 nova_compute[243704]: 2025-12-13 04:26:33.681 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Dec 13 04:26:34 compute-0 nova_compute[243704]: 2025-12-13 04:26:34.533 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3467311598' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3467311598' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:26:35.103 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:26:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:26:35.103 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:26:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:26:35.104 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:26:35 compute-0 ceph-mon[75071]: pgmap v1754: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Dec 13 04:26:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3467311598' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:35 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3467311598' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 73 op/s
Dec 13 04:26:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e478 do_prune osdmap full prune enabled
Dec 13 04:26:37 compute-0 ceph-mon[75071]: pgmap v1755: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 3.2 KiB/s wr, 73 op/s
Dec 13 04:26:37 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e479 e479: 3 total, 3 up, 3 in
Dec 13 04:26:37 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e479: 3 total, 3 up, 3 in
Dec 13 04:26:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Dec 13 04:26:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e479 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e479 do_prune osdmap full prune enabled
Dec 13 04:26:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e480 e480: 3 total, 3 up, 3 in
Dec 13 04:26:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e480: 3 total, 3 up, 3 in
Dec 13 04:26:38 compute-0 ceph-mon[75071]: osdmap e479: 3 total, 3 up, 3 in
Dec 13 04:26:38 compute-0 ceph-mon[75071]: osdmap e480: 3 total, 3 up, 3 in
Dec 13 04:26:38 compute-0 nova_compute[243704]: 2025-12-13 04:26:38.682 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:39 compute-0 nova_compute[243704]: 2025-12-13 04:26:39.535 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e480 do_prune osdmap full prune enabled
Dec 13 04:26:39 compute-0 ceph-mon[75071]: pgmap v1757: 305 pgs: 305 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.7 KiB/s wr, 52 op/s
Dec 13 04:26:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e481 e481: 3 total, 3 up, 3 in
Dec 13 04:26:39 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e481: 3 total, 3 up, 3 in
Dec 13 04:26:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.8 KiB/s wr, 87 op/s
Dec 13 04:26:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:26:40
Dec 13 04:26:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:26:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:26:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.control', '.mgr']
Dec 13 04:26:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:26:40 compute-0 ceph-mon[75071]: osdmap e481: 3 total, 3 up, 3 in
Dec 13 04:26:40 compute-0 ceph-mon[75071]: pgmap v1760: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 271 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 65 KiB/s rd, 4.8 KiB/s wr, 87 op/s
Dec 13 04:26:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 37 op/s
Dec 13 04:26:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:26:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:26:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2597809272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:42 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:42 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2597809272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:43 compute-0 ceph-mon[75071]: pgmap v1761: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 2.3 KiB/s wr, 37 op/s
Dec 13 04:26:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2597809272' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:43 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2597809272' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:26:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:43 compute-0 nova_compute[243704]: 2025-12-13 04:26:43.685 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1762: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Dec 13 04:26:44 compute-0 nova_compute[243704]: 2025-12-13 04:26:44.538 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:45 compute-0 ceph-mon[75071]: pgmap v1762: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 2.2 KiB/s wr, 36 op/s
Dec 13 04:26:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1074835989' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1074835989' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271853807' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/271853807' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 68 op/s
Dec 13 04:26:45 compute-0 podman[276370]: 2025-12-13 04:26:45.982283561 +0000 UTC m=+0.119512280 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 04:26:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1074835989' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1074835989' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/271853807' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/271853807' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:47 compute-0 ceph-mon[75071]: pgmap v1763: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 3.0 KiB/s wr, 68 op/s
Dec 13 04:26:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1764: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.5 KiB/s wr, 56 op/s
Dec 13 04:26:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e481 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e481 do_prune osdmap full prune enabled
Dec 13 04:26:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 e482: 3 total, 3 up, 3 in
Dec 13 04:26:48 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e482: 3 total, 3 up, 3 in
Dec 13 04:26:48 compute-0 nova_compute[243704]: 2025-12-13 04:26:48.687 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283020035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:49 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/283020035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:49 compute-0 ceph-mon[75071]: pgmap v1764: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.5 KiB/s wr, 56 op/s
Dec 13 04:26:49 compute-0 ceph-mon[75071]: osdmap e482: 3 total, 3 up, 3 in
Dec 13 04:26:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/283020035' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:49 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/283020035' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:49 compute-0 nova_compute[243704]: 2025-12-13 04:26:49.541 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 53 op/s
Dec 13 04:26:51 compute-0 ceph-mon[75071]: pgmap v1766: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.7 KiB/s wr, 53 op/s
Dec 13 04:26:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344791851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1344791851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Dec 13 04:26:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1344791851' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1344791851' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.6853112893124115e-06 of space, bias 1.0, pg target 0.0011055933867937233 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.002910416752067 of space, bias 1.0, pg target 0.8731250256201001 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.483033944505216e-06 of space, bias 1.0, pg target 0.0007449101833515647 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667026483260414 of space, bias 1.0, pg target 0.20001079449781242 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3125209034579863e-06 of space, bias 4.0, pg target 0.0015750250841495836 quantized to 16 (current 16)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:26:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:26:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:53 compute-0 ceph-mon[75071]: pgmap v1767: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Dec 13 04:26:53 compute-0 nova_compute[243704]: 2025-12-13 04:26:53.689 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1768: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Dec 13 04:26:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:26:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543096650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:26:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2543096650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2543096650' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:26:54 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2543096650' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:26:54 compute-0 nova_compute[243704]: 2025-12-13 04:26:54.544 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:54 compute-0 sudo[276396]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:26:54 compute-0 sudo[276396]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:54 compute-0 sudo[276396]: pam_unix(sudo:session): session closed for user root
Dec 13 04:26:54 compute-0 sudo[276421]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:26:54 compute-0 sudo[276421]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:55 compute-0 ceph-mon[75071]: pgmap v1768: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.9 KiB/s wr, 51 op/s
Dec 13 04:26:55 compute-0 sudo[276421]: pam_unix(sudo:session): session closed for user root
Dec 13 04:26:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:26:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:26:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:26:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:26:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:26:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:26:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:26:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:26:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:26:55 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:26:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:26:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:26:55 compute-0 sudo[276477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:26:55 compute-0 sudo[276477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:55 compute-0 sudo[276477]: pam_unix(sudo:session): session closed for user root
Dec 13 04:26:55 compute-0 sudo[276502]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:26:55 compute-0 sudo[276502]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 KiB/s wr, 53 op/s
Dec 13 04:26:56 compute-0 podman[276539]: 2025-12-13 04:26:56.000556301 +0000 UTC m=+0.075758951 container create 56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bose, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:26:56 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:26:56.029 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=21, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=20) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:26:56 compute-0 nova_compute[243704]: 2025-12-13 04:26:56.030 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:56 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:26:56.031 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:26:56 compute-0 systemd[1]: Started libpod-conmon-56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527.scope.
Dec 13 04:26:56 compute-0 podman[276539]: 2025-12-13 04:26:55.962221808 +0000 UTC m=+0.037424508 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:26:56 compute-0 podman[276539]: 2025-12-13 04:26:56.100749055 +0000 UTC m=+0.175951735 container init 56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bose, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:26:56 compute-0 podman[276539]: 2025-12-13 04:26:56.110787027 +0000 UTC m=+0.185989677 container start 56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:26:56 compute-0 podman[276539]: 2025-12-13 04:26:56.115090044 +0000 UTC m=+0.190292744 container attach 56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:26:56 compute-0 cool_bose[276555]: 167 167
Dec 13 04:26:56 compute-0 systemd[1]: libpod-56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527.scope: Deactivated successfully.
Dec 13 04:26:56 compute-0 podman[276539]: 2025-12-13 04:26:56.119920836 +0000 UTC m=+0.195123516 container died 56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bose, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:26:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad9cabca98a7634f68317686c6aa579986d3b8f136a2aa39afe92f4d90f380e8-merged.mount: Deactivated successfully.
Dec 13 04:26:56 compute-0 podman[276539]: 2025-12-13 04:26:56.160211431 +0000 UTC m=+0.235414081 container remove 56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_bose, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 04:26:56 compute-0 systemd[1]: libpod-conmon-56aa4613b3e8633fd876aceb0fd17f3ffc861ad04169a246779dd7cc0b164527.scope: Deactivated successfully.
Dec 13 04:26:56 compute-0 podman[276579]: 2025-12-13 04:26:56.363256791 +0000 UTC m=+0.043284938 container create 4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:26:56 compute-0 systemd[1]: Started libpod-conmon-4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec.scope.
Dec 13 04:26:56 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882530890ae5b5e6e873f0c04b3f3b6d3a7ad461557718266932f2303cc99ebd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:56 compute-0 podman[276579]: 2025-12-13 04:26:56.343298728 +0000 UTC m=+0.023326895 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882530890ae5b5e6e873f0c04b3f3b6d3a7ad461557718266932f2303cc99ebd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882530890ae5b5e6e873f0c04b3f3b6d3a7ad461557718266932f2303cc99ebd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882530890ae5b5e6e873f0c04b3f3b6d3a7ad461557718266932f2303cc99ebd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/882530890ae5b5e6e873f0c04b3f3b6d3a7ad461557718266932f2303cc99ebd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:56 compute-0 podman[276579]: 2025-12-13 04:26:56.456020583 +0000 UTC m=+0.136048820 container init 4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:26:56 compute-0 podman[276579]: 2025-12-13 04:26:56.468559183 +0000 UTC m=+0.148587340 container start 4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:26:56 compute-0 podman[276579]: 2025-12-13 04:26:56.473932819 +0000 UTC m=+0.153961066 container attach 4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:26:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:26:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:26:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:26:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:26:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:26:56 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:26:57 compute-0 objective_chatelet[276596]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:26:57 compute-0 objective_chatelet[276596]: --> All data devices are unavailable
Dec 13 04:26:57 compute-0 systemd[1]: libpod-4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec.scope: Deactivated successfully.
Dec 13 04:26:57 compute-0 podman[276579]: 2025-12-13 04:26:57.03565051 +0000 UTC m=+0.715678657 container died 4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Dec 13 04:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-882530890ae5b5e6e873f0c04b3f3b6d3a7ad461557718266932f2303cc99ebd-merged.mount: Deactivated successfully.
Dec 13 04:26:57 compute-0 podman[276579]: 2025-12-13 04:26:57.106535746 +0000 UTC m=+0.786563893 container remove 4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:26:57 compute-0 systemd[1]: libpod-conmon-4a66265a892c17e8dd4133f6000c6161c94175c1436cbb9d7ac0a1c49dbd63ec.scope: Deactivated successfully.
Dec 13 04:26:57 compute-0 sudo[276502]: pam_unix(sudo:session): session closed for user root
Dec 13 04:26:57 compute-0 podman[276617]: 2025-12-13 04:26:57.183659372 +0000 UTC m=+0.078874965 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent)
Dec 13 04:26:57 compute-0 sudo[276644]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:26:57 compute-0 sudo[276644]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:57 compute-0 sudo[276644]: pam_unix(sudo:session): session closed for user root
Dec 13 04:26:57 compute-0 sudo[276669]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:26:57 compute-0 sudo[276669]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:57 compute-0 ceph-mon[75071]: pgmap v1769: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 KiB/s wr, 53 op/s
Dec 13 04:26:57 compute-0 podman[276707]: 2025-12-13 04:26:57.609150569 +0000 UTC m=+0.068885983 container create e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:26:57 compute-0 systemd[1]: Started libpod-conmon-e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a.scope.
Dec 13 04:26:57 compute-0 podman[276707]: 2025-12-13 04:26:57.582352911 +0000 UTC m=+0.042088365 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:57 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:26:57 compute-0 podman[276707]: 2025-12-13 04:26:57.710903776 +0000 UTC m=+0.170639190 container init e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:26:57 compute-0 podman[276707]: 2025-12-13 04:26:57.723479897 +0000 UTC m=+0.183215291 container start e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:26:57 compute-0 podman[276707]: 2025-12-13 04:26:57.726221122 +0000 UTC m=+0.185956536 container attach e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ishizaka, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:26:57 compute-0 cool_ishizaka[276723]: 167 167
Dec 13 04:26:57 compute-0 systemd[1]: libpod-e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a.scope: Deactivated successfully.
Dec 13 04:26:57 compute-0 podman[276707]: 2025-12-13 04:26:57.7298356 +0000 UTC m=+0.189570994 container died e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:26:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-2487d89a782626e72da60a9ea4eb6a7fa9ae89d56c32f596d9448a02541f40d4-merged.mount: Deactivated successfully.
Dec 13 04:26:57 compute-0 podman[276707]: 2025-12-13 04:26:57.760543355 +0000 UTC m=+0.220278749 container remove e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cool_ishizaka, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:26:57 compute-0 systemd[1]: libpod-conmon-e2ecadf562ee0ff2c003dbb5de2cba9a1ebab7fac84d419066d13a5789e24e4a.scope: Deactivated successfully.
Dec 13 04:26:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 KiB/s wr, 53 op/s
Dec 13 04:26:57 compute-0 podman[276747]: 2025-12-13 04:26:57.968535119 +0000 UTC m=+0.049834185 container create cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kirch, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:26:58 compute-0 systemd[1]: Started libpod-conmon-cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056.scope.
Dec 13 04:26:58 compute-0 podman[276747]: 2025-12-13 04:26:57.948387491 +0000 UTC m=+0.029686567 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafe8e583e797e9f04ef6d9160525b50e04f72d0bd51cb8062fc778a9205351/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafe8e583e797e9f04ef6d9160525b50e04f72d0bd51cb8062fc778a9205351/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafe8e583e797e9f04ef6d9160525b50e04f72d0bd51cb8062fc778a9205351/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/faafe8e583e797e9f04ef6d9160525b50e04f72d0bd51cb8062fc778a9205351/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:58 compute-0 podman[276747]: 2025-12-13 04:26:58.066535494 +0000 UTC m=+0.147834600 container init cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Dec 13 04:26:58 compute-0 podman[276747]: 2025-12-13 04:26:58.081011217 +0000 UTC m=+0.162310323 container start cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kirch, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:26:58 compute-0 podman[276747]: 2025-12-13 04:26:58.084841141 +0000 UTC m=+0.166140247 container attach cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kirch, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Dec 13 04:26:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:26:58 compute-0 strange_kirch[276764]: {
Dec 13 04:26:58 compute-0 strange_kirch[276764]:     "0": [
Dec 13 04:26:58 compute-0 strange_kirch[276764]:         {
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "devices": [
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "/dev/loop3"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             ],
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_name": "ceph_lv0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_size": "21470642176",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "name": "ceph_lv0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "tags": {
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cluster_name": "ceph",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.crush_device_class": "",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.encrypted": "0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.objectstore": "bluestore",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osd_id": "0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.type": "block",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.vdo": "0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.with_tpm": "0"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             },
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "type": "block",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "vg_name": "ceph_vg0"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:         }
Dec 13 04:26:58 compute-0 strange_kirch[276764]:     ],
Dec 13 04:26:58 compute-0 strange_kirch[276764]:     "1": [
Dec 13 04:26:58 compute-0 strange_kirch[276764]:         {
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "devices": [
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "/dev/loop4"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             ],
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_name": "ceph_lv1",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_size": "21470642176",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "name": "ceph_lv1",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "tags": {
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cluster_name": "ceph",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.crush_device_class": "",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.encrypted": "0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.objectstore": "bluestore",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osd_id": "1",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.type": "block",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.vdo": "0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.with_tpm": "0"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             },
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "type": "block",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "vg_name": "ceph_vg1"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:         }
Dec 13 04:26:58 compute-0 strange_kirch[276764]:     ],
Dec 13 04:26:58 compute-0 strange_kirch[276764]:     "2": [
Dec 13 04:26:58 compute-0 strange_kirch[276764]:         {
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "devices": [
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "/dev/loop5"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             ],
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_name": "ceph_lv2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_size": "21470642176",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "name": "ceph_lv2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "tags": {
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.cluster_name": "ceph",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.crush_device_class": "",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.encrypted": "0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.objectstore": "bluestore",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osd_id": "2",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.type": "block",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.vdo": "0",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:                 "ceph.with_tpm": "0"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             },
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "type": "block",
Dec 13 04:26:58 compute-0 strange_kirch[276764]:             "vg_name": "ceph_vg2"
Dec 13 04:26:58 compute-0 strange_kirch[276764]:         }
Dec 13 04:26:58 compute-0 strange_kirch[276764]:     ]
Dec 13 04:26:58 compute-0 strange_kirch[276764]: }
Dec 13 04:26:58 compute-0 systemd[1]: libpod-cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056.scope: Deactivated successfully.
Dec 13 04:26:58 compute-0 podman[276747]: 2025-12-13 04:26:58.398148278 +0000 UTC m=+0.479447344 container died cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Dec 13 04:26:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-faafe8e583e797e9f04ef6d9160525b50e04f72d0bd51cb8062fc778a9205351-merged.mount: Deactivated successfully.
Dec 13 04:26:58 compute-0 podman[276747]: 2025-12-13 04:26:58.439747079 +0000 UTC m=+0.521046135 container remove cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_kirch, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:26:58 compute-0 systemd[1]: libpod-conmon-cbf907a6b857ffd71bfc18496eca777faf40600e23cbf05a65aa8cc035d1a056.scope: Deactivated successfully.
Dec 13 04:26:58 compute-0 sudo[276669]: pam_unix(sudo:session): session closed for user root
Dec 13 04:26:58 compute-0 sudo[276785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:26:58 compute-0 sudo[276785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:58 compute-0 sudo[276785]: pam_unix(sudo:session): session closed for user root
Dec 13 04:26:58 compute-0 sudo[276810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:26:58 compute-0 sudo[276810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:26:58 compute-0 nova_compute[243704]: 2025-12-13 04:26:58.691 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:58 compute-0 podman[276848]: 2025-12-13 04:26:58.933647415 +0000 UTC m=+0.036524044 container create e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:26:58 compute-0 systemd[1]: Started libpod-conmon-e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818.scope.
Dec 13 04:26:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:26:58 compute-0 podman[276848]: 2025-12-13 04:26:58.99747984 +0000 UTC m=+0.100356479 container init e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:26:59 compute-0 podman[276848]: 2025-12-13 04:26:59.004150511 +0000 UTC m=+0.107027160 container start e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_merkle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:26:59 compute-0 podman[276848]: 2025-12-13 04:26:59.008561122 +0000 UTC m=+0.111437761 container attach e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:26:59 compute-0 exciting_merkle[276865]: 167 167
Dec 13 04:26:59 compute-0 systemd[1]: libpod-e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818.scope: Deactivated successfully.
Dec 13 04:26:59 compute-0 podman[276848]: 2025-12-13 04:26:59.010460013 +0000 UTC m=+0.113336622 container died e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:26:59 compute-0 podman[276848]: 2025-12-13 04:26:58.918340649 +0000 UTC m=+0.021217288 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-406dd9bf1bbf20dc1a09bc515b0c694fadc74aea6cdf11c835c8b8e2e2b7b751-merged.mount: Deactivated successfully.
Dec 13 04:26:59 compute-0 podman[276848]: 2025-12-13 04:26:59.052981509 +0000 UTC m=+0.155858138 container remove e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_merkle, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:26:59 compute-0 systemd[1]: libpod-conmon-e410eff75500ab101aa43fd4bb4bbcb88b2a985caa5bb866790632de594ea818.scope: Deactivated successfully.
Dec 13 04:26:59 compute-0 podman[276889]: 2025-12-13 04:26:59.283229608 +0000 UTC m=+0.079510073 container create 86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ishizaka, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:26:59 compute-0 systemd[1]: Started libpod-conmon-86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058.scope.
Dec 13 04:26:59 compute-0 podman[276889]: 2025-12-13 04:26:59.254703113 +0000 UTC m=+0.050983638 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:26:59 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c169264eabf4778d354f21c41e79c00e25f911ceb0d5a4204cfa62a660583a4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c169264eabf4778d354f21c41e79c00e25f911ceb0d5a4204cfa62a660583a4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c169264eabf4778d354f21c41e79c00e25f911ceb0d5a4204cfa62a660583a4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c169264eabf4778d354f21c41e79c00e25f911ceb0d5a4204cfa62a660583a4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:26:59 compute-0 podman[276889]: 2025-12-13 04:26:59.389338933 +0000 UTC m=+0.185619468 container init 86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ishizaka, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:26:59 compute-0 podman[276889]: 2025-12-13 04:26:59.397092084 +0000 UTC m=+0.193372539 container start 86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ishizaka, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Dec 13 04:26:59 compute-0 podman[276889]: 2025-12-13 04:26:59.401212975 +0000 UTC m=+0.197493500 container attach 86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:26:59 compute-0 ceph-mon[75071]: pgmap v1770: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 KiB/s wr, 53 op/s
Dec 13 04:26:59 compute-0 nova_compute[243704]: 2025-12-13 04:26:59.546 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:26:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 KiB/s wr, 45 op/s
Dec 13 04:27:00 compute-0 lvm[276992]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:27:00 compute-0 lvm[276993]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:27:00 compute-0 lvm[276992]: VG ceph_vg0 finished
Dec 13 04:27:00 compute-0 lvm[276993]: VG ceph_vg1 finished
Dec 13 04:27:00 compute-0 lvm[276995]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:27:00 compute-0 lvm[276995]: VG ceph_vg2 finished
Dec 13 04:27:00 compute-0 podman[276982]: 2025-12-13 04:27:00.135062175 +0000 UTC m=+0.064672619 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:27:00 compute-0 busy_ishizaka[276906]: {}
Dec 13 04:27:00 compute-0 systemd[1]: libpod-86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058.scope: Deactivated successfully.
Dec 13 04:27:00 compute-0 systemd[1]: libpod-86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058.scope: Consumed 1.392s CPU time.
Dec 13 04:27:00 compute-0 podman[276889]: 2025-12-13 04:27:00.220081275 +0000 UTC m=+1.016361730 container died 86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:27:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-5c169264eabf4778d354f21c41e79c00e25f911ceb0d5a4204cfa62a660583a4-merged.mount: Deactivated successfully.
Dec 13 04:27:00 compute-0 podman[276889]: 2025-12-13 04:27:00.260501265 +0000 UTC m=+1.056781710 container remove 86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ishizaka, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:27:00 compute-0 systemd[1]: libpod-conmon-86673b03c203fe6d84d28e0cae1253acc7e8145d67bc4a79dd978c2437b68058.scope: Deactivated successfully.
Dec 13 04:27:00 compute-0 sudo[276810]: pam_unix(sudo:session): session closed for user root
Dec 13 04:27:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:27:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:27:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:27:00 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:27:00 compute-0 sudo[277020]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:27:00 compute-0 sudo[277020]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:27:00 compute-0 sudo[277020]: pam_unix(sudo:session): session closed for user root
Dec 13 04:27:01 compute-0 ceph-mon[75071]: pgmap v1771: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 1.8 KiB/s wr, 45 op/s
Dec 13 04:27:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:27:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:27:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Dec 13 04:27:02 compute-0 ceph-mon[75071]: pgmap v1772: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.3 KiB/s wr, 29 op/s
Dec 13 04:27:03 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:27:03.034 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '21'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:27:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:03 compute-0 nova_compute[243704]: 2025-12-13 04:27:03.693 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:27:04 compute-0 nova_compute[243704]: 2025-12-13 04:27:04.548 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:04 compute-0 ceph-mon[75071]: pgmap v1773: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:27:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:27:07 compute-0 ceph-mon[75071]: pgmap v1774: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1023 B/s wr, 28 op/s
Dec 13 04:27:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:08 compute-0 nova_compute[243704]: 2025-12-13 04:27:08.695 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:09 compute-0 ceph-mon[75071]: pgmap v1775: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:09 compute-0 nova_compute[243704]: 2025-12-13 04:27:09.551 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:11 compute-0 ceph-mon[75071]: pgmap v1776: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:13 compute-0 ceph-mon[75071]: pgmap v1777: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:13 compute-0 nova_compute[243704]: 2025-12-13 04:27:13.697 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:14 compute-0 nova_compute[243704]: 2025-12-13 04:27:14.554 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:15 compute-0 ceph-mon[75071]: pgmap v1778: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:16 compute-0 podman[277045]: 2025-12-13 04:27:16.470030789 +0000 UTC m=+0.102071836 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:27:17 compute-0 ceph-mon[75071]: pgmap v1779: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:18 compute-0 nova_compute[243704]: 2025-12-13 04:27:18.737 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:19 compute-0 ceph-mon[75071]: pgmap v1780: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:19 compute-0 nova_compute[243704]: 2025-12-13 04:27:19.556 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:21 compute-0 ceph-mon[75071]: pgmap v1781: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:21 compute-0 nova_compute[243704]: 2025-12-13 04:27:21.888 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.891 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.917 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.919 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.919 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.920 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:27:22 compute-0 nova_compute[243704]: 2025-12-13 04:27:22.920 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:27:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:27:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988103453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.490 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.570s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:27:23 compute-0 ceph-mon[75071]: pgmap v1782: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2988103453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.658 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.660 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4343MB free_disk=59.98806025274098GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.660 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.661 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.740 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.903 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.904 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:27:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:23 compute-0 nova_compute[243704]: 2025-12-13 04:27:23.963 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:27:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:27:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/226976310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:27:24 compute-0 nova_compute[243704]: 2025-12-13 04:27:24.546 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:27:24 compute-0 nova_compute[243704]: 2025-12-13 04:27:24.553 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:27:24 compute-0 nova_compute[243704]: 2025-12-13 04:27:24.558 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/226976310' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:27:24 compute-0 nova_compute[243704]: 2025-12-13 04:27:24.566 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:27:24 compute-0 nova_compute[243704]: 2025-12-13 04:27:24.567 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:27:24 compute-0 nova_compute[243704]: 2025-12-13 04:27:24.568 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:27:25 compute-0 nova_compute[243704]: 2025-12-13 04:27:25.553 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:25 compute-0 ceph-mon[75071]: pgmap v1783: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:25 compute-0 nova_compute[243704]: 2025-12-13 04:27:25.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:26 compute-0 nova_compute[243704]: 2025-12-13 04:27:26.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:26 compute-0 nova_compute[243704]: 2025-12-13 04:27:26.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:27 compute-0 podman[277116]: 2025-12-13 04:27:27.962635717 +0000 UTC m=+0.094175900 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 04:27:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:28 compute-0 ceph-mon[75071]: pgmap v1784: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:28 compute-0 nova_compute[243704]: 2025-12-13 04:27:28.742 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:28 compute-0 nova_compute[243704]: 2025-12-13 04:27:28.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e482 do_prune osdmap full prune enabled
Dec 13 04:27:29 compute-0 ceph-mon[75071]: pgmap v1785: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:27:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e483 e483: 3 total, 3 up, 3 in
Dec 13 04:27:29 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e483: 3 total, 3 up, 3 in
Dec 13 04:27:29 compute-0 nova_compute[243704]: 2025-12-13 04:27:29.574 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 409 B/s wr, 4 op/s
Dec 13 04:27:30 compute-0 ceph-mon[75071]: osdmap e483: 3 total, 3 up, 3 in
Dec 13 04:27:30 compute-0 podman[277136]: 2025-12-13 04:27:30.950605973 +0000 UTC m=+0.088891407 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:27:31 compute-0 ceph-mon[75071]: pgmap v1787: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.9 KiB/s rd, 409 B/s wr, 4 op/s
Dec 13 04:27:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 614 B/s wr, 5 op/s
Dec 13 04:27:32 compute-0 ovn_controller[145204]: 2025-12-13T04:27:32Z|00239|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Dec 13 04:27:32 compute-0 nova_compute[243704]: 2025-12-13 04:27:32.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:32 compute-0 nova_compute[243704]: 2025-12-13 04:27:32.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:27:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e483 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e483 do_prune osdmap full prune enabled
Dec 13 04:27:33 compute-0 ceph-mon[75071]: pgmap v1788: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.1 KiB/s rd, 614 B/s wr, 5 op/s
Dec 13 04:27:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e484 e484: 3 total, 3 up, 3 in
Dec 13 04:27:33 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e484: 3 total, 3 up, 3 in
Dec 13 04:27:33 compute-0 nova_compute[243704]: 2025-12-13 04:27:33.745 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:27:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3215665375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:27:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:27:33 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3215665375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:27:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.9 KiB/s rd, 767 B/s wr, 6 op/s
Dec 13 04:27:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:27:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2734455189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:27:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:27:34 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2734455189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:27:34 compute-0 ceph-mon[75071]: osdmap e484: 3 total, 3 up, 3 in
Dec 13 04:27:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3215665375' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:27:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3215665375' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:27:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2734455189' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:27:34 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2734455189' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:27:34 compute-0 nova_compute[243704]: 2025-12-13 04:27:34.576 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:34 compute-0 nova_compute[243704]: 2025-12-13 04:27:34.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:34 compute-0 nova_compute[243704]: 2025-12-13 04:27:34.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 04:27:34 compute-0 nova_compute[243704]: 2025-12-13 04:27:34.895 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 04:27:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:27:35.103 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:27:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:27:35.104 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:27:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:27:35.104 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:27:35 compute-0 ceph-mon[75071]: pgmap v1790: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.9 KiB/s rd, 767 B/s wr, 6 op/s
Dec 13 04:27:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 KiB/s wr, 109 op/s
Dec 13 04:27:37 compute-0 ceph-mon[75071]: pgmap v1791: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.2 KiB/s wr, 109 op/s
Dec 13 04:27:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 KiB/s wr, 97 op/s
Dec 13 04:27:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e484 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e484 do_prune osdmap full prune enabled
Dec 13 04:27:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 e485: 3 total, 3 up, 3 in
Dec 13 04:27:38 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e485: 3 total, 3 up, 3 in
Dec 13 04:27:38 compute-0 nova_compute[243704]: 2025-12-13 04:27:38.747 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:39 compute-0 ceph-mon[75071]: pgmap v1792: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.6 KiB/s wr, 97 op/s
Dec 13 04:27:39 compute-0 ceph-mon[75071]: osdmap e485: 3 total, 3 up, 3 in
Dec 13 04:27:39 compute-0 nova_compute[243704]: 2025-12-13 04:27:39.579 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:39 compute-0 nova_compute[243704]: 2025-12-13 04:27:39.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 KiB/s wr, 115 op/s
Dec 13 04:27:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:27:40
Dec 13 04:27:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:27:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:27:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta']
Dec 13 04:27:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:27:40 compute-0 nova_compute[243704]: 2025-12-13 04:27:40.890 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:27:40 compute-0 nova_compute[243704]: 2025-12-13 04:27:40.890 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 04:27:41 compute-0 ceph-mon[75071]: pgmap v1794: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.5 KiB/s wr, 115 op/s
Dec 13 04:27:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 108 op/s
Dec 13 04:27:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:27:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:27:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:43 compute-0 ceph-mon[75071]: pgmap v1795: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.3 KiB/s wr, 108 op/s
Dec 13 04:27:43 compute-0 nova_compute[243704]: 2025-12-13 04:27:43.790 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 KiB/s wr, 92 op/s
Dec 13 04:27:44 compute-0 nova_compute[243704]: 2025-12-13 04:27:44.581 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:45 compute-0 ceph-mon[75071]: pgmap v1796: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.8 KiB/s wr, 92 op/s
Dec 13 04:27:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:27:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613183557' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:27:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:27:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2613183557' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:27:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 16 op/s
Dec 13 04:27:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2613183557' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:27:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2613183557' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:27:46 compute-0 ceph-mon[75071]: pgmap v1797: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 16 op/s
Dec 13 04:27:46 compute-0 podman[277156]: 2025-12-13 04:27:46.99819056 +0000 UTC m=+0.137446199 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec 13 04:27:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 16 op/s
Dec 13 04:27:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:48 compute-0 nova_compute[243704]: 2025-12-13 04:27:48.834 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:49 compute-0 ceph-mon[75071]: pgmap v1798: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 26 KiB/s wr, 16 op/s
Dec 13 04:27:49 compute-0 nova_compute[243704]: 2025-12-13 04:27:49.583 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 23 KiB/s wr, 14 op/s
Dec 13 04:27:51 compute-0 ceph-mon[75071]: pgmap v1799: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 23 KiB/s wr, 14 op/s
Dec 13 04:27:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.690589814863921e-06 of space, bias 1.0, pg target 0.0011071769444591763 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029145459097298185 of space, bias 1.0, pg target 0.8743637729189455 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.480705183232491e-06 of space, bias 1.0, pg target 0.0007442115549697474 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667026483260414 of space, bias 1.0, pg target 0.20001079449781242 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3120241010531384e-06 of space, bias 4.0, pg target 0.0015744289212637661 quantized to 16 (current 16)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:27:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:27:53 compute-0 ceph-mon[75071]: pgmap v1800: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:27:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:53 compute-0 nova_compute[243704]: 2025-12-13 04:27:53.836 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:27:54 compute-0 nova_compute[243704]: 2025-12-13 04:27:54.586 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:55 compute-0 ceph-mon[75071]: pgmap v1801: 305 pgs: 305 active+clean; 271 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:27:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 283 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.0 MiB/s wr, 20 op/s
Dec 13 04:27:57 compute-0 ceph-mon[75071]: pgmap v1802: 305 pgs: 305 active+clean; 283 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.0 MiB/s wr, 20 op/s
Dec 13 04:27:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 283 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.0 MiB/s wr, 15 op/s
Dec 13 04:27:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:27:58 compute-0 nova_compute[243704]: 2025-12-13 04:27:58.838 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:58 compute-0 podman[277184]: 2025-12-13 04:27:58.919457389 +0000 UTC m=+0.060109355 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent)
Dec 13 04:27:59 compute-0 ceph-mon[75071]: pgmap v1803: 305 pgs: 305 active+clean; 283 MiB data, 566 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.0 MiB/s wr, 15 op/s
Dec 13 04:27:59 compute-0 nova_compute[243704]: 2025-12-13 04:27:59.589 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:27:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 355 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.9 MiB/s wr, 43 op/s
Dec 13 04:28:00 compute-0 sudo[277204]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:28:00 compute-0 sudo[277204]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:00 compute-0 sudo[277204]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:00 compute-0 sudo[277229]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:28:00 compute-0 sudo[277229]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.006 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.007 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.025 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.141 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.142 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.150 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.151 243708 INFO nova.compute.claims [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:28:01 compute-0 sudo[277229]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.299 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:01 compute-0 sudo[277284]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:28:01 compute-0 sudo[277284]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:01 compute-0 sudo[277284]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:01 compute-0 sudo[277316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:28:01 compute-0 sudo[277316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:01 compute-0 podman[277308]: 2025-12-13 04:28:01.424164457 +0000 UTC m=+0.099937958 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: pgmap v1804: 305 pgs: 305 active+clean; 355 MiB data, 640 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.9 MiB/s wr, 43 op/s
Dec 13 04:28:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:28:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:28:01 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:28:01 compute-0 podman[277387]: 2025-12-13 04:28:01.726930009 +0000 UTC m=+0.053390283 container create 93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:28:01 compute-0 systemd[1]: Started libpod-conmon-93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca.scope.
Dec 13 04:28:01 compute-0 podman[277387]: 2025-12-13 04:28:01.703857182 +0000 UTC m=+0.030317476 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:01 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:28:01 compute-0 podman[277387]: 2025-12-13 04:28:01.829512398 +0000 UTC m=+0.155972662 container init 93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:28:01 compute-0 podman[277387]: 2025-12-13 04:28:01.839259673 +0000 UTC m=+0.165719977 container start 93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:28:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2131934741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:01 compute-0 podman[277387]: 2025-12-13 04:28:01.844713171 +0000 UTC m=+0.171173465 container attach 93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_leavitt, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Dec 13 04:28:01 compute-0 systemd[1]: libpod-93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca.scope: Deactivated successfully.
Dec 13 04:28:01 compute-0 nice_leavitt[277404]: 167 167
Dec 13 04:28:01 compute-0 conmon[277404]: conmon 93f0cfe9e8eb00694515 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca.scope/container/memory.events
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.864 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.872 243708 DEBUG nova.compute.provider_tree [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.885 243708 DEBUG nova.scheduler.client.report [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:28:01 compute-0 podman[277411]: 2025-12-13 04:28:01.892653264 +0000 UTC m=+0.031416115 container died 93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.904 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.762s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.905 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:28:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-d15bc7987631b31c598e126d2f73e6d5008b1b27a9f306d99e89e5785457ab06-merged.mount: Deactivated successfully.
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.943 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.944 243708 DEBUG nova.network.neutron [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:28:01 compute-0 podman[277411]: 2025-12-13 04:28:01.944567246 +0000 UTC m=+0.083330057 container remove 93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nice_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:28:01 compute-0 systemd[1]: libpod-conmon-93f0cfe9e8eb00694515158c8e9888508a1f72237c95442842efbfaf1bcd4eca.scope: Deactivated successfully.
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.962 243708 INFO nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:28:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:28:01 compute-0 nova_compute[243704]: 2025-12-13 04:28:01.981 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.023 243708 INFO nova.virt.block_device [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Booting with volume 905b1d83-b68c-433b-9850-36fd7598824c at /dev/vda
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.151 243708 DEBUG os_brick.utils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:28:02 compute-0 podman[277432]: 2025-12-13 04:28:02.154498514 +0000 UTC m=+0.052494149 container create 446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.166 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.196 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.196 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[93f9fc62-82c1-4ad6-b3ba-5fcb3a400ea4]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.199 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.208 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.208 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[1b631ba3-7ebd-4550-943d-3fd3e35f8cc3]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.211 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:02 compute-0 systemd[1]: Started libpod-conmon-446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294.scope.
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.224 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.225 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[3c33cd97-3074-42c7-a412-8d57b91fa993]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:02 compute-0 podman[277432]: 2025-12-13 04:28:02.131740585 +0000 UTC m=+0.029736210 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.227 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[80a8258c-dc48-4742-98a8-5b06cd1a8a9c]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.227 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:02 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad35050e1dfd2a129ef5b4cea6801bb166bf19346b21df2987421d2833f9b93c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad35050e1dfd2a129ef5b4cea6801bb166bf19346b21df2987421d2833f9b93c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad35050e1dfd2a129ef5b4cea6801bb166bf19346b21df2987421d2833f9b93c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad35050e1dfd2a129ef5b4cea6801bb166bf19346b21df2987421d2833f9b93c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad35050e1dfd2a129ef5b4cea6801bb166bf19346b21df2987421d2833f9b93c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.260 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.264 243708 DEBUG os_brick.initiator.connectors.lightos [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.264 243708 DEBUG os_brick.initiator.connectors.lightos [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.264 243708 DEBUG os_brick.initiator.connectors.lightos [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.265 243708 DEBUG os_brick.utils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] <== get_connector_properties: return (112ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.265 243708 DEBUG nova.virt.block_device [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updating existing volume attachment record: cbdb9be7-5cc6-40e6-9e78-8c1933781c30 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:28:02 compute-0 podman[277432]: 2025-12-13 04:28:02.271260458 +0000 UTC m=+0.169256063 container init 446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:28:02 compute-0 podman[277432]: 2025-12-13 04:28:02.280721566 +0000 UTC m=+0.178717171 container start 446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:28:02 compute-0 podman[277432]: 2025-12-13 04:28:02.284863548 +0000 UTC m=+0.182859153 container attach 446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:28:02 compute-0 nova_compute[243704]: 2025-12-13 04:28:02.297 243708 DEBUG nova.policy [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'deba56fa45214f28a3aab4d031dc4155', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43c4864e9f844459a882a9e3d0fe477b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:28:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2131934741' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:02 compute-0 crazy_wilson[277453]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:28:02 compute-0 crazy_wilson[277453]: --> All data devices are unavailable
Dec 13 04:28:02 compute-0 systemd[1]: libpod-446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294.scope: Deactivated successfully.
Dec 13 04:28:02 compute-0 podman[277432]: 2025-12-13 04:28:02.819918675 +0000 UTC m=+0.717914280 container died 446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:28:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad35050e1dfd2a129ef5b4cea6801bb166bf19346b21df2987421d2833f9b93c-merged.mount: Deactivated successfully.
Dec 13 04:28:02 compute-0 podman[277432]: 2025-12-13 04:28:02.87048313 +0000 UTC m=+0.768478735 container remove 446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_wilson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:28:02 compute-0 systemd[1]: libpod-conmon-446d91e50c26694d507cf903165d5ffd85e6d12f636531477c2988001167c294.scope: Deactivated successfully.
Dec 13 04:28:02 compute-0 sudo[277316]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:02 compute-0 sudo[277488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:28:02 compute-0 sudo[277488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:02 compute-0 sudo[277488]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:28:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1587215094' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:03 compute-0 sudo[277513]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:28:03 compute-0 sudo[277513]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:03 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:03.081 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:28:03 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:03.082 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.124 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.214 243708 DEBUG nova.network.neutron [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Successfully created port: 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.331 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.334 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.335 243708 INFO nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Creating image(s)
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.336 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.337 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Ensure instance console log exists: /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.337 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.338 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.339 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:03 compute-0 podman[277550]: 2025-12-13 04:28:03.43982818 +0000 UTC m=+0.062969214 container create 1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_matsumoto, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030)
Dec 13 04:28:03 compute-0 systemd[1]: Started libpod-conmon-1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7.scope.
Dec 13 04:28:03 compute-0 podman[277550]: 2025-12-13 04:28:03.410234155 +0000 UTC m=+0.033375289 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:28:03 compute-0 ceph-mon[75071]: pgmap v1805: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:28:03 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1587215094' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:03 compute-0 podman[277550]: 2025-12-13 04:28:03.536667992 +0000 UTC m=+0.159809016 container init 1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:28:03 compute-0 podman[277550]: 2025-12-13 04:28:03.543480848 +0000 UTC m=+0.166621872 container start 1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:28:03 compute-0 podman[277550]: 2025-12-13 04:28:03.546545141 +0000 UTC m=+0.169686195 container attach 1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_matsumoto, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:28:03 compute-0 recursing_matsumoto[277566]: 167 167
Dec 13 04:28:03 compute-0 systemd[1]: libpod-1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7.scope: Deactivated successfully.
Dec 13 04:28:03 compute-0 podman[277550]: 2025-12-13 04:28:03.549384018 +0000 UTC m=+0.172525042 container died 1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_matsumoto, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:28:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5b920d90c6e9905a5672a21c7c2d0c27d53b731d47b5430ec7be339696bd8bc-merged.mount: Deactivated successfully.
Dec 13 04:28:03 compute-0 podman[277550]: 2025-12-13 04:28:03.597100726 +0000 UTC m=+0.220241750 container remove 1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_matsumoto, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:28:03 compute-0 systemd[1]: libpod-conmon-1d9fe7a7d461e5452bfa0debbf4a311a3aa090604f395da95f19c98631e791a7.scope: Deactivated successfully.
Dec 13 04:28:03 compute-0 podman[277589]: 2025-12-13 04:28:03.785564869 +0000 UTC m=+0.060761503 container create 47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_curran, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:28:03 compute-0 systemd[1]: Started libpod-conmon-47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43.scope.
Dec 13 04:28:03 compute-0 nova_compute[243704]: 2025-12-13 04:28:03.839 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:03 compute-0 podman[277589]: 2025-12-13 04:28:03.75358243 +0000 UTC m=+0.028779114 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:03 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae9cc35f9e55c10cd9b919e6a319012ac65abb316e4bab654d80f8da3b9ba70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae9cc35f9e55c10cd9b919e6a319012ac65abb316e4bab654d80f8da3b9ba70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae9cc35f9e55c10cd9b919e6a319012ac65abb316e4bab654d80f8da3b9ba70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cae9cc35f9e55c10cd9b919e6a319012ac65abb316e4bab654d80f8da3b9ba70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:03 compute-0 podman[277589]: 2025-12-13 04:28:03.870086587 +0000 UTC m=+0.145283231 container init 47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:28:03 compute-0 podman[277589]: 2025-12-13 04:28:03.878405373 +0000 UTC m=+0.153601977 container start 47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_curran, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:28:03 compute-0 podman[277589]: 2025-12-13 04:28:03.882390902 +0000 UTC m=+0.157587516 container attach 47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_curran, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:28:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:28:04 compute-0 festive_curran[277606]: {
Dec 13 04:28:04 compute-0 festive_curran[277606]:     "0": [
Dec 13 04:28:04 compute-0 festive_curran[277606]:         {
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "devices": [
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "/dev/loop3"
Dec 13 04:28:04 compute-0 festive_curran[277606]:             ],
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_name": "ceph_lv0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_size": "21470642176",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "name": "ceph_lv0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "tags": {
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cluster_name": "ceph",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.crush_device_class": "",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.encrypted": "0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.objectstore": "bluestore",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osd_id": "0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.type": "block",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.vdo": "0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.with_tpm": "0"
Dec 13 04:28:04 compute-0 festive_curran[277606]:             },
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "type": "block",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "vg_name": "ceph_vg0"
Dec 13 04:28:04 compute-0 festive_curran[277606]:         }
Dec 13 04:28:04 compute-0 festive_curran[277606]:     ],
Dec 13 04:28:04 compute-0 festive_curran[277606]:     "1": [
Dec 13 04:28:04 compute-0 festive_curran[277606]:         {
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "devices": [
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "/dev/loop4"
Dec 13 04:28:04 compute-0 festive_curran[277606]:             ],
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_name": "ceph_lv1",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_size": "21470642176",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "name": "ceph_lv1",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "tags": {
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cluster_name": "ceph",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.crush_device_class": "",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.encrypted": "0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.objectstore": "bluestore",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osd_id": "1",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.type": "block",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.vdo": "0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.with_tpm": "0"
Dec 13 04:28:04 compute-0 festive_curran[277606]:             },
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "type": "block",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "vg_name": "ceph_vg1"
Dec 13 04:28:04 compute-0 festive_curran[277606]:         }
Dec 13 04:28:04 compute-0 festive_curran[277606]:     ],
Dec 13 04:28:04 compute-0 festive_curran[277606]:     "2": [
Dec 13 04:28:04 compute-0 festive_curran[277606]:         {
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "devices": [
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "/dev/loop5"
Dec 13 04:28:04 compute-0 festive_curran[277606]:             ],
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_name": "ceph_lv2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_size": "21470642176",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "name": "ceph_lv2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "tags": {
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.cluster_name": "ceph",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.crush_device_class": "",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.encrypted": "0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.objectstore": "bluestore",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osd_id": "2",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.type": "block",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.vdo": "0",
Dec 13 04:28:04 compute-0 festive_curran[277606]:                 "ceph.with_tpm": "0"
Dec 13 04:28:04 compute-0 festive_curran[277606]:             },
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "type": "block",
Dec 13 04:28:04 compute-0 festive_curran[277606]:             "vg_name": "ceph_vg2"
Dec 13 04:28:04 compute-0 festive_curran[277606]:         }
Dec 13 04:28:04 compute-0 festive_curran[277606]:     ]
Dec 13 04:28:04 compute-0 festive_curran[277606]: }
Dec 13 04:28:04 compute-0 systemd[1]: libpod-47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43.scope: Deactivated successfully.
Dec 13 04:28:04 compute-0 podman[277589]: 2025-12-13 04:28:04.198466035 +0000 UTC m=+0.473662649 container died 47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_curran, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-cae9cc35f9e55c10cd9b919e6a319012ac65abb316e4bab654d80f8da3b9ba70-merged.mount: Deactivated successfully.
Dec 13 04:28:04 compute-0 podman[277589]: 2025-12-13 04:28:04.247414126 +0000 UTC m=+0.522610720 container remove 47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:28:04 compute-0 systemd[1]: libpod-conmon-47f58d9444b4d22038c9a6527da0bfc295a5abcc69fae2a0d40dd51a96606a43.scope: Deactivated successfully.
Dec 13 04:28:04 compute-0 sudo[277513]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:04 compute-0 sudo[277627]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:28:04 compute-0 sudo[277627]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:04 compute-0 sudo[277627]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:04 compute-0 sudo[277652]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:28:04 compute-0 sudo[277652]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:04 compute-0 nova_compute[243704]: 2025-12-13 04:28:04.591 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:04 compute-0 podman[277689]: 2025-12-13 04:28:04.77242274 +0000 UTC m=+0.049393003 container create d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:28:04 compute-0 systemd[1]: Started libpod-conmon-d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395.scope.
Dec 13 04:28:04 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:28:04 compute-0 podman[277689]: 2025-12-13 04:28:04.752123598 +0000 UTC m=+0.029093871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:04 compute-0 podman[277689]: 2025-12-13 04:28:04.846893165 +0000 UTC m=+0.123863408 container init d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mcclintock, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Dec 13 04:28:04 compute-0 podman[277689]: 2025-12-13 04:28:04.851717047 +0000 UTC m=+0.128687310 container start d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mcclintock, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:28:04 compute-0 confident_mcclintock[277706]: 167 167
Dec 13 04:28:04 compute-0 podman[277689]: 2025-12-13 04:28:04.855361195 +0000 UTC m=+0.132331458 container attach d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mcclintock, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Dec 13 04:28:04 compute-0 systemd[1]: libpod-d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395.scope: Deactivated successfully.
Dec 13 04:28:04 compute-0 podman[277689]: 2025-12-13 04:28:04.856376323 +0000 UTC m=+0.133346566 container died d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:28:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0630fcf5fea6323b53ebfeb669ed678ebcef0aea998e56c7712a4f6c7043c07-merged.mount: Deactivated successfully.
Dec 13 04:28:04 compute-0 podman[277689]: 2025-12-13 04:28:04.899163767 +0000 UTC m=+0.176134010 container remove d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=confident_mcclintock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:28:04 compute-0 systemd[1]: libpod-conmon-d6f35455a8f1400ee42cd3cce445d038f5ef52fa183bd8c661092fab4fd1d395.scope: Deactivated successfully.
Dec 13 04:28:05 compute-0 podman[277730]: 2025-12-13 04:28:05.066693581 +0000 UTC m=+0.044002297 container create 279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:28:05 compute-0 systemd[1]: Started libpod-conmon-279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d.scope.
Dec 13 04:28:05 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51e4a45d0225389be640847bf303016ccf7f3d7f02f09288a62d7494b42c192/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51e4a45d0225389be640847bf303016ccf7f3d7f02f09288a62d7494b42c192/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51e4a45d0225389be640847bf303016ccf7f3d7f02f09288a62d7494b42c192/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:05 compute-0 podman[277730]: 2025-12-13 04:28:05.044812447 +0000 UTC m=+0.022121213 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:28:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f51e4a45d0225389be640847bf303016ccf7f3d7f02f09288a62d7494b42c192/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:05 compute-0 podman[277730]: 2025-12-13 04:28:05.158589839 +0000 UTC m=+0.135898585 container init 279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:28:05 compute-0 podman[277730]: 2025-12-13 04:28:05.166063683 +0000 UTC m=+0.143372409 container start 279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:28:05 compute-0 podman[277730]: 2025-12-13 04:28:05.171408478 +0000 UTC m=+0.148717194 container attach 279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:28:05 compute-0 ceph-mon[75071]: pgmap v1806: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:28:05 compute-0 nova_compute[243704]: 2025-12-13 04:28:05.767 243708 DEBUG nova.network.neutron [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Successfully updated port: 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:28:05 compute-0 nova_compute[243704]: 2025-12-13 04:28:05.782 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:28:05 compute-0 nova_compute[243704]: 2025-12-13 04:28:05.782 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquired lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:28:05 compute-0 nova_compute[243704]: 2025-12-13 04:28:05.782 243708 DEBUG nova.network.neutron [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:28:05 compute-0 nova_compute[243704]: 2025-12-13 04:28:05.874 243708 DEBUG nova.compute.manager [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-changed-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:05 compute-0 nova_compute[243704]: 2025-12-13 04:28:05.874 243708 DEBUG nova.compute.manager [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Refreshing instance network info cache due to event network-changed-14f285e4-868c-4bf6-b8aa-a7d3339c1c45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:28:05 compute-0 nova_compute[243704]: 2025-12-13 04:28:05.874 243708 DEBUG oslo_concurrency.lockutils [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:28:05 compute-0 lvm[277825]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:28:05 compute-0 lvm[277825]: VG ceph_vg1 finished
Dec 13 04:28:05 compute-0 lvm[277824]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:28:05 compute-0 lvm[277824]: VG ceph_vg0 finished
Dec 13 04:28:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:28:05 compute-0 lvm[277827]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:28:05 compute-0 lvm[277827]: VG ceph_vg2 finished
Dec 13 04:28:06 compute-0 jovial_bassi[277746]: {}
Dec 13 04:28:06 compute-0 systemd[1]: libpod-279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d.scope: Deactivated successfully.
Dec 13 04:28:06 compute-0 podman[277730]: 2025-12-13 04:28:06.079481247 +0000 UTC m=+1.056789953 container died 279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:28:06 compute-0 systemd[1]: libpod-279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d.scope: Consumed 1.518s CPU time.
Dec 13 04:28:06 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:06.086 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-f51e4a45d0225389be640847bf303016ccf7f3d7f02f09288a62d7494b42c192-merged.mount: Deactivated successfully.
Dec 13 04:28:06 compute-0 podman[277730]: 2025-12-13 04:28:06.128724995 +0000 UTC m=+1.106033701 container remove 279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:28:06 compute-0 systemd[1]: libpod-conmon-279f790d7c456a09ad6bbccc6808b265cdd3278c203bf12f2487c8710284be9d.scope: Deactivated successfully.
Dec 13 04:28:06 compute-0 sudo[277652]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:28:06 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:28:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:28:06 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:28:06 compute-0 sudo[277842]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:28:06 compute-0 sudo[277842]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:28:06 compute-0 sudo[277842]: pam_unix(sudo:session): session closed for user root
Dec 13 04:28:06 compute-0 nova_compute[243704]: 2025-12-13 04:28:06.483 243708 DEBUG nova.network.neutron [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:28:07 compute-0 ceph-mon[75071]: pgmap v1807: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:28:07 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:28:07 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.755 243708 DEBUG nova.network.neutron [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updating instance_info_cache with network_info: [{"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.782 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Releasing lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.783 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Instance network_info: |[{"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.784 243708 DEBUG oslo_concurrency.lockutils [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.785 243708 DEBUG nova.network.neutron [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Refreshing network info cache for port 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.790 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Start _get_guest_xml network_info=[{"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-905b1d83-b68c-433b-9850-36fd7598824c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd7b7905c-6717-4ad4-b7e5-3a462d8f8b93', 'attached_at': '', 'detached_at': '', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'serial': '905b1d83-b68c-433b-9850-36fd7598824c'}, 'disk_bus': 'virtio', 'attachment_id': 'cbdb9be7-5cc6-40e6-9e78-8c1933781c30', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.797 243708 WARNING nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.803 243708 DEBUG nova.virt.libvirt.host [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.804 243708 DEBUG nova.virt.libvirt.host [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.810 243708 DEBUG nova.virt.libvirt.host [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.811 243708 DEBUG nova.virt.libvirt.host [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.812 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.812 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.813 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.813 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.813 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.813 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.814 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.814 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.814 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.814 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.815 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.815 243708 DEBUG nova.virt.hardware [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.843 243708 DEBUG nova.storage.rbd_utils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:28:07 compute-0 nova_compute[243704]: 2025-12-13 04:28:07.847 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 8.4 MiB/s wr, 29 op/s
Dec 13 04:28:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:28:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/306714816' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.375 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.511 243708 DEBUG os_brick.encryptors [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Using volume encryption metadata '{'encryption_key_id': '96637950-e516-44dc-befa-668e7f50a674', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-905b1d83-b68c-433b-9850-36fd7598824c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'd7b7905c-6717-4ad4-b7e5-3a462d8f8b93', 'attached_at': '', 'detached_at': '', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.513 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.528 243708 DEBUG barbicanclient.v1.secrets [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/96637950-e516-44dc-befa-668e7f50a674 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.529 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.567 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.568 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.598 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.599 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.633 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.634 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.661 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.662 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.688 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.689 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.720 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.720 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.758 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.758 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.779 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.780 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.801 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.802 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.834 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.834 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.841 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.869 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.870 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.891 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.893 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.930 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.930 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.950 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.950 243708 INFO barbicanclient.base [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/96637950-e516-44dc-befa-668e7f50a674
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.954 243708 DEBUG nova.network.neutron [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updated VIF entry in instance network info cache for port 14f285e4-868c-4bf6-b8aa-a7d3339c1c45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.955 243708 DEBUG nova.network.neutron [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updating instance_info_cache with network_info: [{"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.973 243708 DEBUG oslo_concurrency.lockutils [req-4d4eb525-ba8f-4794-ba8d-fd243a59b0ab req-b4ce4c80-76a9-4370-8742-d559395b8281 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.983 243708 DEBUG barbicanclient.client [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:08 compute-0 nova_compute[243704]: 2025-12-13 04:28:08.984 243708 DEBUG nova.virt.libvirt.host [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:28:08 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:28:08 compute-0 nova_compute[243704]:     <volume>905b1d83-b68c-433b-9850-36fd7598824c</volume>
Dec 13 04:28:08 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:28:08 compute-0 nova_compute[243704]: </secret>
Dec 13 04:28:08 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.060 243708 DEBUG nova.virt.libvirt.vif [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:27:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2068218761',display_name='tempest-TransferEncryptedVolumeTest-server-2068218761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2068218761',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTMOWY+zxHhuuYbmrEsaJgRE3PSdxqJ15zYyzPDCeLErvpORjNdez33Bk3TG/Gt9LpNKoYFaHiFvQPNsdImPfafvTHH9jNUqYZKtS8UFNsxrTUJ+ntIWYll6LMTTOCBjw==',key_name='tempest-TransferEncryptedVolumeTest-1635619248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-s7u0et56',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:28:02Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=d7b7905c-6717-4ad4-b7e5-3a462d8f8b93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.060 243708 DEBUG nova.network.os_vif_util [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.061 243708 DEBUG nova.network.os_vif_util [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:a3:4f,bridge_name='br-int',has_traffic_filtering=True,id=14f285e4-868c-4bf6-b8aa-a7d3339c1c45,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f285e4-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.064 243708 DEBUG nova.objects.instance [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'pci_devices' on Instance uuid d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.079 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <uuid>d7b7905c-6717-4ad4-b7e5-3a462d8f8b93</uuid>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <name>instance-0000001a</name>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-2068218761</nova:name>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:28:07</nova:creationTime>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:user uuid="deba56fa45214f28a3aab4d031dc4155">tempest-TransferEncryptedVolumeTest-1412293480-project-member</nova:user>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:project uuid="43c4864e9f844459a882a9e3d0fe477b">tempest-TransferEncryptedVolumeTest-1412293480</nova:project>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <nova:port uuid="14f285e4-868c-4bf6-b8aa-a7d3339c1c45">
Dec 13 04:28:09 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <system>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <entry name="serial">d7b7905c-6717-4ad4-b7e5-3a462d8f8b93</entry>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <entry name="uuid">d7b7905c-6717-4ad4-b7e5-3a462d8f8b93</entry>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </system>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <os>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   </os>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <features>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   </features>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_disk.config">
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </source>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-905b1d83-b68c-433b-9850-36fd7598824c">
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </source>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <serial>905b1d83-b68c-433b-9850-36fd7598824c</serial>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <encryption format="luks">
Dec 13 04:28:09 compute-0 nova_compute[243704]:         <secret type="passphrase" uuid="05a5ad24-0b7f-4664-872c-31f82dd7d5b2"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       </encryption>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:63:a3:4f"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <target dev="tap14f285e4-86"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/console.log" append="off"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <video>
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </video>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:28:09 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:28:09 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:28:09 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:28:09 compute-0 nova_compute[243704]: </domain>
Dec 13 04:28:09 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.081 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Preparing to wait for external event network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.081 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.081 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.081 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.082 243708 DEBUG nova.virt.libvirt.vif [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:27:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2068218761',display_name='tempest-TransferEncryptedVolumeTest-server-2068218761',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2068218761',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTMOWY+zxHhuuYbmrEsaJgRE3PSdxqJ15zYyzPDCeLErvpORjNdez33Bk3TG/Gt9LpNKoYFaHiFvQPNsdImPfafvTHH9jNUqYZKtS8UFNsxrTUJ+ntIWYll6LMTTOCBjw==',key_name='tempest-TransferEncryptedVolumeTest-1635619248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-s7u0et56',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:28:02Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=d7b7905c-6717-4ad4-b7e5-3a462d8f8b93,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.083 243708 DEBUG nova.network.os_vif_util [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.083 243708 DEBUG nova.network.os_vif_util [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:63:a3:4f,bridge_name='br-int',has_traffic_filtering=True,id=14f285e4-868c-4bf6-b8aa-a7d3339c1c45,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f285e4-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.084 243708 DEBUG os_vif [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:a3:4f,bridge_name='br-int',has_traffic_filtering=True,id=14f285e4-868c-4bf6-b8aa-a7d3339c1c45,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f285e4-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.085 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.086 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.086 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.092 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.093 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap14f285e4-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.093 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap14f285e4-86, col_values=(('external_ids', {'iface-id': '14f285e4-868c-4bf6-b8aa-a7d3339c1c45', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:63:a3:4f', 'vm-uuid': 'd7b7905c-6717-4ad4-b7e5-3a462d8f8b93'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.095 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.098 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:28:09 compute-0 NetworkManager[48899]: <info>  [1765600089.1011] manager: (tap14f285e4-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/130)
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.103 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.104 243708 INFO os_vif [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:63:a3:4f,bridge_name='br-int',has_traffic_filtering=True,id=14f285e4-868c-4bf6-b8aa-a7d3339c1c45,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f285e4-86')
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.174 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.175 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.175 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No VIF found with MAC fa:16:3e:63:a3:4f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.177 243708 INFO nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Using config drive
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.207 243708 DEBUG nova.storage.rbd_utils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:28:09 compute-0 ceph-mon[75071]: pgmap v1808: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 8.4 MiB/s wr, 29 op/s
Dec 13 04:28:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/306714816' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.503 243708 INFO nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Creating config drive at /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/disk.config
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.511 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo9o0x5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.639 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppo9o0x5i" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.667 243708 DEBUG nova.storage.rbd_utils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.670 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/disk.config d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.903 243708 DEBUG oslo_concurrency.processutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/disk.config d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.904 243708 INFO nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Deleting local config drive /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93/disk.config because it was imported into RBD.
Dec 13 04:28:09 compute-0 kernel: tap14f285e4-86: entered promiscuous mode
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.960 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:09 compute-0 ovn_controller[145204]: 2025-12-13T04:28:09Z|00240|binding|INFO|Claiming lport 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 for this chassis.
Dec 13 04:28:09 compute-0 ovn_controller[145204]: 2025-12-13T04:28:09Z|00241|binding|INFO|14f285e4-868c-4bf6-b8aa-a7d3339c1c45: Claiming fa:16:3e:63:a3:4f 10.100.0.7
Dec 13 04:28:09 compute-0 NetworkManager[48899]: <info>  [1765600089.9625] manager: (tap14f285e4-86): new Tun device (/org/freedesktop/NetworkManager/Devices/131)
Dec 13 04:28:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 8.4 MiB/s wr, 29 op/s
Dec 13 04:28:09 compute-0 nova_compute[243704]: 2025-12-13 04:28:09.970 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:09 compute-0 systemd-machined[206767]: New machine qemu-26-instance-0000001a.
Dec 13 04:28:09 compute-0 systemd-udevd[277979]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:28:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:09.994 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:a3:4f 10.100.0.7'], port_security=['fa:16:3e:63:a3:4f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd7b7905c-6717-4ad4-b7e5-3a462d8f8b93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0350955e-df3f-494b-94c3-1eba35bfaee3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=14f285e4-868c-4bf6-b8aa-a7d3339c1c45) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:28:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:09.996 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca bound to our chassis
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:09.999 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:28:10 compute-0 NetworkManager[48899]: <info>  [1765600090.0066] device (tap14f285e4-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:28:10 compute-0 NetworkManager[48899]: <info>  [1765600090.0093] device (tap14f285e4-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.015 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a30a403f-e06c-49a4-a337-7eb075fbfc8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.016 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2920aa7a-a1 in ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.020 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2920aa7a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.020 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a79acc-1f58-4abb-a431-a117318a6c38]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.022 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[742ae7cd-a8d0-4f63-b5bd-0a2c3003a07d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.037 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[e3b5ea8c-ae37-47b6-a173-0ab57494199c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 systemd[1]: Started Virtual Machine qemu-26-instance-0000001a.
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.055 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[437e476b-826b-45ed-9b01-743a3bca279d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_controller[145204]: 2025-12-13T04:28:10Z|00242|binding|INFO|Setting lport 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 ovn-installed in OVS
Dec 13 04:28:10 compute-0 ovn_controller[145204]: 2025-12-13T04:28:10Z|00243|binding|INFO|Setting lport 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 up in Southbound
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.060 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.090 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c6a49441-f25c-4da4-a424-120c120d2198]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.096 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[786ece40-7642-4021-b653-a238e4f75ad6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 NetworkManager[48899]: <info>  [1765600090.1002] manager: (tap2920aa7a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/132)
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.131 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c4cc8c23-3af4-489a-9467-1c3037859839]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.134 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[a9bdd57c-fd35-4eaf-b926-2f5175b3d56e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 NetworkManager[48899]: <info>  [1765600090.1592] device (tap2920aa7a-a0): carrier: link connected
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.165 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ee0172-f40d-46e2-9e06-451214defa0d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.186 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c0b2d281-a1ad-4701-9014-58d7a994bb1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482883, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278012, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.202 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[77e93716-671d-4068-a815-bd99cbb4e67e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:807b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 482883, 'tstamp': 482883}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278013, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.220 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[291ed138-5a73-481d-a2cf-b61376f5b5ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 82], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482883, 'reachable_time': 36436, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278014, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.251 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e477844d-c215-4688-bddf-fbc924ee9806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.319 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4a40bb40-c77c-4907-90f3-79ef421891ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.321 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.321 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.321 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2920aa7a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:10 compute-0 kernel: tap2920aa7a-a0: entered promiscuous mode
Dec 13 04:28:10 compute-0 NetworkManager[48899]: <info>  [1765600090.3249] manager: (tap2920aa7a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/133)
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.327 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2920aa7a-a0, col_values=(('external_ids', {'iface-id': 'ccd83819-bc00-4ecd-ab1d-315a75379aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.327 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:10 compute-0 ovn_controller[145204]: 2025-12-13T04:28:10Z|00244|binding|INFO|Releasing lport ccd83819-bc00-4ecd-ab1d-315a75379aaa from this chassis (sb_readonly=0)
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.330 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.331 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6c8263fc-e538-4509-bfa7-eef98ca5a7cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.332 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:28:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:10.333 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'env', 'PROCESS_TAG=haproxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2920aa7a-a9cb-45da-a971-38a7ffed2fca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.343 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.677 243708 DEBUG nova.compute.manager [req-51fb03c1-5a3d-4192-bbe2-08f099317cb4 req-85449905-31ad-4240-8c57-edc30bf80f1a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.678 243708 DEBUG oslo_concurrency.lockutils [req-51fb03c1-5a3d-4192-bbe2-08f099317cb4 req-85449905-31ad-4240-8c57-edc30bf80f1a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.679 243708 DEBUG oslo_concurrency.lockutils [req-51fb03c1-5a3d-4192-bbe2-08f099317cb4 req-85449905-31ad-4240-8c57-edc30bf80f1a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.679 243708 DEBUG oslo_concurrency.lockutils [req-51fb03c1-5a3d-4192-bbe2-08f099317cb4 req-85449905-31ad-4240-8c57-edc30bf80f1a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:10 compute-0 nova_compute[243704]: 2025-12-13 04:28:10.679 243708 DEBUG nova.compute.manager [req-51fb03c1-5a3d-4192-bbe2-08f099317cb4 req-85449905-31ad-4240-8c57-edc30bf80f1a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Processing event network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:28:10 compute-0 podman[278046]: 2025-12-13 04:28:10.75887977 +0000 UTC m=+0.050323259 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:28:10 compute-0 podman[278046]: 2025-12-13 04:28:10.884099045 +0000 UTC m=+0.175542504 container create 6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 04:28:10 compute-0 systemd[1]: Started libpod-conmon-6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a.scope.
Dec 13 04:28:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:28:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/80a729a01d8660648c34af9aa0db2dd3190def22558341a28be57d892ab94a0e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:28:11 compute-0 podman[278046]: 2025-12-13 04:28:11.091848153 +0000 UTC m=+0.383291602 container init 6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:28:11 compute-0 podman[278046]: 2025-12-13 04:28:11.10238473 +0000 UTC m=+0.393828179 container start 6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:28:11 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278061]: [NOTICE]   (278090) : New worker (278103) forked
Dec 13 04:28:11 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278061]: [NOTICE]   (278090) : Loading success.
Dec 13 04:28:11 compute-0 ceph-mon[75071]: pgmap v1809: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 8.4 MiB/s wr, 29 op/s
Dec 13 04:28:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 2.5 MiB/s wr, 1 op/s
Dec 13 04:28:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:12 compute-0 nova_compute[243704]: 2025-12-13 04:28:12.756 243708 DEBUG nova.compute.manager [req-c3f9ade5-cc98-41b9-9cae-77e44f297f29 req-1311b4f8-3586-4be1-8cbc-f2e366ca4993 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:12 compute-0 nova_compute[243704]: 2025-12-13 04:28:12.757 243708 DEBUG oslo_concurrency.lockutils [req-c3f9ade5-cc98-41b9-9cae-77e44f297f29 req-1311b4f8-3586-4be1-8cbc-f2e366ca4993 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:12 compute-0 nova_compute[243704]: 2025-12-13 04:28:12.757 243708 DEBUG oslo_concurrency.lockutils [req-c3f9ade5-cc98-41b9-9cae-77e44f297f29 req-1311b4f8-3586-4be1-8cbc-f2e366ca4993 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:12 compute-0 nova_compute[243704]: 2025-12-13 04:28:12.757 243708 DEBUG oslo_concurrency.lockutils [req-c3f9ade5-cc98-41b9-9cae-77e44f297f29 req-1311b4f8-3586-4be1-8cbc-f2e366ca4993 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:12 compute-0 nova_compute[243704]: 2025-12-13 04:28:12.758 243708 DEBUG nova.compute.manager [req-c3f9ade5-cc98-41b9-9cae-77e44f297f29 req-1311b4f8-3586-4be1-8cbc-f2e366ca4993 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] No waiting events found dispatching network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:28:12 compute-0 nova_compute[243704]: 2025-12-13 04:28:12.758 243708 WARNING nova.compute.manager [req-c3f9ade5-cc98-41b9-9cae-77e44f297f29 req-1311b4f8-3586-4be1-8cbc-f2e366ca4993 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received unexpected event network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 for instance with vm_state building and task_state spawning.
Dec 13 04:28:13 compute-0 ceph-mon[75071]: pgmap v1810: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 2.5 MiB/s wr, 1 op/s
Dec 13 04:28:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.595 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600093.5944881, d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.597 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] VM Started (Lifecycle Event)
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.602 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.607 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.613 243708 INFO nova.virt.libvirt.driver [-] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Instance spawned successfully.
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.613 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.631 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.640 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.646 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.647 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.647 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.648 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.648 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.649 243708 DEBUG nova.virt.libvirt.driver [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.672 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.673 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600093.5948365, d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.673 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] VM Paused (Lifecycle Event)
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.690 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.696 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600093.6064038, d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.696 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] VM Resumed (Lifecycle Event)
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.843 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.848 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.852 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:28:13 compute-0 nova_compute[243704]: 2025-12-13 04:28:13.869 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:28:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 255 B/s wr, 5 op/s
Dec 13 04:28:14 compute-0 nova_compute[243704]: 2025-12-13 04:28:14.084 243708 INFO nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Took 10.75 seconds to spawn the instance on the hypervisor.
Dec 13 04:28:14 compute-0 nova_compute[243704]: 2025-12-13 04:28:14.084 243708 DEBUG nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:28:14 compute-0 nova_compute[243704]: 2025-12-13 04:28:14.095 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:14 compute-0 nova_compute[243704]: 2025-12-13 04:28:14.383 243708 INFO nova.compute.manager [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Took 13.32 seconds to build instance.
Dec 13 04:28:14 compute-0 nova_compute[243704]: 2025-12-13 04:28:14.676 243708 DEBUG oslo_concurrency.lockutils [None req-5e8e280b-2b8e-473a-9f89-e99f8dec27e6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.670s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:14 compute-0 ceph-mon[75071]: pgmap v1811: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 255 B/s wr, 5 op/s
Dec 13 04:28:15 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Dec 13 04:28:17 compute-0 ceph-mon[75071]: pgmap v1812: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Dec 13 04:28:17 compute-0 podman[278119]: 2025-12-13 04:28:17.956595764 +0000 UTC m=+0.098377747 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 04:28:17 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Dec 13 04:28:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:18 compute-0 nova_compute[243704]: 2025-12-13 04:28:18.983 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:19 compute-0 nova_compute[243704]: 2025-12-13 04:28:19.097 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:19 compute-0 ceph-mon[75071]: pgmap v1813: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 12 KiB/s wr, 61 op/s
Dec 13 04:28:19 compute-0 NetworkManager[48899]: <info>  [1765600099.8414] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/134)
Dec 13 04:28:19 compute-0 NetworkManager[48899]: <info>  [1765600099.8423] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/135)
Dec 13 04:28:19 compute-0 nova_compute[243704]: 2025-12-13 04:28:19.846 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:19 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:28:20 compute-0 nova_compute[243704]: 2025-12-13 04:28:20.005 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:20 compute-0 ovn_controller[145204]: 2025-12-13T04:28:20Z|00245|binding|INFO|Releasing lport ccd83819-bc00-4ecd-ab1d-315a75379aaa from this chassis (sb_readonly=0)
Dec 13 04:28:20 compute-0 nova_compute[243704]: 2025-12-13 04:28:20.076 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:20 compute-0 nova_compute[243704]: 2025-12-13 04:28:20.319 243708 DEBUG nova.compute.manager [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-changed-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:20 compute-0 nova_compute[243704]: 2025-12-13 04:28:20.319 243708 DEBUG nova.compute.manager [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Refreshing instance network info cache due to event network-changed-14f285e4-868c-4bf6-b8aa-a7d3339c1c45. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:28:20 compute-0 nova_compute[243704]: 2025-12-13 04:28:20.320 243708 DEBUG oslo_concurrency.lockutils [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:28:20 compute-0 nova_compute[243704]: 2025-12-13 04:28:20.320 243708 DEBUG oslo_concurrency.lockutils [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:28:20 compute-0 nova_compute[243704]: 2025-12-13 04:28:20.320 243708 DEBUG nova.network.neutron [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Refreshing network info cache for port 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:28:21 compute-0 nova_compute[243704]: 2025-12-13 04:28:21.326 243708 DEBUG nova.network.neutron [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updated VIF entry in instance network info cache for port 14f285e4-868c-4bf6-b8aa-a7d3339c1c45. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:28:21 compute-0 nova_compute[243704]: 2025-12-13 04:28:21.329 243708 DEBUG nova.network.neutron [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updating instance_info_cache with network_info: [{"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:28:21 compute-0 nova_compute[243704]: 2025-12-13 04:28:21.406 243708 DEBUG oslo_concurrency.lockutils [req-5bd4a59a-d71a-4c2b-ad03-09fec8b8b5ab req-f456fd9a-f161-45c9-a0d3-c2da1179098f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:28:21 compute-0 ceph-mon[75071]: pgmap v1814: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:28:21 compute-0 nova_compute[243704]: 2025-12-13 04:28:21.887 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:21 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:28:22 compute-0 nova_compute[243704]: 2025-12-13 04:28:22.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:22 compute-0 nova_compute[243704]: 2025-12-13 04:28:22.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:28:22 compute-0 nova_compute[243704]: 2025-12-13 04:28:22.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:28:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:23 compute-0 ceph-mon[75071]: pgmap v1815: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:28:23 compute-0 nova_compute[243704]: 2025-12-13 04:28:23.727 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:28:23 compute-0 nova_compute[243704]: 2025-12-13 04:28:23.728 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:28:23 compute-0 nova_compute[243704]: 2025-12-13 04:28:23.728 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:28:23 compute-0 nova_compute[243704]: 2025-12-13 04:28:23.729 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:28:23 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 13 04:28:23 compute-0 nova_compute[243704]: 2025-12-13 04:28:23.987 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:24 compute-0 nova_compute[243704]: 2025-12-13 04:28:24.099 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:25 compute-0 ceph-mon[75071]: pgmap v1816: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 13 04:28:25 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 71 op/s
Dec 13 04:28:25 compute-0 nova_compute[243704]: 2025-12-13 04:28:25.978 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updating instance_info_cache with network_info: [{"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:28:25 compute-0 nova_compute[243704]: 2025-12-13 04:28:25.990 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:28:25 compute-0 nova_compute[243704]: 2025-12-13 04:28:25.990 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:28:25 compute-0 nova_compute[243704]: 2025-12-13 04:28:25.991 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:25 compute-0 nova_compute[243704]: 2025-12-13 04:28:25.991 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:25 compute-0 nova_compute[243704]: 2025-12-13 04:28:25.991 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.011 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.011 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.011 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.011 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.012 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:28:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2295959659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.637 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.717 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.718 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.916 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.918 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4093MB free_disk=59.98791501112282GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.918 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:26 compute-0 nova_compute[243704]: 2025-12-13 04:28:26.919 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:27 compute-0 nova_compute[243704]: 2025-12-13 04:28:27.020 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:28:27 compute-0 ovn_controller[145204]: 2025-12-13T04:28:27Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:63:a3:4f 10.100.0.7
Dec 13 04:28:27 compute-0 nova_compute[243704]: 2025-12-13 04:28:27.021 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:28:27 compute-0 ovn_controller[145204]: 2025-12-13T04:28:27Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:63:a3:4f 10.100.0.7
Dec 13 04:28:27 compute-0 nova_compute[243704]: 2025-12-13 04:28:27.021 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:28:27 compute-0 nova_compute[243704]: 2025-12-13 04:28:27.074 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:27 compute-0 ceph-mon[75071]: pgmap v1817: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 12 KiB/s wr, 71 op/s
Dec 13 04:28:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2295959659' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:28:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487539375' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:27 compute-0 nova_compute[243704]: 2025-12-13 04:28:27.949 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.876s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:27 compute-0 nova_compute[243704]: 2025-12-13 04:28:27.958 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:28:27 compute-0 nova_compute[243704]: 2025-12-13 04:28:27.971 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:28:27 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 442 KiB/s rd, 15 op/s
Dec 13 04:28:28 compute-0 nova_compute[243704]: 2025-12-13 04:28:28.299 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:28:28 compute-0 nova_compute[243704]: 2025-12-13 04:28:28.300 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:28 compute-0 nova_compute[243704]: 2025-12-13 04:28:28.989 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:29 compute-0 nova_compute[243704]: 2025-12-13 04:28:29.101 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:29 compute-0 nova_compute[243704]: 2025-12-13 04:28:29.185 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:29 compute-0 nova_compute[243704]: 2025-12-13 04:28:29.186 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:29 compute-0 podman[278191]: 2025-12-13 04:28:29.897187737 +0000 UTC m=+0.049346843 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:28:29 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 385 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 687 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Dec 13 04:28:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1487539375' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:30 compute-0 nova_compute[243704]: 2025-12-13 04:28:30.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:31 compute-0 ceph-mon[75071]: pgmap v1818: 305 pgs: 305 active+clean; 385 MiB data, 674 MiB used, 59 GiB / 60 GiB avail; 442 KiB/s rd, 15 op/s
Dec 13 04:28:31 compute-0 ceph-mon[75071]: pgmap v1819: 305 pgs: 305 active+clean; 385 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 687 KiB/s rd, 1.7 MiB/s wr, 51 op/s
Dec 13 04:28:31 compute-0 podman[278210]: 2025-12-13 04:28:31.951753627 +0000 UTC m=+0.100924515 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd)
Dec 13 04:28:31 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 389 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 493 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 13 04:28:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:33.665074) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600113665163, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1555, "num_deletes": 260, "total_data_size": 2377302, "memory_usage": 2411696, "flush_reason": "Manual Compaction"}
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600113826988, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 2330307, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34731, "largest_seqno": 36285, "table_properties": {"data_size": 2322926, "index_size": 4324, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15550, "raw_average_key_size": 20, "raw_value_size": 2308060, "raw_average_value_size": 3001, "num_data_blocks": 192, "num_entries": 769, "num_filter_entries": 769, "num_deletions": 260, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765599972, "oldest_key_time": 1765599972, "file_creation_time": 1765600113, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 162032 microseconds, and 5859 cpu microseconds.
Dec 13 04:28:33 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:28:33 compute-0 nova_compute[243704]: 2025-12-13 04:28:33.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:33 compute-0 nova_compute[243704]: 2025-12-13 04:28:33.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:28:33 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 406 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 3.0 MiB/s wr, 53 op/s
Dec 13 04:28:33 compute-0 nova_compute[243704]: 2025-12-13 04:28:33.991 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:34 compute-0 nova_compute[243704]: 2025-12-13 04:28:34.102 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:34 compute-0 ceph-mon[75071]: pgmap v1820: 305 pgs: 305 active+clean; 389 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 493 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:33.827097) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 2330307 bytes OK
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:33.827129) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.352244) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.352283) EVENT_LOG_v1 {"time_micros": 1765600114352272, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.352311) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 2370405, prev total WAL file size 2371714, number of live WAL files 2.
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.353449) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031303130' seq:72057594037927935, type:22 .. '6C6F676D0031323631' seq:0, type:0; will stop at (end)
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(2275KB)], [71(9647KB)]
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600114353539, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 12209701, "oldest_snapshot_seqno": -1}
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6875 keys, 12062448 bytes, temperature: kUnknown
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600114671547, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 12062448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12009218, "index_size": 34942, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17221, "raw_key_size": 173134, "raw_average_key_size": 25, "raw_value_size": 11878536, "raw_average_value_size": 1727, "num_data_blocks": 1403, "num_entries": 6875, "num_filter_entries": 6875, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765600114, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.672005) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 12062448 bytes
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.769030) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 38.4 rd, 37.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 9.4 +0.0 blob) out(11.5 +0.0 blob), read-write-amplify(10.4) write-amplify(5.2) OK, records in: 7407, records dropped: 532 output_compression: NoCompression
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.769118) EVENT_LOG_v1 {"time_micros": 1765600114769099, "job": 40, "event": "compaction_finished", "compaction_time_micros": 318159, "compaction_time_cpu_micros": 48558, "output_level": 6, "num_output_files": 1, "total_output_size": 12062448, "num_input_records": 7407, "num_output_records": 6875, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600114769914, "job": 40, "event": "table_file_deletion", "file_number": 73}
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600114773103, "job": 40, "event": "table_file_deletion", "file_number": 71}
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.353279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.773179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.773184) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.773186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.773188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:28:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:28:34.773190) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:28:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:35.105 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:35.106 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:35.107 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:35 compute-0 ceph-mon[75071]: pgmap v1821: 305 pgs: 305 active+clean; 406 MiB data, 702 MiB used, 59 GiB / 60 GiB avail; 518 KiB/s rd, 3.0 MiB/s wr, 53 op/s
Dec 13 04:28:35 compute-0 nova_compute[243704]: 2025-12-13 04:28:35.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:28:35 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 427 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 4.0 MiB/s wr, 67 op/s
Dec 13 04:28:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:28:35 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.481 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.481 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.482 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.483 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.483 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.486 243708 INFO nova.compute.manager [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Terminating instance
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.488 243708 DEBUG nova.compute.manager [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:28:36 compute-0 kernel: tap14f285e4-86 (unregistering): left promiscuous mode
Dec 13 04:28:36 compute-0 NetworkManager[48899]: <info>  [1765600116.6684] device (tap14f285e4-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:28:36 compute-0 ovn_controller[145204]: 2025-12-13T04:28:36Z|00246|binding|INFO|Releasing lport 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 from this chassis (sb_readonly=0)
Dec 13 04:28:36 compute-0 ovn_controller[145204]: 2025-12-13T04:28:36Z|00247|binding|INFO|Setting lport 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 down in Southbound
Dec 13 04:28:36 compute-0 ovn_controller[145204]: 2025-12-13T04:28:36Z|00248|binding|INFO|Removing iface tap14f285e4-86 ovn-installed in OVS
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.677 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.685 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:63:a3:4f 10.100.0.7'], port_security=['fa:16:3e:63:a3:4f 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd7b7905c-6717-4ad4-b7e5-3a462d8f8b93', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0350955e-df3f-494b-94c3-1eba35bfaee3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.246'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=14f285e4-868c-4bf6-b8aa-a7d3339c1c45) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.686 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 14f285e4-868c-4bf6-b8aa-a7d3339c1c45 in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca unbound from our chassis
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.687 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.688 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b543e479-2647-4b3d-bdc0-165e6f05ffdb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.688 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace which is not needed anymore
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.693 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Dec 13 04:28:36 compute-0 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d0000001a.scope: Consumed 16.509s CPU time.
Dec 13 04:28:36 compute-0 systemd-machined[206767]: Machine qemu-26-instance-0000001a terminated.
Dec 13 04:28:36 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278061]: [NOTICE]   (278090) : haproxy version is 2.8.14-c23fe91
Dec 13 04:28:36 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278061]: [NOTICE]   (278090) : path to executable is /usr/sbin/haproxy
Dec 13 04:28:36 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278061]: [WARNING]  (278090) : Exiting Master process...
Dec 13 04:28:36 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278061]: [ALERT]    (278090) : Current worker (278103) exited with code 143 (Terminated)
Dec 13 04:28:36 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278061]: [WARNING]  (278090) : All workers exited. Exiting... (0)
Dec 13 04:28:36 compute-0 systemd[1]: libpod-6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a.scope: Deactivated successfully.
Dec 13 04:28:36 compute-0 podman[278253]: 2025-12-13 04:28:36.82642349 +0000 UTC m=+0.047026369 container died 6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a-userdata-shm.mount: Deactivated successfully.
Dec 13 04:28:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-80a729a01d8660648c34af9aa0db2dd3190def22558341a28be57d892ab94a0e-merged.mount: Deactivated successfully.
Dec 13 04:28:36 compute-0 podman[278253]: 2025-12-13 04:28:36.869072251 +0000 UTC m=+0.089675130 container cleanup 6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:28:36 compute-0 systemd[1]: libpod-conmon-6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a.scope: Deactivated successfully.
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.915 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.923 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.939 243708 INFO nova.virt.libvirt.driver [-] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Instance destroyed successfully.
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.940 243708 DEBUG nova.objects.instance [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'resources' on Instance uuid d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:28:36 compute-0 podman[278281]: 2025-12-13 04:28:36.943155074 +0000 UTC m=+0.050882474 container remove 6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.949 243708 DEBUG nova.compute.manager [req-172a8f1b-cb9e-4dee-a6b3-f3d38dff3abf req-5093ab6c-a83a-40ff-9505-47d8b668cd79 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-vif-unplugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.951 243708 DEBUG oslo_concurrency.lockutils [req-172a8f1b-cb9e-4dee-a6b3-f3d38dff3abf req-5093ab6c-a83a-40ff-9505-47d8b668cd79 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.951 243708 DEBUG oslo_concurrency.lockutils [req-172a8f1b-cb9e-4dee-a6b3-f3d38dff3abf req-5093ab6c-a83a-40ff-9505-47d8b668cd79 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.951 243708 DEBUG oslo_concurrency.lockutils [req-172a8f1b-cb9e-4dee-a6b3-f3d38dff3abf req-5093ab6c-a83a-40ff-9505-47d8b668cd79 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.951 243708 DEBUG nova.compute.manager [req-172a8f1b-cb9e-4dee-a6b3-f3d38dff3abf req-5093ab6c-a83a-40ff-9505-47d8b668cd79 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] No waiting events found dispatching network-vif-unplugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.952 243708 DEBUG nova.compute.manager [req-172a8f1b-cb9e-4dee-a6b3-f3d38dff3abf req-5093ab6c-a83a-40ff-9505-47d8b668cd79 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-vif-unplugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.952 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8656763f-24c9-400d-9a8b-c32f72bbfa7f]: (4, ('Sat Dec 13 04:28:36 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a)\n6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a\nSat Dec 13 04:28:36 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a)\n6abdaaedd5bff0dc5b0c30695ce92799df3a815e3b7044eae35952be3877050a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.953 243708 DEBUG nova.virt.libvirt.vif [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:27:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-2068218761',display_name='tempest-TransferEncryptedVolumeTest-server-2068218761',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-2068218761',id=26,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTMOWY+zxHhuuYbmrEsaJgRE3PSdxqJ15zYyzPDCeLErvpORjNdez33Bk3TG/Gt9LpNKoYFaHiFvQPNsdImPfafvTHH9jNUqYZKtS8UFNsxrTUJ+ntIWYll6LMTTOCBjw==',key_name='tempest-TransferEncryptedVolumeTest-1635619248',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:28:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-s7u0et56',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:28:14Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=d7b7905c-6717-4ad4-b7e5-3a462d8f8b93,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.954 243708 DEBUG nova.network.os_vif_util [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "address": "fa:16:3e:63:a3:4f", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap14f285e4-86", "ovs_interfaceid": "14f285e4-868c-4bf6-b8aa-a7d3339c1c45", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.954 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7a683f-a6c0-4163-96c2-729d3affd13d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.954 243708 DEBUG nova.network.os_vif_util [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:63:a3:4f,bridge_name='br-int',has_traffic_filtering=True,id=14f285e4-868c-4bf6-b8aa-a7d3339c1c45,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f285e4-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.955 243708 DEBUG os_vif [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:a3:4f,bridge_name='br-int',has_traffic_filtering=True,id=14f285e4-868c-4bf6-b8aa-a7d3339c1c45,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f285e4-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.956 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.958 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 kernel: tap2920aa7a-a0: left promiscuous mode
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.959 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap14f285e4-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.960 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.961 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.963 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.976 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.977 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:36 compute-0 nova_compute[243704]: 2025-12-13 04:28:36.980 243708 INFO os_vif [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:63:a3:4f,bridge_name='br-int',has_traffic_filtering=True,id=14f285e4-868c-4bf6-b8aa-a7d3339c1c45,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap14f285e4-86')
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.979 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0a24c049-2df6-4ea4-85a6-6558ad3381be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.992 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d4a9f1-867d-4b5e-8d64-e5558dfde1cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:36 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:36.994 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[5aaf5722-369e-46c7-ac6f-ce3231ce7471]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:37.014 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[df52a4ff-235e-4287-91ed-31cd79a51f7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 482876, 'reachable_time': 26750, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278312, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:37.018 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:28:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:37.018 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[608aefd8-a79b-4fa2-9e51-37d2ae3cf99e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:37 compute-0 systemd[1]: run-netns-ovnmeta\x2d2920aa7a\x2da9cb\x2d45da\x2da971\x2d38a7ffed2fca.mount: Deactivated successfully.
Dec 13 04:28:37 compute-0 nova_compute[243704]: 2025-12-13 04:28:37.131 243708 INFO nova.virt.libvirt.driver [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Deleting instance files /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_del
Dec 13 04:28:37 compute-0 nova_compute[243704]: 2025-12-13 04:28:37.133 243708 INFO nova.virt.libvirt.driver [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Deletion of /var/lib/nova/instances/d7b7905c-6717-4ad4-b7e5-3a462d8f8b93_del complete
Dec 13 04:28:37 compute-0 ceph-mon[75071]: pgmap v1822: 305 pgs: 305 active+clean; 427 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 540 KiB/s rd, 4.0 MiB/s wr, 67 op/s
Dec 13 04:28:37 compute-0 nova_compute[243704]: 2025-12-13 04:28:37.225 243708 INFO nova.compute.manager [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Took 0.74 seconds to destroy the instance on the hypervisor.
Dec 13 04:28:37 compute-0 nova_compute[243704]: 2025-12-13 04:28:37.227 243708 DEBUG oslo.service.loopingcall [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:28:37 compute-0 nova_compute[243704]: 2025-12-13 04:28:37.228 243708 DEBUG nova.compute.manager [-] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:28:37 compute-0 nova_compute[243704]: 2025-12-13 04:28:37.228 243708 DEBUG nova.network.neutron [-] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:28:37 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 427 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 4.0 MiB/s wr, 63 op/s
Dec 13 04:28:38 compute-0 nova_compute[243704]: 2025-12-13 04:28:38.232 243708 DEBUG nova.network.neutron [-] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:28:38 compute-0 nova_compute[243704]: 2025-12-13 04:28:38.244 243708 INFO nova.compute.manager [-] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Took 1.02 seconds to deallocate network for instance.
Dec 13 04:28:38 compute-0 nova_compute[243704]: 2025-12-13 04:28:38.464 243708 INFO nova.compute.manager [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Took 0.22 seconds to detach 1 volumes for instance.
Dec 13 04:28:38 compute-0 nova_compute[243704]: 2025-12-13 04:28:38.537 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:38 compute-0 nova_compute[243704]: 2025-12-13 04:28:38.538 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:38 compute-0 nova_compute[243704]: 2025-12-13 04:28:38.585 243708 DEBUG oslo_concurrency.processutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:38 compute-0 nova_compute[243704]: 2025-12-13 04:28:38.992 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.041 243708 DEBUG nova.compute.manager [req-5edffa38-d172-4d74-a035-d3564031d7b1 req-dfff210b-f2b3-48a8-99ae-875aa901c31e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.042 243708 DEBUG oslo_concurrency.lockutils [req-5edffa38-d172-4d74-a035-d3564031d7b1 req-dfff210b-f2b3-48a8-99ae-875aa901c31e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.042 243708 DEBUG oslo_concurrency.lockutils [req-5edffa38-d172-4d74-a035-d3564031d7b1 req-dfff210b-f2b3-48a8-99ae-875aa901c31e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.043 243708 DEBUG oslo_concurrency.lockutils [req-5edffa38-d172-4d74-a035-d3564031d7b1 req-dfff210b-f2b3-48a8-99ae-875aa901c31e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.043 243708 DEBUG nova.compute.manager [req-5edffa38-d172-4d74-a035-d3564031d7b1 req-dfff210b-f2b3-48a8-99ae-875aa901c31e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] No waiting events found dispatching network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.043 243708 WARNING nova.compute.manager [req-5edffa38-d172-4d74-a035-d3564031d7b1 req-dfff210b-f2b3-48a8-99ae-875aa901c31e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received unexpected event network-vif-plugged-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 for instance with vm_state deleted and task_state None.
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.044 243708 DEBUG nova.compute.manager [req-5edffa38-d172-4d74-a035-d3564031d7b1 req-dfff210b-f2b3-48a8-99ae-875aa901c31e 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Received event network-vif-deleted-14f285e4-868c-4bf6-b8aa-a7d3339c1c45 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:28:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305799649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.128 243708 DEBUG oslo_concurrency.processutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.136 243708 DEBUG nova.compute.provider_tree [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.148 243708 DEBUG nova.scheduler.client.report [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:28:39 compute-0 ceph-mon[75071]: pgmap v1823: 305 pgs: 305 active+clean; 427 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 426 KiB/s rd, 4.0 MiB/s wr, 63 op/s
Dec 13 04:28:39 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2305799649' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.175 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.637s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.209 243708 INFO nova.scheduler.client.report [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Deleted allocations for instance d7b7905c-6717-4ad4-b7e5-3a462d8f8b93
Dec 13 04:28:39 compute-0 nova_compute[243704]: 2025-12-13 04:28:39.272 243708 DEBUG oslo_concurrency.lockutils [None req-23941c09-f86f-43e4-8ed3-3bba7ffb3647 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "d7b7905c-6717-4ad4-b7e5-3a462d8f8b93" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:39 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 442 KiB/s rd, 5.8 MiB/s wr, 87 op/s
Dec 13 04:28:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:28:40
Dec 13 04:28:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:28:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:28:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root']
Dec 13 04:28:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:28:41 compute-0 ceph-mon[75071]: pgmap v1824: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 442 KiB/s rd, 5.8 MiB/s wr, 87 op/s
Dec 13 04:28:41 compute-0 nova_compute[243704]: 2025-12-13 04:28:41.960 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:41 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 4.1 MiB/s wr, 52 op/s
Dec 13 04:28:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:28:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:28:43 compute-0 ceph-mon[75071]: pgmap v1825: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 197 KiB/s rd, 4.1 MiB/s wr, 52 op/s
Dec 13 04:28:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:43 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 3.7 MiB/s wr, 43 op/s
Dec 13 04:28:43 compute-0 nova_compute[243704]: 2025-12-13 04:28:43.994 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:45 compute-0 ceph-mon[75071]: pgmap v1826: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 63 KiB/s rd, 3.7 MiB/s wr, 43 op/s
Dec 13 04:28:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:28:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/908508156' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:28:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:28:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/908508156' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:28:45 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Dec 13 04:28:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/908508156' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:28:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/908508156' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:28:46 compute-0 nova_compute[243704]: 2025-12-13 04:28:46.962 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:47 compute-0 ceph-mon[75071]: pgmap v1827: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 2.8 MiB/s wr, 39 op/s
Dec 13 04:28:47 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Dec 13 04:28:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:48 compute-0 podman[278348]: 2025-12-13 04:28:48.958704316 +0000 UTC m=+0.098505529 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec 13 04:28:48 compute-0 nova_compute[243704]: 2025-12-13 04:28:48.996 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:49 compute-0 ceph-mon[75071]: pgmap v1828: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Dec 13 04:28:49 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Dec 13 04:28:50 compute-0 nova_compute[243704]: 2025-12-13 04:28:50.866 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:50 compute-0 nova_compute[243704]: 2025-12-13 04:28:50.866 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:50 compute-0 nova_compute[243704]: 2025-12-13 04:28:50.879 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:28:50 compute-0 nova_compute[243704]: 2025-12-13 04:28:50.953 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:50 compute-0 nova_compute[243704]: 2025-12-13 04:28:50.953 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:50 compute-0 nova_compute[243704]: 2025-12-13 04:28:50.964 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:28:50 compute-0 nova_compute[243704]: 2025-12-13 04:28:50.964 243708 INFO nova.compute.claims [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.179 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:51 compute-0 ceph-mon[75071]: pgmap v1829: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Dec 13 04:28:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:28:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2440239036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.763 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.769 243708 DEBUG nova.compute.provider_tree [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.786 243708 DEBUG nova.scheduler.client.report [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.808 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.855s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.809 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.843 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.843 243708 DEBUG nova.network.neutron [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.857 243708 INFO nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.867 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.907 243708 INFO nova.virt.block_device [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Booting with volume 905b1d83-b68c-433b-9850-36fd7598824c at /dev/vda
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.937 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765600116.9365537, d7b7905c-6717-4ad4-b7e5-3a462d8f8b93 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.937 243708 INFO nova.compute.manager [-] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] VM Stopped (Lifecycle Event)
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.953 243708 DEBUG nova.compute.manager [None req-7637a1e3-a5bb-435e-b4e0-e521138c9520 - - - - - -] [instance: d7b7905c-6717-4ad4-b7e5-3a462d8f8b93] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:28:51 compute-0 nova_compute[243704]: 2025-12-13 04:28:51.963 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:51 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.033 243708 DEBUG os_brick.utils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.034 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.046 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.047 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[4b83d922-3d3f-4ab5-abd7-1aa0971b10ab]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.048 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.056 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.056 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[663bd966-fd28-415e-a0b4-af9ed649238d]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.058 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.066 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.067 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[17bd6428-5bca-4e08-bfc0-daa2976c7190]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.069 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[398982ab-6d6a-423b-a5ff-c617095fa8eb]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.070 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.103 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "nvme version" returned: 0 in 0.033s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.109 243708 DEBUG os_brick.initiator.connectors.lightos [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.110 243708 DEBUG os_brick.initiator.connectors.lightos [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.110 243708 DEBUG os_brick.initiator.connectors.lightos [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.112 243708 DEBUG os_brick.utils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] <== get_connector_properties: return (78ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.113 243708 DEBUG nova.virt.block_device [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updating existing volume attachment record: f1b29b28-afbe-4e32-a678-4b7f0e6ff6f6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:28:52 compute-0 nova_compute[243704]: 2025-12-13 04:28:52.347 243708 DEBUG nova.policy [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'deba56fa45214f28a3aab4d031dc4155', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43c4864e9f844459a882a9e3d0fe477b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:28:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2440239036' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.7070774446748125e-06 of space, bias 1.0, pg target 0.0011121232334024437 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005453255397466009 of space, bias 1.0, pg target 1.6359766192398026 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4757992594846177e-06 of space, bias 1.0, pg target 0.0007402639785859007 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667026483260414 of space, bias 1.0, pg target 0.1993440918494864 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3115272986482904e-06 of space, bias 4.0, pg target 0.0015685866491833554 quantized to 16 (current 16)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:28:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 04:28:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:28:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1614862031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.026 243708 DEBUG nova.network.neutron [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Successfully created port: f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.106 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.107 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.108 243708 INFO nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Creating image(s)
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.108 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.109 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Ensure instance console log exists: /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.109 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.109 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.110 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:53 compute-0 ceph-mon[75071]: pgmap v1830: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 511 B/s rd, 4.7 KiB/s wr, 1 op/s
Dec 13 04:28:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1614862031' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.800 243708 DEBUG nova.network.neutron [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Successfully updated port: f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.818 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.818 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquired lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.818 243708 DEBUG nova.network.neutron [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.871 243708 DEBUG nova.compute.manager [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-changed-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.871 243708 DEBUG nova.compute.manager [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Refreshing instance network info cache due to event network-changed-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:28:53 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.872 243708 DEBUG oslo_concurrency.lockutils [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:28:53 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1831: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:54 compute-0 nova_compute[243704]: 2025-12-13 04:28:53.999 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:54 compute-0 nova_compute[243704]: 2025-12-13 04:28:54.756 243708 DEBUG nova.network.neutron [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:28:55 compute-0 ceph-mon[75071]: pgmap v1831: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:55 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:56 compute-0 nova_compute[243704]: 2025-12-13 04:28:56.964 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:56 compute-0 nova_compute[243704]: 2025-12-13 04:28:56.993 243708 DEBUG nova.network.neutron [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updating instance_info_cache with network_info: [{"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.008 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Releasing lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.008 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Instance network_info: |[{"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.009 243708 DEBUG oslo_concurrency.lockutils [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.009 243708 DEBUG nova.network.neutron [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Refreshing network info cache for port f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.013 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Start _get_guest_xml network_info=[{"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-905b1d83-b68c-433b-9850-36fd7598824c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4', 'attached_at': '', 'detached_at': '', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'serial': '905b1d83-b68c-433b-9850-36fd7598824c'}, 'disk_bus': 'virtio', 'attachment_id': 'f1b29b28-afbe-4e32-a678-4b7f0e6ff6f6', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.018 243708 WARNING nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.027 243708 DEBUG nova.virt.libvirt.host [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.027 243708 DEBUG nova.virt.libvirt.host [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.033 243708 DEBUG nova.virt.libvirt.host [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.033 243708 DEBUG nova.virt.libvirt.host [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.034 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.034 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.035 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.035 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.035 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.036 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.036 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.036 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.037 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.037 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.037 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.037 243708 DEBUG nova.virt.hardware [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.067 243708 DEBUG nova.storage.rbd_utils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.072 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:57 compute-0 ceph-mon[75071]: pgmap v1832: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:28:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2332791517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.638 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.782 243708 DEBUG os_brick.encryptors [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Using volume encryption metadata '{'encryption_key_id': 'fc198392-e1f5-4196-aff7-b9237274efaa', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-905b1d83-b68c-433b-9850-36fd7598824c', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': 'bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4', 'attached_at': '', 'detached_at': '', 'volume_id': '905b1d83-b68c-433b-9850-36fd7598824c', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.786 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.807 243708 DEBUG barbicanclient.v1.secrets [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/fc198392-e1f5-4196-aff7-b9237274efaa get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.808 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.858 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.859 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.899 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:57 compute-0 nova_compute[243704]: 2025-12-13 04:28:57.900 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:57 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.022 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.022 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.182 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.183 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.207 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.207 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.238 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.238 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.273 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.274 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.298 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.298 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.322 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.323 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.430 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.431 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.458 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.459 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2332791517' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.507 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.508 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.521 243708 DEBUG nova.network.neutron [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updated VIF entry in instance network info cache for port f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.522 243708 DEBUG nova.network.neutron [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updating instance_info_cache with network_info: [{"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.535 243708 DEBUG oslo_concurrency.lockutils [req-7247f441-5121-4ea9-aeee-aa15024a71c8 req-2a73d3d1-ff87-4407-9637-a8ef3339cce7 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.541 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.541 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.568 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.568 243708 INFO barbicanclient.base [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/fc198392-e1f5-4196-aff7-b9237274efaa
Dec 13 04:28:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.591 243708 DEBUG barbicanclient.client [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.593 243708 DEBUG nova.virt.libvirt.host [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <volume>905b1d83-b68c-433b-9850-36fd7598824c</volume>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:28:58 compute-0 nova_compute[243704]: </secret>
Dec 13 04:28:58 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.623 243708 DEBUG nova.virt.libvirt.vif [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-252007343',display_name='tempest-TransferEncryptedVolumeTest-server-252007343',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-252007343',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTMOWY+zxHhuuYbmrEsaJgRE3PSdxqJ15zYyzPDCeLErvpORjNdez33Bk3TG/Gt9LpNKoYFaHiFvQPNsdImPfafvTHH9jNUqYZKtS8UFNsxrTUJ+ntIWYll6LMTTOCBjw==',key_name='tempest-TransferEncryptedVolumeTest-1635619248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-i09f1p1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:28:51Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.624 243708 DEBUG nova.network.os_vif_util [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.625 243708 DEBUG nova.network.os_vif_util [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:74:3b,bridge_name='br-int',has_traffic_filtering=True,id=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08ebcd8-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.628 243708 DEBUG nova.objects.instance [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'pci_devices' on Instance uuid bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.641 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <uuid>bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4</uuid>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <name>instance-0000001b</name>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-252007343</nova:name>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:28:57</nova:creationTime>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:user uuid="deba56fa45214f28a3aab4d031dc4155">tempest-TransferEncryptedVolumeTest-1412293480-project-member</nova:user>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:project uuid="43c4864e9f844459a882a9e3d0fe477b">tempest-TransferEncryptedVolumeTest-1412293480</nova:project>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <nova:port uuid="f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c">
Dec 13 04:28:58 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <system>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <entry name="serial">bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4</entry>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <entry name="uuid">bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4</entry>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </system>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <os>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </os>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <features>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </features>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_disk.config">
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </source>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-905b1d83-b68c-433b-9850-36fd7598824c">
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </source>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <serial>905b1d83-b68c-433b-9850-36fd7598824c</serial>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <encryption format="luks">
Dec 13 04:28:58 compute-0 nova_compute[243704]:         <secret type="passphrase" uuid="d2ea5074-b8ff-4ca6-95b3-4d5cf2809709"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       </encryption>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:b9:74:3b"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <target dev="tapf08ebcd8-fc"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/console.log" append="off"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <video>
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </video>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:28:58 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:28:58 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:28:58 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:28:58 compute-0 nova_compute[243704]: </domain>
Dec 13 04:28:58 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.643 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Preparing to wait for external event network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.644 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.644 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.645 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.646 243708 DEBUG nova.virt.libvirt.vif [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-252007343',display_name='tempest-TransferEncryptedVolumeTest-server-252007343',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-252007343',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTMOWY+zxHhuuYbmrEsaJgRE3PSdxqJ15zYyzPDCeLErvpORjNdez33Bk3TG/Gt9LpNKoYFaHiFvQPNsdImPfafvTHH9jNUqYZKtS8UFNsxrTUJ+ntIWYll6LMTTOCBjw==',key_name='tempest-TransferEncryptedVolumeTest-1635619248',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-i09f1p1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:28:51Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.647 243708 DEBUG nova.network.os_vif_util [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.648 243708 DEBUG nova.network.os_vif_util [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b9:74:3b,bridge_name='br-int',has_traffic_filtering=True,id=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08ebcd8-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.649 243708 DEBUG os_vif [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:74:3b,bridge_name='br-int',has_traffic_filtering=True,id=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08ebcd8-fc') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.650 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.651 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.651 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.655 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.656 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf08ebcd8-fc, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.656 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf08ebcd8-fc, col_values=(('external_ids', {'iface-id': 'f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b9:74:3b', 'vm-uuid': 'bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:58 compute-0 NetworkManager[48899]: <info>  [1765600138.6592] manager: (tapf08ebcd8-fc): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/136)
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.661 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.663 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.665 243708 INFO os_vif [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b9:74:3b,bridge_name='br-int',has_traffic_filtering=True,id=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08ebcd8-fc')
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.707 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.708 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.708 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No VIF found with MAC fa:16:3e:b9:74:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.709 243708 INFO nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Using config drive
Dec 13 04:28:58 compute-0 nova_compute[243704]: 2025-12-13 04:28:58.739 243708 DEBUG nova.storage.rbd_utils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.000 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.109 243708 INFO nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Creating config drive at /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/disk.config
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.118 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp30tbsds9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.269 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp30tbsds9" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.303 243708 DEBUG nova.storage.rbd_utils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.307 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/disk.config bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.444 243708 DEBUG oslo_concurrency.processutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/disk.config bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.446 243708 INFO nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Deleting local config drive /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4/disk.config because it was imported into RBD.
Dec 13 04:28:59 compute-0 kernel: tapf08ebcd8-fc: entered promiscuous mode
Dec 13 04:28:59 compute-0 NetworkManager[48899]: <info>  [1765600139.4989] manager: (tapf08ebcd8-fc): new Tun device (/org/freedesktop/NetworkManager/Devices/137)
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.500 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:59 compute-0 ovn_controller[145204]: 2025-12-13T04:28:59Z|00249|binding|INFO|Claiming lport f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c for this chassis.
Dec 13 04:28:59 compute-0 ovn_controller[145204]: 2025-12-13T04:28:59Z|00250|binding|INFO|f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c: Claiming fa:16:3e:b9:74:3b 10.100.0.6
Dec 13 04:28:59 compute-0 ceph-mon[75071]: pgmap v1833: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.503 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.513 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:74:3b 10.100.0.6'], port_security=['fa:16:3e:b9:74:3b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0350955e-df3f-494b-94c3-1eba35bfaee3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:28:59 compute-0 ovn_controller[145204]: 2025-12-13T04:28:59Z|00251|binding|INFO|Setting lport f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c ovn-installed in OVS
Dec 13 04:28:59 compute-0 ovn_controller[145204]: 2025-12-13T04:28:59Z|00252|binding|INFO|Setting lport f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c up in Southbound
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.516 154842 INFO neutron.agent.ovn.metadata.agent [-] Port f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca bound to our chassis
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.517 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.521 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:28:59 compute-0 systemd-udevd[278513]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:28:59 compute-0 NetworkManager[48899]: <info>  [1765600139.5368] device (tapf08ebcd8-fc): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:28:59 compute-0 NetworkManager[48899]: <info>  [1765600139.5375] device (tapf08ebcd8-fc): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.538 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4d0d6b-335b-458b-bd06-623b46e176fd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.540 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2920aa7a-a1 in ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.543 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2920aa7a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.543 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[64dd5fa2-cc6d-4a4c-9532-f99f593d578a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 systemd-machined[206767]: New machine qemu-27-instance-0000001b.
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.544 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[13dda1a2-262b-47f9-881b-2e3f3b9953c6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 systemd[1]: Started Virtual Machine qemu-27-instance-0000001b.
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.555 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[a60f89e2-fea5-444f-add8-14496a8b9c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.570 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b630098b-37b7-492b-8b5d-1e2524b37f19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.602 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[74aeacd7-074d-44f0-b2d1-f163502486b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 NetworkManager[48899]: <info>  [1765600139.6089] manager: (tap2920aa7a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/138)
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.608 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[7ec1f46d-b800-4ca2-a379-f7420e1c7852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.641 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[b2508aee-8015-46ea-9ba7-797ad5062749]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.644 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff473bc-1c35-449f-ac20-d731661781d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 NetworkManager[48899]: <info>  [1765600139.6685] device (tap2920aa7a-a0): carrier: link connected
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.675 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[6d80b090-b2e3-45c9-9e28-a8f20d2da8d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.690 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fc9d623b-b2bd-48cf-9215-b1c9fb0d9143]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487834, 'reachable_time': 39316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278549, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.707 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[375e7293-ca21-4cad-ad9b-c722706f3592]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:807b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 487834, 'tstamp': 487834}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278550, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.727 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[1958e4ca-7daf-47b7-b107-24b5524e7cec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 85], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487834, 'reachable_time': 39316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278551, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.741 243708 DEBUG nova.compute.manager [req-1f8c7680-39bb-4733-bae1-ed36f3af037e req-9fc2f15f-6cbf-4a9d-acc8-54b845e74d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.742 243708 DEBUG oslo_concurrency.lockutils [req-1f8c7680-39bb-4733-bae1-ed36f3af037e req-9fc2f15f-6cbf-4a9d-acc8-54b845e74d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.742 243708 DEBUG oslo_concurrency.lockutils [req-1f8c7680-39bb-4733-bae1-ed36f3af037e req-9fc2f15f-6cbf-4a9d-acc8-54b845e74d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.743 243708 DEBUG oslo_concurrency.lockutils [req-1f8c7680-39bb-4733-bae1-ed36f3af037e req-9fc2f15f-6cbf-4a9d-acc8-54b845e74d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.743 243708 DEBUG nova.compute.manager [req-1f8c7680-39bb-4733-bae1-ed36f3af037e req-9fc2f15f-6cbf-4a9d-acc8-54b845e74d0f 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Processing event network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.761 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ad5f12-bc79-4c1f-bac8-98c7381cc859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.816 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e4956e7c-09f3-4958-850c-a6a26a01f53f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.818 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.819 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.819 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2920aa7a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:59 compute-0 NetworkManager[48899]: <info>  [1765600139.8237] manager: (tap2920aa7a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Dec 13 04:28:59 compute-0 kernel: tap2920aa7a-a0: entered promiscuous mode
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.823 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.825 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2920aa7a-a0, col_values=(('external_ids', {'iface-id': 'ccd83819-bc00-4ecd-ab1d-315a75379aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:28:59 compute-0 ovn_controller[145204]: 2025-12-13T04:28:59Z|00253|binding|INFO|Releasing lport ccd83819-bc00-4ecd-ab1d-315a75379aaa from this chassis (sb_readonly=0)
Dec 13 04:28:59 compute-0 nova_compute[243704]: 2025-12-13 04:28:59.839 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.841 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.842 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9fa2f1cf-1ba5-4445-a29c-f71162961188]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.842 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:28:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:28:59.843 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'env', 'PROCESS_TAG=haproxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2920aa7a-a9cb-45da-a971-38a7ffed2fca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:28:59 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1834: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 255 B/s wr, 1 op/s
Dec 13 04:29:00 compute-0 podman[278619]: 2025-12-13 04:29:00.168994303 +0000 UTC m=+0.021132916 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:29:00 compute-0 podman[278619]: 2025-12-13 04:29:00.620402876 +0000 UTC m=+0.472541509 container create 1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:29:00 compute-0 systemd[1]: Started libpod-conmon-1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d.scope.
Dec 13 04:29:00 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:29:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfda2520433b2288a8da68de18fdf3a11e535aa82824ea949973312903454b5f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:00 compute-0 podman[278619]: 2025-12-13 04:29:00.739619317 +0000 UTC m=+0.591757920 container init 1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251202)
Dec 13 04:29:00 compute-0 podman[278619]: 2025-12-13 04:29:00.746883424 +0000 UTC m=+0.599022027 container start 1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec 13 04:29:00 compute-0 podman[278632]: 2025-12-13 04:29:00.755333564 +0000 UTC m=+0.078754982 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:29:00 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278635]: [NOTICE]   (278657) : New worker (278659) forked
Dec 13 04:29:00 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278635]: [NOTICE]   (278657) : Loading success.
Dec 13 04:29:01 compute-0 ceph-mon[75071]: pgmap v1834: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 852 B/s rd, 255 B/s wr, 1 op/s
Dec 13 04:29:01 compute-0 nova_compute[243704]: 2025-12-13 04:29:01.932 243708 DEBUG nova.compute.manager [req-209252da-ed08-41f6-82c1-2308d92c5ae3 req-8b59a3d2-4a47-4f2d-a309-f5a01881a454 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:29:01 compute-0 nova_compute[243704]: 2025-12-13 04:29:01.932 243708 DEBUG oslo_concurrency.lockutils [req-209252da-ed08-41f6-82c1-2308d92c5ae3 req-8b59a3d2-4a47-4f2d-a309-f5a01881a454 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:01 compute-0 nova_compute[243704]: 2025-12-13 04:29:01.933 243708 DEBUG oslo_concurrency.lockutils [req-209252da-ed08-41f6-82c1-2308d92c5ae3 req-8b59a3d2-4a47-4f2d-a309-f5a01881a454 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:01 compute-0 nova_compute[243704]: 2025-12-13 04:29:01.934 243708 DEBUG oslo_concurrency.lockutils [req-209252da-ed08-41f6-82c1-2308d92c5ae3 req-8b59a3d2-4a47-4f2d-a309-f5a01881a454 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:01 compute-0 nova_compute[243704]: 2025-12-13 04:29:01.934 243708 DEBUG nova.compute.manager [req-209252da-ed08-41f6-82c1-2308d92c5ae3 req-8b59a3d2-4a47-4f2d-a309-f5a01881a454 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] No waiting events found dispatching network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:29:01 compute-0 nova_compute[243704]: 2025-12-13 04:29:01.935 243708 WARNING nova.compute.manager [req-209252da-ed08-41f6-82c1-2308d92c5ae3 req-8b59a3d2-4a47-4f2d-a309-f5a01881a454 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received unexpected event network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c for instance with vm_state building and task_state spawning.
Dec 13 04:29:01 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.245 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600142.244035, bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.246 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] VM Started (Lifecycle Event)
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.250 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.256 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.261 243708 INFO nova.virt.libvirt.driver [-] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Instance spawned successfully.
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.262 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.279 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.284 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.298 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.299 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.299 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.300 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.301 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.302 243708 DEBUG nova.virt.libvirt.driver [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.312 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.312 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600142.2444034, bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.313 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] VM Paused (Lifecycle Event)
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.346 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.350 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600142.2543445, bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.350 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] VM Resumed (Lifecycle Event)
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.376 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.382 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.389 243708 INFO nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Took 9.28 seconds to spawn the instance on the hypervisor.
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.390 243708 DEBUG nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.401 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.456 243708 INFO nova.compute.manager [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Took 11.53 seconds to build instance.
Dec 13 04:29:02 compute-0 nova_compute[243704]: 2025-12-13 04:29:02.470 243708 DEBUG oslo_concurrency.lockutils [None req-7cd83918-a2ac-4c31-8771-c6f73b0b8cba deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.604s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:02 compute-0 podman[278675]: 2025-12-13 04:29:02.952384749 +0000 UTC m=+0.097696228 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:29:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:03 compute-0 ceph-mon[75071]: pgmap v1835: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 341 B/s wr, 1 op/s
Dec 13 04:29:03 compute-0 nova_compute[243704]: 2025-12-13 04:29:03.708 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:03 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1836: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 12 KiB/s wr, 19 op/s
Dec 13 04:29:04 compute-0 nova_compute[243704]: 2025-12-13 04:29:04.002 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:05 compute-0 nova_compute[243704]: 2025-12-13 04:29:05.081 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:05 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:05.080 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:29:05 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:05.082 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:29:05 compute-0 ceph-mon[75071]: pgmap v1836: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 12 KiB/s wr, 19 op/s
Dec 13 04:29:05 compute-0 nova_compute[243704]: 2025-12-13 04:29:05.758 243708 DEBUG nova.compute.manager [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-changed-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:29:05 compute-0 nova_compute[243704]: 2025-12-13 04:29:05.759 243708 DEBUG nova.compute.manager [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Refreshing instance network info cache due to event network-changed-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:29:05 compute-0 nova_compute[243704]: 2025-12-13 04:29:05.759 243708 DEBUG oslo_concurrency.lockutils [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:29:05 compute-0 nova_compute[243704]: 2025-12-13 04:29:05.760 243708 DEBUG oslo_concurrency.lockutils [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:29:05 compute-0 nova_compute[243704]: 2025-12-13 04:29:05.761 243708 DEBUG nova.network.neutron [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Refreshing network info cache for port f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:29:05 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1837: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:29:06 compute-0 sudo[278694]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:29:06 compute-0 sudo[278694]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:06 compute-0 sudo[278694]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:06 compute-0 sudo[278719]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Dec 13 04:29:06 compute-0 sudo[278719]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:06 compute-0 nova_compute[243704]: 2025-12-13 04:29:06.710 243708 DEBUG nova.network.neutron [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updated VIF entry in instance network info cache for port f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:29:06 compute-0 nova_compute[243704]: 2025-12-13 04:29:06.711 243708 DEBUG nova.network.neutron [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updating instance_info_cache with network_info: [{"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:29:06 compute-0 nova_compute[243704]: 2025-12-13 04:29:06.728 243708 DEBUG oslo_concurrency.lockutils [req-8f449997-57b6-47e0-a31d-5283f7454606 req-99f10867-ca98-480e-a8f3-12747e6a95ec 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:29:07 compute-0 podman[278789]: 2025-12-13 04:29:07.000471719 +0000 UTC m=+0.098820529 container exec 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:07 compute-0 podman[278789]: 2025-12-13 04:29:07.110359086 +0000 UTC m=+0.208707846 container exec_died 8aaf8457121f10f173c99d797b6d49d0fe2fb9196a26d4d7e55c88c89d4727fe (image=quay.io/ceph/ceph:v20, name=ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mon-compute-0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:29:07 compute-0 ceph-mon[75071]: pgmap v1837: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:29:07 compute-0 sudo[278719]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:29:07 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:29:07 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:07 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:29:08 compute-0 sudo[278977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:29:08 compute-0 sudo[278977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:08 compute-0 sudo[278977]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:08 compute-0 sudo[279002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:29:08 compute-0 sudo[279002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:08 compute-0 sudo[279002]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:29:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:29:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:29:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:29:08 compute-0 nova_compute[243704]: 2025-12-13 04:29:08.711 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:29:08 compute-0 sudo[279057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:29:08 compute-0 sudo[279057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:08 compute-0 sudo[279057]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:08 compute-0 sudo[279082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:29:08 compute-0 sudo[279082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:08 compute-0 ceph-mon[75071]: pgmap v1838: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:29:08 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:29:09 compute-0 nova_compute[243704]: 2025-12-13 04:29:09.003 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:09 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:09.085 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:29:09 compute-0 podman[279119]: 2025-12-13 04:29:09.103971488 +0000 UTC m=+0.039697180 container create 5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:29:09 compute-0 systemd[1]: Started libpod-conmon-5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3.scope.
Dec 13 04:29:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:29:09 compute-0 podman[279119]: 2025-12-13 04:29:09.176704896 +0000 UTC m=+0.112430608 container init 5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_banzai, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default)
Dec 13 04:29:09 compute-0 podman[279119]: 2025-12-13 04:29:09.086843493 +0000 UTC m=+0.022569195 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:09 compute-0 podman[279119]: 2025-12-13 04:29:09.183235664 +0000 UTC m=+0.118961356 container start 5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:09 compute-0 sleepy_banzai[279135]: 167 167
Dec 13 04:29:09 compute-0 podman[279119]: 2025-12-13 04:29:09.188008323 +0000 UTC m=+0.123734035 container attach 5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_banzai, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:29:09 compute-0 systemd[1]: libpod-5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3.scope: Deactivated successfully.
Dec 13 04:29:09 compute-0 podman[279119]: 2025-12-13 04:29:09.18936735 +0000 UTC m=+0.125093052 container died 5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Dec 13 04:29:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b1490c269e25cf740e9c44d1c792fb4031a6baaf7e23172a805ab0c7df2f7ae4-merged.mount: Deactivated successfully.
Dec 13 04:29:09 compute-0 podman[279119]: 2025-12-13 04:29:09.229960274 +0000 UTC m=+0.165685966 container remove 5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_banzai, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:29:09 compute-0 systemd[1]: libpod-conmon-5da1038264aac9a2e3adc3dbb2154213e53a7b011a2dc2fc24f0fcf7255224e3.scope: Deactivated successfully.
Dec 13 04:29:09 compute-0 podman[279158]: 2025-12-13 04:29:09.400651475 +0000 UTC m=+0.047056790 container create a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lumiere, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:29:09 compute-0 systemd[1]: Started libpod-conmon-a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a.scope.
Dec 13 04:29:09 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0338ae0f1765b1fdceaea6a85fc581aedf3448333a0d7c17e43ef119c6053e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0338ae0f1765b1fdceaea6a85fc581aedf3448333a0d7c17e43ef119c6053e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0338ae0f1765b1fdceaea6a85fc581aedf3448333a0d7c17e43ef119c6053e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0338ae0f1765b1fdceaea6a85fc581aedf3448333a0d7c17e43ef119c6053e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0338ae0f1765b1fdceaea6a85fc581aedf3448333a0d7c17e43ef119c6053e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:09 compute-0 podman[279158]: 2025-12-13 04:29:09.47696907 +0000 UTC m=+0.123374405 container init a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:29:09 compute-0 podman[279158]: 2025-12-13 04:29:09.383639382 +0000 UTC m=+0.030044697 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:09 compute-0 podman[279158]: 2025-12-13 04:29:09.483408185 +0000 UTC m=+0.129813500 container start a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:29:09 compute-0 podman[279158]: 2025-12-13 04:29:09.486483019 +0000 UTC m=+0.132888324 container attach a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:29:09 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1839: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:29:10 compute-0 zealous_lumiere[279175]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:29:10 compute-0 zealous_lumiere[279175]: --> All data devices are unavailable
Dec 13 04:29:10 compute-0 systemd[1]: libpod-a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a.scope: Deactivated successfully.
Dec 13 04:29:10 compute-0 podman[279158]: 2025-12-13 04:29:10.075757359 +0000 UTC m=+0.722162674 container died a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lumiere, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c0338ae0f1765b1fdceaea6a85fc581aedf3448333a0d7c17e43ef119c6053e-merged.mount: Deactivated successfully.
Dec 13 04:29:10 compute-0 podman[279158]: 2025-12-13 04:29:10.115977693 +0000 UTC m=+0.762382998 container remove a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_lumiere, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:10 compute-0 systemd[1]: libpod-conmon-a839eb166c68424d97413016c0fb0579ab1a6035764febbcea748ae34a10225a.scope: Deactivated successfully.
Dec 13 04:29:10 compute-0 sudo[279082]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:10 compute-0 sudo[279208]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:29:10 compute-0 sudo[279208]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:10 compute-0 sudo[279208]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:10 compute-0 sudo[279233]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:29:10 compute-0 sudo[279233]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:10 compute-0 podman[279270]: 2025-12-13 04:29:10.549602123 +0000 UTC m=+0.048915961 container create e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_chatterjee, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:29:10 compute-0 systemd[1]: Started libpod-conmon-e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00.scope.
Dec 13 04:29:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:29:10 compute-0 podman[279270]: 2025-12-13 04:29:10.5292944 +0000 UTC m=+0.028608258 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:10 compute-0 podman[279270]: 2025-12-13 04:29:10.628392735 +0000 UTC m=+0.127706613 container init e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:10 compute-0 podman[279270]: 2025-12-13 04:29:10.635150959 +0000 UTC m=+0.134464797 container start e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:29:10 compute-0 inspiring_chatterjee[279286]: 167 167
Dec 13 04:29:10 compute-0 systemd[1]: libpod-e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00.scope: Deactivated successfully.
Dec 13 04:29:10 compute-0 podman[279270]: 2025-12-13 04:29:10.638847369 +0000 UTC m=+0.138161307 container attach e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_chatterjee, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:29:10 compute-0 podman[279270]: 2025-12-13 04:29:10.642277913 +0000 UTC m=+0.141591751 container died e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:29:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-81b33feb0ccce597ce4bd64a30916183a6bbfb48a57417982a6452646f294001-merged.mount: Deactivated successfully.
Dec 13 04:29:10 compute-0 podman[279270]: 2025-12-13 04:29:10.680505282 +0000 UTC m=+0.179819120 container remove e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:29:10 compute-0 systemd[1]: libpod-conmon-e24b9cc4ebf1a39bc5d70fe5fffebdd3128fc405f1c33fa5853d75469cb12d00.scope: Deactivated successfully.
Dec 13 04:29:10 compute-0 podman[279310]: 2025-12-13 04:29:10.881774394 +0000 UTC m=+0.044946043 container create 794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:29:10 compute-0 systemd[1]: Started libpod-conmon-794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d.scope.
Dec 13 04:29:10 compute-0 podman[279310]: 2025-12-13 04:29:10.865158702 +0000 UTC m=+0.028330371 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:10 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58859d18eba8f6bf4baf9b7256ffac07fd10e47dd356919863d361e962b570/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58859d18eba8f6bf4baf9b7256ffac07fd10e47dd356919863d361e962b570/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58859d18eba8f6bf4baf9b7256ffac07fd10e47dd356919863d361e962b570/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d58859d18eba8f6bf4baf9b7256ffac07fd10e47dd356919863d361e962b570/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:10 compute-0 podman[279310]: 2025-12-13 04:29:10.998677962 +0000 UTC m=+0.161849651 container init 794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_joliot, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:29:11 compute-0 podman[279310]: 2025-12-13 04:29:11.006191187 +0000 UTC m=+0.169362846 container start 794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_joliot, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:11 compute-0 podman[279310]: 2025-12-13 04:29:11.00963877 +0000 UTC m=+0.172810459 container attach 794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_joliot, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:29:11 compute-0 ceph-mon[75071]: pgmap v1839: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]: {
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:     "0": [
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:         {
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "devices": [
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "/dev/loop3"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             ],
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_name": "ceph_lv0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_size": "21470642176",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "name": "ceph_lv0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "tags": {
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cluster_name": "ceph",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.crush_device_class": "",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.encrypted": "0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.objectstore": "bluestore",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osd_id": "0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.type": "block",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.vdo": "0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.with_tpm": "0"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             },
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "type": "block",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "vg_name": "ceph_vg0"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:         }
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:     ],
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:     "1": [
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:         {
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "devices": [
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "/dev/loop4"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             ],
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_name": "ceph_lv1",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_size": "21470642176",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "name": "ceph_lv1",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "tags": {
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cluster_name": "ceph",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.crush_device_class": "",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.encrypted": "0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.objectstore": "bluestore",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osd_id": "1",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.type": "block",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.vdo": "0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.with_tpm": "0"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             },
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "type": "block",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "vg_name": "ceph_vg1"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:         }
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:     ],
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:     "2": [
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:         {
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "devices": [
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "/dev/loop5"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             ],
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_name": "ceph_lv2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_size": "21470642176",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "name": "ceph_lv2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "tags": {
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.cluster_name": "ceph",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.crush_device_class": "",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.encrypted": "0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.objectstore": "bluestore",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osd_id": "2",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.type": "block",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.vdo": "0",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:                 "ceph.with_tpm": "0"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             },
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "type": "block",
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:             "vg_name": "ceph_vg2"
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:         }
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]:     ]
Dec 13 04:29:11 compute-0 peaceful_joliot[279327]: }
Dec 13 04:29:11 compute-0 systemd[1]: libpod-794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d.scope: Deactivated successfully.
Dec 13 04:29:11 compute-0 podman[279310]: 2025-12-13 04:29:11.360206002 +0000 UTC m=+0.523377681 container died 794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_joliot, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:29:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d58859d18eba8f6bf4baf9b7256ffac07fd10e47dd356919863d361e962b570-merged.mount: Deactivated successfully.
Dec 13 04:29:11 compute-0 podman[279310]: 2025-12-13 04:29:11.41424342 +0000 UTC m=+0.577415069 container remove 794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_joliot, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:29:11 compute-0 systemd[1]: libpod-conmon-794727cc01092f9914db168d7b2d31ec87b3f7ad82872db53feb3ad20028ac7d.scope: Deactivated successfully.
Dec 13 04:29:11 compute-0 sudo[279233]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:11 compute-0 sudo[279346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:29:11 compute-0 sudo[279346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:11 compute-0 sudo[279346]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:11 compute-0 sudo[279371]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:29:11 compute-0 sudo[279371]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:11 compute-0 podman[279410]: 2025-12-13 04:29:11.954774127 +0000 UTC m=+0.070070236 container create 2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:29:11 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1840: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 13 04:29:12 compute-0 systemd[1]: Started libpod-conmon-2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1.scope.
Dec 13 04:29:12 compute-0 podman[279410]: 2025-12-13 04:29:11.924973606 +0000 UTC m=+0.040269775 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:29:12 compute-0 podman[279410]: 2025-12-13 04:29:12.075942811 +0000 UTC m=+0.191238890 container init 2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:29:12 compute-0 podman[279410]: 2025-12-13 04:29:12.088655067 +0000 UTC m=+0.203951136 container start 2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:29:12 compute-0 podman[279410]: 2025-12-13 04:29:12.091999307 +0000 UTC m=+0.207295617 container attach 2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:29:12 compute-0 vigilant_mirzakhani[279426]: 167 167
Dec 13 04:29:12 compute-0 systemd[1]: libpod-2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1.scope: Deactivated successfully.
Dec 13 04:29:12 compute-0 podman[279410]: 2025-12-13 04:29:12.098189506 +0000 UTC m=+0.213485615 container died 2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:29:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-24bbfb38d90849f5e7cc5afd21b3a01d5423519758e6012d1dd2a5303a81a7f1-merged.mount: Deactivated successfully.
Dec 13 04:29:12 compute-0 podman[279410]: 2025-12-13 04:29:12.152840702 +0000 UTC m=+0.268136811 container remove 2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:29:12 compute-0 systemd[1]: libpod-conmon-2af7bc1fd7f52885bd92f68964fac6d8884e3cf1b848891ad358e3dc00274fe1.scope: Deactivated successfully.
Dec 13 04:29:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:12 compute-0 podman[279449]: 2025-12-13 04:29:12.377641424 +0000 UTC m=+0.065923713 container create 6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:29:12 compute-0 systemd[1]: Started libpod-conmon-6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b.scope.
Dec 13 04:29:12 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:29:12 compute-0 podman[279449]: 2025-12-13 04:29:12.355961304 +0000 UTC m=+0.044243613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575d08a6dbc4da7a670610ef86d02cfc293a15daf1a45215fcbce0c140503608/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575d08a6dbc4da7a670610ef86d02cfc293a15daf1a45215fcbce0c140503608/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575d08a6dbc4da7a670610ef86d02cfc293a15daf1a45215fcbce0c140503608/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/575d08a6dbc4da7a670610ef86d02cfc293a15daf1a45215fcbce0c140503608/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:29:12 compute-0 podman[279449]: 2025-12-13 04:29:12.473196781 +0000 UTC m=+0.161479100 container init 6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True)
Dec 13 04:29:12 compute-0 podman[279449]: 2025-12-13 04:29:12.484805257 +0000 UTC m=+0.173087536 container start 6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:29:12 compute-0 podman[279449]: 2025-12-13 04:29:12.488319343 +0000 UTC m=+0.176601732 container attach 6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:29:13 compute-0 ceph-mon[75071]: pgmap v1840: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Dec 13 04:29:13 compute-0 lvm[279547]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:29:13 compute-0 lvm[279547]: VG ceph_vg2 finished
Dec 13 04:29:13 compute-0 lvm[279545]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:29:13 compute-0 lvm[279546]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:29:13 compute-0 lvm[279545]: VG ceph_vg0 finished
Dec 13 04:29:13 compute-0 lvm[279546]: VG ceph_vg1 finished
Dec 13 04:29:13 compute-0 bold_gould[279465]: {}
Dec 13 04:29:13 compute-0 systemd[1]: libpod-6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b.scope: Deactivated successfully.
Dec 13 04:29:13 compute-0 systemd[1]: libpod-6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b.scope: Consumed 1.554s CPU time.
Dec 13 04:29:13 compute-0 podman[279550]: 2025-12-13 04:29:13.494704095 +0000 UTC m=+0.046625109 container died 6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:29:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-575d08a6dbc4da7a670610ef86d02cfc293a15daf1a45215fcbce0c140503608-merged.mount: Deactivated successfully.
Dec 13 04:29:13 compute-0 podman[279550]: 2025-12-13 04:29:13.555505408 +0000 UTC m=+0.107426372 container remove 6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_gould, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:29:13 compute-0 systemd[1]: libpod-conmon-6acecbc62deefd0a5eed1ce8352f9c2280075ef838e57c978a6f0c58e47bfd6b.scope: Deactivated successfully.
Dec 13 04:29:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:13 compute-0 sudo[279371]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:29:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:29:13 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:13 compute-0 nova_compute[243704]: 2025-12-13 04:29:13.753 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:13 compute-0 sudo[279565]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:29:13 compute-0 sudo[279565]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:29:13 compute-0 sudo[279565]: pam_unix(sudo:session): session closed for user root
Dec 13 04:29:13 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 13 KiB/s wr, 94 op/s
Dec 13 04:29:14 compute-0 nova_compute[243704]: 2025-12-13 04:29:14.005 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:14 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:29:15 compute-0 ovn_controller[145204]: 2025-12-13T04:29:15Z|00064|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.6
Dec 13 04:29:15 compute-0 ovn_controller[145204]: 2025-12-13T04:29:15Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b9:74:3b 10.100.0.6
Dec 13 04:29:15 compute-0 ceph-mon[75071]: pgmap v1841: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 13 KiB/s wr, 94 op/s
Dec 13 04:29:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1842: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 KiB/s wr, 90 op/s
Dec 13 04:29:17 compute-0 ceph-mon[75071]: pgmap v1842: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 KiB/s wr, 90 op/s
Dec 13 04:29:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 457 KiB/s rd, 6.4 KiB/s wr, 36 op/s
Dec 13 04:29:18 compute-0 ovn_controller[145204]: 2025-12-13T04:29:18Z|00066|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.7 does not match offer 10.100.0.6
Dec 13 04:29:18 compute-0 ovn_controller[145204]: 2025-12-13T04:29:18Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b9:74:3b 10.100.0.6
Dec 13 04:29:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:18 compute-0 nova_compute[243704]: 2025-12-13 04:29:18.758 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:19 compute-0 nova_compute[243704]: 2025-12-13 04:29:19.008 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:19 compute-0 ceph-mon[75071]: pgmap v1843: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 457 KiB/s rd, 6.4 KiB/s wr, 36 op/s
Dec 13 04:29:19 compute-0 podman[279590]: 2025-12-13 04:29:19.959030868 +0000 UTC m=+0.099652800 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:29:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 10 KiB/s wr, 44 op/s
Dec 13 04:29:20 compute-0 ovn_controller[145204]: 2025-12-13T04:29:20Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b9:74:3b 10.100.0.6
Dec 13 04:29:20 compute-0 ovn_controller[145204]: 2025-12-13T04:29:20Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b9:74:3b 10.100.0.6
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.673546) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600160673572, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 669, "num_deletes": 251, "total_data_size": 818753, "memory_usage": 830872, "flush_reason": "Manual Compaction"}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600160679288, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 796408, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36286, "largest_seqno": 36954, "table_properties": {"data_size": 792851, "index_size": 1401, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8129, "raw_average_key_size": 19, "raw_value_size": 785759, "raw_average_value_size": 1875, "num_data_blocks": 62, "num_entries": 419, "num_filter_entries": 419, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765600113, "oldest_key_time": 1765600113, "file_creation_time": 1765600160, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 5776 microseconds, and 2495 cpu microseconds.
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.679319) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 796408 bytes OK
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.679339) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.680854) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.680867) EVENT_LOG_v1 {"time_micros": 1765600160680863, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.680881) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 815211, prev total WAL file size 815211, number of live WAL files 2.
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.681320) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(777KB)], [74(11MB)]
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600160681360, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 12858856, "oldest_snapshot_seqno": -1}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6780 keys, 11040707 bytes, temperature: kUnknown
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600160751856, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 11040707, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10989376, "index_size": 33299, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16965, "raw_key_size": 171836, "raw_average_key_size": 25, "raw_value_size": 10861604, "raw_average_value_size": 1602, "num_data_blocks": 1326, "num_entries": 6780, "num_filter_entries": 6780, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765600160, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.752186) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 11040707 bytes
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.754217) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 182.2 rd, 156.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 11.5 +0.0 blob) out(10.5 +0.0 blob), read-write-amplify(30.0) write-amplify(13.9) OK, records in: 7294, records dropped: 514 output_compression: NoCompression
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.754232) EVENT_LOG_v1 {"time_micros": 1765600160754225, "job": 42, "event": "compaction_finished", "compaction_time_micros": 70579, "compaction_time_cpu_micros": 24017, "output_level": 6, "num_output_files": 1, "total_output_size": 11040707, "num_input_records": 7294, "num_output_records": 6780, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600160754491, "job": 42, "event": "table_file_deletion", "file_number": 76}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600160756762, "job": 42, "event": "table_file_deletion", "file_number": 74}
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.681238) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.756825) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.756830) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.756832) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.756834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:20 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:29:20.756835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:29:21 compute-0 ceph-mon[75071]: pgmap v1844: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 10 KiB/s wr, 44 op/s
Dec 13 04:29:21 compute-0 nova_compute[243704]: 2025-12-13 04:29:21.887 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1845: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 10 KiB/s wr, 44 op/s
Dec 13 04:29:22 compute-0 nova_compute[243704]: 2025-12-13 04:29:22.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:22 compute-0 nova_compute[243704]: 2025-12-13 04:29:22.914 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:22 compute-0 nova_compute[243704]: 2025-12-13 04:29:22.915 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:22 compute-0 nova_compute[243704]: 2025-12-13 04:29:22.915 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:22 compute-0 nova_compute[243704]: 2025-12-13 04:29:22.916 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:29:22 compute-0 nova_compute[243704]: 2025-12-13 04:29:22.917 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:29:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:29:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/381533832' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.478 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.561 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.562 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:29:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:23 compute-0 ceph-mon[75071]: pgmap v1845: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 10 KiB/s wr, 44 op/s
Dec 13 04:29:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/381533832' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.811 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.859 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.861 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4115MB free_disk=59.98791391029954GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.862 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.862 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.945 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.945 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.946 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:29:23 compute-0 nova_compute[243704]: 2025-12-13 04:29:23.994 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:29:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1846: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 20 KiB/s wr, 45 op/s
Dec 13 04:29:24 compute-0 nova_compute[243704]: 2025-12-13 04:29:24.018 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:29:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743172891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:24 compute-0 nova_compute[243704]: 2025-12-13 04:29:24.560 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:29:24 compute-0 nova_compute[243704]: 2025-12-13 04:29:24.567 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:29:24 compute-0 nova_compute[243704]: 2025-12-13 04:29:24.586 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:29:24 compute-0 nova_compute[243704]: 2025-12-13 04:29:24.612 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:29:24 compute-0 nova_compute[243704]: 2025-12-13 04:29:24.612 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.750s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3743172891' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:25 compute-0 ceph-mon[75071]: pgmap v1846: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 604 KiB/s rd, 20 KiB/s wr, 45 op/s
Dec 13 04:29:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 19 KiB/s wr, 23 op/s
Dec 13 04:29:26 compute-0 nova_compute[243704]: 2025-12-13 04:29:26.613 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:26 compute-0 nova_compute[243704]: 2025-12-13 04:29:26.614 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:29:26 compute-0 nova_compute[243704]: 2025-12-13 04:29:26.615 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:29:26 compute-0 ceph-mon[75071]: pgmap v1847: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 386 KiB/s rd, 19 KiB/s wr, 23 op/s
Dec 13 04:29:27 compute-0 nova_compute[243704]: 2025-12-13 04:29:27.608 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:29:27 compute-0 nova_compute[243704]: 2025-12-13 04:29:27.609 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:29:27 compute-0 nova_compute[243704]: 2025-12-13 04:29:27.609 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:29:27 compute-0 nova_compute[243704]: 2025-12-13 04:29:27.609 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:29:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1848: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 14 KiB/s wr, 8 op/s
Dec 13 04:29:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:28 compute-0 nova_compute[243704]: 2025-12-13 04:29:28.816 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.016 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.032 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updating instance_info_cache with network_info: [{"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.047 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.048 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.048 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.049 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.049 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:29 compute-0 ceph-mon[75071]: pgmap v1848: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 14 KiB/s wr, 8 op/s
Dec 13 04:29:29 compute-0 nova_compute[243704]: 2025-12-13 04:29:29.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1849: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 23 KiB/s wr, 9 op/s
Dec 13 04:29:30 compute-0 nova_compute[243704]: 2025-12-13 04:29:30.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:30 compute-0 podman[279662]: 2025-12-13 04:29:30.94331229 +0000 UTC m=+0.073560241 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:29:31 compute-0 ceph-mon[75071]: pgmap v1849: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 146 KiB/s rd, 23 KiB/s wr, 9 op/s
Dec 13 04:29:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Dec 13 04:29:33 compute-0 ceph-mon[75071]: pgmap v1850: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s wr, 1 op/s
Dec 13 04:29:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:33 compute-0 nova_compute[243704]: 2025-12-13 04:29:33.822 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:33 compute-0 nova_compute[243704]: 2025-12-13 04:29:33.875 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:29:33 compute-0 nova_compute[243704]: 2025-12-13 04:29:33.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:29:33 compute-0 podman[279681]: 2025-12-13 04:29:33.933596991 +0000 UTC m=+0.077407856 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:29:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1851: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s wr, 1 op/s
Dec 13 04:29:34 compute-0 nova_compute[243704]: 2025-12-13 04:29:34.019 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:35 compute-0 ceph-mon[75071]: pgmap v1851: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s wr, 1 op/s
Dec 13 04:29:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:35.106 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:35.106 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:35.107 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:35 compute-0 ovn_controller[145204]: 2025-12-13T04:29:35Z|00254|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Dec 13 04:29:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1852: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:29:37 compute-0 ceph-mon[75071]: pgmap v1852: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.660 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.661 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.661 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.662 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.662 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.664 243708 INFO nova.compute.manager [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Terminating instance
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.667 243708 DEBUG nova.compute.manager [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:29:37 compute-0 kernel: tapf08ebcd8-fc (unregistering): left promiscuous mode
Dec 13 04:29:37 compute-0 NetworkManager[48899]: <info>  [1765600177.7223] device (tapf08ebcd8-fc): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:29:37 compute-0 ovn_controller[145204]: 2025-12-13T04:29:37Z|00255|binding|INFO|Releasing lport f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c from this chassis (sb_readonly=0)
Dec 13 04:29:37 compute-0 ovn_controller[145204]: 2025-12-13T04:29:37Z|00256|binding|INFO|Setting lport f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c down in Southbound
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.735 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:37 compute-0 ovn_controller[145204]: 2025-12-13T04:29:37Z|00257|binding|INFO|Removing iface tapf08ebcd8-fc ovn-installed in OVS
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.743 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:37.753 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b9:74:3b 10.100.0.6'], port_security=['fa:16:3e:b9:74:3b 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0350955e-df3f-494b-94c3-1eba35bfaee3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:29:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:37.755 154842 INFO neutron.agent.ovn.metadata.agent [-] Port f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca unbound from our chassis
Dec 13 04:29:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:37.757 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:29:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:37.758 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2a93be8e-8e40-4d1b-b4e6-dfae7cdd5dcd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:37 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:37.759 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace which is not needed anymore
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.762 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:37 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Dec 13 04:29:37 compute-0 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d0000001b.scope: Consumed 16.209s CPU time.
Dec 13 04:29:37 compute-0 systemd-machined[206767]: Machine qemu-27-instance-0000001b terminated.
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.926 243708 INFO nova.virt.libvirt.driver [-] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Instance destroyed successfully.
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.927 243708 DEBUG nova.objects.instance [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'resources' on Instance uuid bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:29:37 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278635]: [NOTICE]   (278657) : haproxy version is 2.8.14-c23fe91
Dec 13 04:29:37 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278635]: [NOTICE]   (278657) : path to executable is /usr/sbin/haproxy
Dec 13 04:29:37 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278635]: [WARNING]  (278657) : Exiting Master process...
Dec 13 04:29:37 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278635]: [ALERT]    (278657) : Current worker (278659) exited with code 143 (Terminated)
Dec 13 04:29:37 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[278635]: [WARNING]  (278657) : All workers exited. Exiting... (0)
Dec 13 04:29:37 compute-0 systemd[1]: libpod-1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d.scope: Deactivated successfully.
Dec 13 04:29:37 compute-0 podman[279725]: 2025-12-13 04:29:37.941377285 +0000 UTC m=+0.065209603 container died 1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.951 243708 DEBUG nova.virt.libvirt.vif [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:28:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-252007343',display_name='tempest-TransferEncryptedVolumeTest-server-252007343',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-252007343',id=27,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCTMOWY+zxHhuuYbmrEsaJgRE3PSdxqJ15zYyzPDCeLErvpORjNdez33Bk3TG/Gt9LpNKoYFaHiFvQPNsdImPfafvTHH9jNUqYZKtS8UFNsxrTUJ+ntIWYll6LMTTOCBjw==',key_name='tempest-TransferEncryptedVolumeTest-1635619248',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:29:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-i09f1p1b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:29:02Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.951 243708 DEBUG nova.network.os_vif_util [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "address": "fa:16:3e:b9:74:3b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf08ebcd8-fc", "ovs_interfaceid": "f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.952 243708 DEBUG nova.network.os_vif_util [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b9:74:3b,bridge_name='br-int',has_traffic_filtering=True,id=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08ebcd8-fc') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.953 243708 DEBUG os_vif [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:74:3b,bridge_name='br-int',has_traffic_filtering=True,id=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08ebcd8-fc') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.955 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.955 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf08ebcd8-fc, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.957 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.961 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:37 compute-0 nova_compute[243704]: 2025-12-13 04:29:37.965 243708 INFO os_vif [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b9:74:3b,bridge_name='br-int',has_traffic_filtering=True,id=f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf08ebcd8-fc')
Dec 13 04:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d-userdata-shm.mount: Deactivated successfully.
Dec 13 04:29:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-cfda2520433b2288a8da68de18fdf3a11e535aa82824ea949973312903454b5f-merged.mount: Deactivated successfully.
Dec 13 04:29:38 compute-0 podman[279725]: 2025-12-13 04:29:38.000834413 +0000 UTC m=+0.124666761 container cleanup 1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:29:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:29:38 compute-0 systemd[1]: libpod-conmon-1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d.scope: Deactivated successfully.
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.074 243708 DEBUG nova.compute.manager [req-ab4fe140-4a25-4b16-95b0-d144186a9429 req-78401009-863d-4e1b-8357-1b12bced5835 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-vif-unplugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.075 243708 DEBUG oslo_concurrency.lockutils [req-ab4fe140-4a25-4b16-95b0-d144186a9429 req-78401009-863d-4e1b-8357-1b12bced5835 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.075 243708 DEBUG oslo_concurrency.lockutils [req-ab4fe140-4a25-4b16-95b0-d144186a9429 req-78401009-863d-4e1b-8357-1b12bced5835 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.076 243708 DEBUG oslo_concurrency.lockutils [req-ab4fe140-4a25-4b16-95b0-d144186a9429 req-78401009-863d-4e1b-8357-1b12bced5835 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.076 243708 DEBUG nova.compute.manager [req-ab4fe140-4a25-4b16-95b0-d144186a9429 req-78401009-863d-4e1b-8357-1b12bced5835 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] No waiting events found dispatching network-vif-unplugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.076 243708 DEBUG nova.compute.manager [req-ab4fe140-4a25-4b16-95b0-d144186a9429 req-78401009-863d-4e1b-8357-1b12bced5835 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-vif-unplugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:29:38 compute-0 podman[279779]: 2025-12-13 04:29:38.112469808 +0000 UTC m=+0.074528328 container remove 1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.123 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[84899b28-edf5-4379-b69a-be7df4791414]: (4, ('Sat Dec 13 04:29:37 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d)\n1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d\nSat Dec 13 04:29:38 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d)\n1bbb7857f42e925a412f14c960f3c37f0ddf87c2bbbbf715b29a4398620f4f3d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.126 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[8e29a141-b1ab-4440-af8b-7f90c43457a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.129 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.131 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:38 compute-0 kernel: tap2920aa7a-a0: left promiscuous mode
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.134 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.138 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ecf2bb3d-a9d4-401c-af9d-ed7d1fb7facf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.150 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.159 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6e89a02b-667e-4d94-b697-48b3272795e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.161 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[99ad077c-7cf3-4673-955f-697008fc5f22]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.174 243708 INFO nova.virt.libvirt.driver [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Deleting instance files /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_del
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.175 243708 INFO nova.virt.libvirt.driver [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Deletion of /var/lib/nova/instances/bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4_del complete
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.183 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b142fcc6-4b2d-4ae4-b807-1c92da00b879]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 487827, 'reachable_time': 20421, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279798, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.188 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:29:38 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:29:38.188 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[fa3f0989-0753-4a27-a850-ee1598e4fe6c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:29:38 compute-0 systemd[1]: run-netns-ovnmeta\x2d2920aa7a\x2da9cb\x2d45da\x2da971\x2d38a7ffed2fca.mount: Deactivated successfully.
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.244 243708 INFO nova.compute.manager [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Took 0.58 seconds to destroy the instance on the hypervisor.
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.244 243708 DEBUG oslo.service.loopingcall [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.245 243708 DEBUG nova.compute.manager [-] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:29:38 compute-0 nova_compute[243704]: 2025-12-13 04:29:38.245 243708 DEBUG nova.network.neutron [-] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:29:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.020 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.054 243708 DEBUG nova.network.neutron [-] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.090 243708 INFO nova.compute.manager [-] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Took 0.84 seconds to deallocate network for instance.
Dec 13 04:29:39 compute-0 ceph-mon[75071]: pgmap v1853: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.145 243708 DEBUG nova.compute.manager [req-706f0160-7f33-4b2b-94c3-e595c3ee7001 req-fa5c5d55-afa6-41b2-92f1-8d8a6ba14b89 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-vif-deleted-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.589 243708 INFO nova.compute.manager [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Took 0.50 seconds to detach 1 volumes for instance.
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.636 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.636 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.670 243708 DEBUG nova.scheduler.client.report [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Refreshing inventories for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.697 243708 DEBUG nova.scheduler.client.report [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Updating ProviderTree inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.698 243708 DEBUG nova.compute.provider_tree [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.711 243708 DEBUG nova.scheduler.client.report [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Refreshing aggregate associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.730 243708 DEBUG nova.scheduler.client.report [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Refreshing trait associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 04:29:39 compute-0 nova_compute[243704]: 2025-12-13 04:29:39.780 243708 DEBUG oslo_concurrency.processutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:29:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1854: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 15 KiB/s wr, 18 op/s
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.173 243708 DEBUG nova.compute.manager [req-35cbed37-8996-4f90-8afd-bad1ea335519 req-c6959eff-0ab7-4e89-9c95-269dd2029f98 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received event network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.174 243708 DEBUG oslo_concurrency.lockutils [req-35cbed37-8996-4f90-8afd-bad1ea335519 req-c6959eff-0ab7-4e89-9c95-269dd2029f98 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.175 243708 DEBUG oslo_concurrency.lockutils [req-35cbed37-8996-4f90-8afd-bad1ea335519 req-c6959eff-0ab7-4e89-9c95-269dd2029f98 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.175 243708 DEBUG oslo_concurrency.lockutils [req-35cbed37-8996-4f90-8afd-bad1ea335519 req-c6959eff-0ab7-4e89-9c95-269dd2029f98 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.176 243708 DEBUG nova.compute.manager [req-35cbed37-8996-4f90-8afd-bad1ea335519 req-c6959eff-0ab7-4e89-9c95-269dd2029f98 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] No waiting events found dispatching network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.177 243708 WARNING nova.compute.manager [req-35cbed37-8996-4f90-8afd-bad1ea335519 req-c6959eff-0ab7-4e89-9c95-269dd2029f98 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Received unexpected event network-vif-plugged-f08ebcd8-fc01-4e6d-8c3b-3b625fa6a90c for instance with vm_state deleted and task_state None.
Dec 13 04:29:40 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:29:40 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4280738554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.373 243708 DEBUG oslo_concurrency.processutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.383 243708 DEBUG nova.compute.provider_tree [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.411 243708 DEBUG nova.scheduler.client.report [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.440 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.481 243708 INFO nova.scheduler.client.report [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Deleted allocations for instance bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4
Dec 13 04:29:40 compute-0 nova_compute[243704]: 2025-12-13 04:29:40.590 243708 DEBUG oslo_concurrency.lockutils [None req-3eeac8db-9696-4db9-ac9d-a826d13a5ba6 deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:29:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:29:40
Dec 13 04:29:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:29:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:29:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes', '.mgr', 'vms', 'backups', 'images', 'default.rgw.log']
Dec 13 04:29:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:29:41 compute-0 ceph-mon[75071]: pgmap v1854: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 15 KiB/s wr, 18 op/s
Dec 13 04:29:41 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4280738554' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:29:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1855: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 5.6 KiB/s wr, 19 op/s
Dec 13 04:29:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:29:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:29:42 compute-0 nova_compute[243704]: 2025-12-13 04:29:42.958 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:29:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:29:43 compute-0 ceph-mon[75071]: pgmap v1855: 305 pgs: 305 active+clean; 453 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 5.6 KiB/s wr, 19 op/s
Dec 13 04:29:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:29:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302910758' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:29:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:29:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2302910758' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:29:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 370 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 221 KiB/s rd, 3.6 KiB/s wr, 21 op/s
Dec 13 04:29:44 compute-0 nova_compute[243704]: 2025-12-13 04:29:44.024 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2302910758' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:29:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2302910758' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:29:45 compute-0 ceph-mon[75071]: pgmap v1856: 305 pgs: 305 active+clean; 370 MiB data, 677 MiB used, 59 GiB / 60 GiB avail; 221 KiB/s rd, 3.6 KiB/s wr, 21 op/s
Dec 13 04:29:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:29:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4037618881' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:29:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:29:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4037618881' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:29:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1857: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Dec 13 04:29:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4037618881' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:29:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4037618881' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:29:47 compute-0 ceph-mon[75071]: pgmap v1857: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Dec 13 04:29:47 compute-0 nova_compute[243704]: 2025-12-13 04:29:47.962 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Dec 13 04:29:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:49 compute-0 nova_compute[243704]: 2025-12-13 04:29:49.027 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:49 compute-0 ceph-mon[75071]: pgmap v1858: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Dec 13 04:29:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Dec 13 04:29:51 compute-0 podman[279821]: 2025-12-13 04:29:51.015717735 +0000 UTC m=+0.145014094 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Dec 13 04:29:51 compute-0 ceph-mon[75071]: pgmap v1859: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1860: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 596 B/s wr, 19 op/s
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.7219504666699478e-06 of space, bias 1.0, pg target 0.0011165851400009843 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029103102345263854 of space, bias 1.0, pg target 0.8730930703579156 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4744175277961344e-06 of space, bias 1.0, pg target 0.0007423252583388403 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667026483260414 of space, bias 1.0, pg target 0.20001079449781242 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3115272986482904e-06 of space, bias 4.0, pg target 0.0015738327583779486 quantized to 16 (current 16)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:29:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:29:52 compute-0 nova_compute[243704]: 2025-12-13 04:29:52.924 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765600177.9228945, bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:29:52 compute-0 nova_compute[243704]: 2025-12-13 04:29:52.925 243708 INFO nova.compute.manager [-] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] VM Stopped (Lifecycle Event)
Dec 13 04:29:52 compute-0 nova_compute[243704]: 2025-12-13 04:29:52.955 243708 DEBUG nova.compute.manager [None req-f2245e4f-eb6a-44ca-b118-7da75cd8b0f6 - - - - - -] [instance: bfee9bb0-77a7-4ae0-862e-7d1f8fe10bb4] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:29:52 compute-0 nova_compute[243704]: 2025-12-13 04:29:52.964 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:53 compute-0 ceph-mon[75071]: pgmap v1860: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 596 B/s wr, 19 op/s
Dec 13 04:29:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 682 B/s wr, 25 op/s
Dec 13 04:29:54 compute-0 nova_compute[243704]: 2025-12-13 04:29:54.029 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:55 compute-0 ceph-mon[75071]: pgmap v1861: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 682 B/s wr, 25 op/s
Dec 13 04:29:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 22 op/s
Dec 13 04:29:57 compute-0 ceph-mon[75071]: pgmap v1862: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 682 B/s wr, 22 op/s
Dec 13 04:29:57 compute-0 nova_compute[243704]: 2025-12-13 04:29:57.967 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 04:29:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:29:59 compute-0 nova_compute[243704]: 2025-12-13 04:29:59.031 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:29:59 compute-0 ceph-mon[75071]: pgmap v1863: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Dec 13 04:30:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:30:01 compute-0 ceph-mon[75071]: pgmap v1864: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:30:01 compute-0 podman[279847]: 2025-12-13 04:30:01.977004361 +0000 UTC m=+0.098726795 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec 13 04:30:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:30:02 compute-0 nova_compute[243704]: 2025-12-13 04:30:02.971 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:03 compute-0 ceph-mon[75071]: pgmap v1865: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:30:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:30:04 compute-0 nova_compute[243704]: 2025-12-13 04:30:04.035 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:04 compute-0 podman[279866]: 2025-12-13 04:30:04.975538336 +0000 UTC m=+0.113064606 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:30:05 compute-0 ceph-mon[75071]: pgmap v1866: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:30:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 22 KiB/s wr, 5 op/s
Dec 13 04:30:07 compute-0 ceph-mon[75071]: pgmap v1867: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 686 KiB/s rd, 22 KiB/s wr, 5 op/s
Dec 13 04:30:07 compute-0 nova_compute[243704]: 2025-12-13 04:30:07.974 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:30:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:09 compute-0 ovn_controller[145204]: 2025-12-13T04:30:09Z|00258|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Dec 13 04:30:09 compute-0 nova_compute[243704]: 2025-12-13 04:30:09.037 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:09 compute-0 ceph-mon[75071]: pgmap v1868: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:30:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:30:11 compute-0 ceph-mon[75071]: pgmap v1869: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 3.0 KiB/s rd, 22 KiB/s wr, 4 op/s
Dec 13 04:30:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 04:30:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:12 compute-0 nova_compute[243704]: 2025-12-13 04:30:12.977 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:13 compute-0 ceph-mon[75071]: pgmap v1870: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 04:30:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:13 compute-0 sudo[279886]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:30:13 compute-0 sudo[279886]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:13 compute-0 sudo[279886]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:13 compute-0 sudo[279911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:30:13 compute-0 sudo[279911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 04:30:14 compute-0 nova_compute[243704]: 2025-12-13 04:30:14.039 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:14 compute-0 sudo[279911]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:30:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:30:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:30:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:30:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:30:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:30:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:30:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:30:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:30:14 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:30:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:30:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:30:14 compute-0 sudo[279968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:30:14 compute-0 sudo[279968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:14 compute-0 sudo[279968]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:14 compute-0 sudo[279993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:30:14 compute-0 sudo[279993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:15 compute-0 ceph-mon[75071]: pgmap v1871: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Dec 13 04:30:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:30:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:30:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:30:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:30:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:30:15 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:30:15 compute-0 podman[280031]: 2025-12-13 04:30:15.371179044 +0000 UTC m=+0.053765833 container create 7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_swartz, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Dec 13 04:30:15 compute-0 systemd[1]: Started libpod-conmon-7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6.scope.
Dec 13 04:30:15 compute-0 podman[280031]: 2025-12-13 04:30:15.34564616 +0000 UTC m=+0.028232949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:30:15 compute-0 podman[280031]: 2025-12-13 04:30:15.484722461 +0000 UTC m=+0.167309260 container init 7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:15 compute-0 podman[280031]: 2025-12-13 04:30:15.494139738 +0000 UTC m=+0.176726517 container start 7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:30:15 compute-0 podman[280031]: 2025-12-13 04:30:15.498357982 +0000 UTC m=+0.180944771 container attach 7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_swartz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:15 compute-0 mystifying_swartz[280047]: 167 167
Dec 13 04:30:15 compute-0 systemd[1]: libpod-7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6.scope: Deactivated successfully.
Dec 13 04:30:15 compute-0 podman[280031]: 2025-12-13 04:30:15.502508195 +0000 UTC m=+0.185094974 container died 7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_swartz, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:30:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3c0cbe359ef90e99defcd8f4c0df27263bd8c52c967e16d9418407266afc5af-merged.mount: Deactivated successfully.
Dec 13 04:30:15 compute-0 podman[280031]: 2025-12-13 04:30:15.569200208 +0000 UTC m=+0.251786957 container remove 7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_swartz, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:15 compute-0 systemd[1]: libpod-conmon-7cb16b5ec6a1d085404a512ca731fc66000d76b661e49d56076d6b9805074bd6.scope: Deactivated successfully.
Dec 13 04:30:15 compute-0 podman[280071]: 2025-12-13 04:30:15.739323873 +0000 UTC m=+0.026884152 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:15 compute-0 podman[280071]: 2025-12-13 04:30:15.838750397 +0000 UTC m=+0.126310696 container create d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:30:15 compute-0 systemd[1]: Started libpod-conmon-d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f.scope.
Dec 13 04:30:15 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf75da2ca0971e6ee4342f0598e2f44c9d7bb261810fec4a660276f524c69e9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf75da2ca0971e6ee4342f0598e2f44c9d7bb261810fec4a660276f524c69e9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf75da2ca0971e6ee4342f0598e2f44c9d7bb261810fec4a660276f524c69e9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf75da2ca0971e6ee4342f0598e2f44c9d7bb261810fec4a660276f524c69e9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bf75da2ca0971e6ee4342f0598e2f44c9d7bb261810fec4a660276f524c69e9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:15 compute-0 podman[280071]: 2025-12-13 04:30:15.956612411 +0000 UTC m=+0.244172680 container init d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bell, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:15 compute-0 podman[280071]: 2025-12-13 04:30:15.974439745 +0000 UTC m=+0.261999994 container start d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:30:15 compute-0 podman[280071]: 2025-12-13 04:30:15.978505535 +0000 UTC m=+0.266065794 container attach d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bell, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:30:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:16 compute-0 sweet_bell[280087]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:30:16 compute-0 sweet_bell[280087]: --> All data devices are unavailable
Dec 13 04:30:16 compute-0 systemd[1]: libpod-d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f.scope: Deactivated successfully.
Dec 13 04:30:16 compute-0 podman[280071]: 2025-12-13 04:30:16.5001915 +0000 UTC m=+0.787751799 container died d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:30:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bf75da2ca0971e6ee4342f0598e2f44c9d7bb261810fec4a660276f524c69e9-merged.mount: Deactivated successfully.
Dec 13 04:30:16 compute-0 podman[280071]: 2025-12-13 04:30:16.566306997 +0000 UTC m=+0.853867296 container remove d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_bell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:30:16 compute-0 systemd[1]: libpod-conmon-d3e84aa23bb33be8007be2256d9f09b6daa01391c81816c55105a9c656db989f.scope: Deactivated successfully.
Dec 13 04:30:16 compute-0 sudo[279993]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:16 compute-0 sudo[280121]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:30:16 compute-0 sudo[280121]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:16 compute-0 sudo[280121]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:16 compute-0 sudo[280146]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:30:16 compute-0 sudo[280146]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:17 compute-0 podman[280183]: 2025-12-13 04:30:17.069744165 +0000 UTC m=+0.042950739 container create aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:30:17 compute-0 systemd[1]: Started libpod-conmon-aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7.scope.
Dec 13 04:30:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:30:17 compute-0 podman[280183]: 2025-12-13 04:30:17.048016504 +0000 UTC m=+0.021223098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:17 compute-0 podman[280183]: 2025-12-13 04:30:17.158779256 +0000 UTC m=+0.131985910 container init aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:30:17 compute-0 podman[280183]: 2025-12-13 04:30:17.165729434 +0000 UTC m=+0.138935998 container start aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:30:17 compute-0 bold_shirley[280199]: 167 167
Dec 13 04:30:17 compute-0 systemd[1]: libpod-aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7.scope: Deactivated successfully.
Dec 13 04:30:17 compute-0 podman[280183]: 2025-12-13 04:30:17.170161085 +0000 UTC m=+0.143367689 container attach aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:30:17 compute-0 podman[280183]: 2025-12-13 04:30:17.170810172 +0000 UTC m=+0.144016736 container died aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Dec 13 04:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-166cdecce6fcd6ef87c0df91088f3d17c19fff652206a89e091ebed5777d2d53-merged.mount: Deactivated successfully.
Dec 13 04:30:17 compute-0 podman[280183]: 2025-12-13 04:30:17.221620004 +0000 UTC m=+0.194826598 container remove aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_shirley, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:30:17 compute-0 systemd[1]: libpod-conmon-aa09e0f5cdfefd58f17120cf2a10833b78599b92d59d903c830b69701d431fc7.scope: Deactivated successfully.
Dec 13 04:30:17 compute-0 ceph-mon[75071]: pgmap v1872: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:17 compute-0 podman[280223]: 2025-12-13 04:30:17.408073154 +0000 UTC m=+0.054478973 container create a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wozniak, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:30:17 compute-0 systemd[1]: Started libpod-conmon-a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55.scope.
Dec 13 04:30:17 compute-0 podman[280223]: 2025-12-13 04:30:17.380271428 +0000 UTC m=+0.026677297 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:17 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe93c00e4ac61fdceb85cee1b7bafb5aee863123bdd0b791c442a7e1a8f8c8d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe93c00e4ac61fdceb85cee1b7bafb5aee863123bdd0b791c442a7e1a8f8c8d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe93c00e4ac61fdceb85cee1b7bafb5aee863123bdd0b791c442a7e1a8f8c8d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbe93c00e4ac61fdceb85cee1b7bafb5aee863123bdd0b791c442a7e1a8f8c8d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:17 compute-0 podman[280223]: 2025-12-13 04:30:17.511525836 +0000 UTC m=+0.157931705 container init a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wozniak, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Dec 13 04:30:17 compute-0 podman[280223]: 2025-12-13 04:30:17.522443063 +0000 UTC m=+0.168848852 container start a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Dec 13 04:30:17 compute-0 podman[280223]: 2025-12-13 04:30:17.525917088 +0000 UTC m=+0.172322987 container attach a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wozniak, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Dec 13 04:30:17 compute-0 determined_wozniak[280240]: {
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:     "0": [
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:         {
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "devices": [
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "/dev/loop3"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             ],
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_name": "ceph_lv0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_size": "21470642176",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "name": "ceph_lv0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "tags": {
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cluster_name": "ceph",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.crush_device_class": "",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.encrypted": "0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.objectstore": "bluestore",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osd_id": "0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.type": "block",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.vdo": "0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.with_tpm": "0"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             },
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "type": "block",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "vg_name": "ceph_vg0"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:         }
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:     ],
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:     "1": [
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:         {
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "devices": [
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "/dev/loop4"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             ],
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_name": "ceph_lv1",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_size": "21470642176",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "name": "ceph_lv1",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "tags": {
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cluster_name": "ceph",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.crush_device_class": "",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.encrypted": "0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.objectstore": "bluestore",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osd_id": "1",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.type": "block",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.vdo": "0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.with_tpm": "0"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             },
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "type": "block",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "vg_name": "ceph_vg1"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:         }
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:     ],
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:     "2": [
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:         {
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "devices": [
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "/dev/loop5"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             ],
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_name": "ceph_lv2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_size": "21470642176",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "name": "ceph_lv2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "tags": {
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.cluster_name": "ceph",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.crush_device_class": "",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.encrypted": "0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.objectstore": "bluestore",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osd_id": "2",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.type": "block",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.vdo": "0",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:                 "ceph.with_tpm": "0"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             },
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "type": "block",
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:             "vg_name": "ceph_vg2"
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:         }
Dec 13 04:30:17 compute-0 determined_wozniak[280240]:     ]
Dec 13 04:30:17 compute-0 determined_wozniak[280240]: }
Dec 13 04:30:17 compute-0 systemd[1]: libpod-a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55.scope: Deactivated successfully.
Dec 13 04:30:17 compute-0 podman[280223]: 2025-12-13 04:30:17.844745326 +0000 UTC m=+0.491151115 container died a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbe93c00e4ac61fdceb85cee1b7bafb5aee863123bdd0b791c442a7e1a8f8c8d-merged.mount: Deactivated successfully.
Dec 13 04:30:17 compute-0 podman[280223]: 2025-12-13 04:30:17.892206136 +0000 UTC m=+0.538611925 container remove a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:30:17 compute-0 systemd[1]: libpod-conmon-a15ed54040d1b16c11045f630045e0259fe9e181ed9510869c3bcbe643311f55.scope: Deactivated successfully.
Dec 13 04:30:17 compute-0 sudo[280146]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:17 compute-0 nova_compute[243704]: 2025-12-13 04:30:17.979 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:17 compute-0 sudo[280260]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:30:18 compute-0 sudo[280260]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:18 compute-0 sudo[280260]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:18 compute-0 sudo[280285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:30:18 compute-0 sudo[280285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:18 compute-0 podman[280322]: 2025-12-13 04:30:18.349976542 +0000 UTC m=+0.048359426 container create 7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:30:18 compute-0 systemd[1]: Started libpod-conmon-7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b.scope.
Dec 13 04:30:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:30:18 compute-0 podman[280322]: 2025-12-13 04:30:18.413375126 +0000 UTC m=+0.111757970 container init 7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_margulis, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Dec 13 04:30:18 compute-0 podman[280322]: 2025-12-13 04:30:18.419812651 +0000 UTC m=+0.118195505 container start 7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:30:18 compute-0 podman[280322]: 2025-12-13 04:30:18.327896942 +0000 UTC m=+0.026279866 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:18 compute-0 podman[280322]: 2025-12-13 04:30:18.423431199 +0000 UTC m=+0.121814063 container attach 7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_margulis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:30:18 compute-0 gracious_margulis[280338]: 167 167
Dec 13 04:30:18 compute-0 systemd[1]: libpod-7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b.scope: Deactivated successfully.
Dec 13 04:30:18 compute-0 conmon[280338]: conmon 7f12bc703d0a6350c954 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b.scope/container/memory.events
Dec 13 04:30:18 compute-0 podman[280322]: 2025-12-13 04:30:18.426214755 +0000 UTC m=+0.124597659 container died 7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-48d7e030b3a36c3451247293080eb633e4b5fae1ab4e813da1b7be2bce431569-merged.mount: Deactivated successfully.
Dec 13 04:30:18 compute-0 podman[280322]: 2025-12-13 04:30:18.460593839 +0000 UTC m=+0.158976683 container remove 7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_margulis, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:18 compute-0 systemd[1]: libpod-conmon-7f12bc703d0a6350c95494c39bc1eca00d6163f4a86baa16a363c1fb30cbaf3b.scope: Deactivated successfully.
Dec 13 04:30:18 compute-0 podman[280361]: 2025-12-13 04:30:18.61069835 +0000 UTC m=+0.039971517 container create 2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pare, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:18 compute-0 systemd[1]: Started libpod-conmon-2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8.scope.
Dec 13 04:30:18 compute-0 podman[280361]: 2025-12-13 04:30:18.592433164 +0000 UTC m=+0.021706351 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:30:18 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f909ce2687283c4c1aad34775922f946e1f06cdd47af9af84f9b9996c1c2026/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f909ce2687283c4c1aad34775922f946e1f06cdd47af9af84f9b9996c1c2026/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f909ce2687283c4c1aad34775922f946e1f06cdd47af9af84f9b9996c1c2026/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f909ce2687283c4c1aad34775922f946e1f06cdd47af9af84f9b9996c1c2026/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:18 compute-0 podman[280361]: 2025-12-13 04:30:18.733621762 +0000 UTC m=+0.162894939 container init 2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pare, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Dec 13 04:30:18 compute-0 podman[280361]: 2025-12-13 04:30:18.741861647 +0000 UTC m=+0.171134834 container start 2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pare, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:30:18 compute-0 podman[280361]: 2025-12-13 04:30:18.745248868 +0000 UTC m=+0.174522045 container attach 2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.065 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.349 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.350 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:19 compute-0 ceph-mon[75071]: pgmap v1873: 305 pgs: 305 active+clean; 271 MiB data, 599 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.368 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.444 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.445 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.455 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.456 243708 INFO nova.compute.claims [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:30:19 compute-0 lvm[280456]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:30:19 compute-0 lvm[280456]: VG ceph_vg1 finished
Dec 13 04:30:19 compute-0 lvm[280455]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:30:19 compute-0 lvm[280455]: VG ceph_vg0 finished
Dec 13 04:30:19 compute-0 lvm[280458]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:30:19 compute-0 lvm[280458]: VG ceph_vg2 finished
Dec 13 04:30:19 compute-0 nova_compute[243704]: 2025-12-13 04:30:19.597 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:19 compute-0 jovial_pare[280377]: {}
Dec 13 04:30:19 compute-0 systemd[1]: libpod-2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8.scope: Deactivated successfully.
Dec 13 04:30:19 compute-0 systemd[1]: libpod-2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8.scope: Consumed 1.484s CPU time.
Dec 13 04:30:19 compute-0 podman[280361]: 2025-12-13 04:30:19.668211053 +0000 UTC m=+1.097484230 container died 2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:30:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-9f909ce2687283c4c1aad34775922f946e1f06cdd47af9af84f9b9996c1c2026-merged.mount: Deactivated successfully.
Dec 13 04:30:19 compute-0 podman[280361]: 2025-12-13 04:30:19.716427253 +0000 UTC m=+1.145700420 container remove 2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_pare, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Dec 13 04:30:19 compute-0 systemd[1]: libpod-conmon-2135911397ae3a4f1089ca67c88953a43e07ccb1ee1ebfd4fd252eaa5e3164c8.scope: Deactivated successfully.
Dec 13 04:30:19 compute-0 sudo[280285]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:30:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:30:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:30:19 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:30:19 compute-0 sudo[280492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:30:19 compute-0 sudo[280492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:30:19 compute-0 sudo[280492]: pam_unix(sudo:session): session closed for user root
Dec 13 04:30:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:30:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639160129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.118 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.127 243708 DEBUG nova.compute.provider_tree [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.142 243708 DEBUG nova.scheduler.client.report [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.161 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.163 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.210 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.211 243708 DEBUG nova.network.neutron [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.227 243708 INFO nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.248 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.286 243708 INFO nova.virt.block_device [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Booting with volume 7c5d52ba-5662-409c-ad98-9e14ce995974 at /dev/vda
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.441 243708 DEBUG os_brick.utils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.445 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.457 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.012s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.457 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[b36d6cca-6c38-4a6c-86de-9443e288a73f]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.459 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.468 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.468 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[1b335521-13aa-4408-a65e-e5e64f501a84]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.470 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.477 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.477 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[9e106eef-75a9-47b4-b0c1-2f37698fef50]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.478 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[cb39dfb0-4e81-46bf-b984-135935984620]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.478 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.503 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "nvme version" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.507 243708 DEBUG os_brick.initiator.connectors.lightos [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.507 243708 DEBUG os_brick.initiator.connectors.lightos [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.507 243708 DEBUG os_brick.initiator.connectors.lightos [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.508 243708 DEBUG os_brick.utils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] <== get_connector_properties: return (66ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.508 243708 DEBUG nova.virt.block_device [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Updating existing volume attachment record: 0eefeebf-ca71-4594-a504-28bd4445553a _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:30:20 compute-0 nova_compute[243704]: 2025-12-13 04:30:20.553 243708 DEBUG nova.policy [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'deba56fa45214f28a3aab4d031dc4155', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43c4864e9f844459a882a9e3d0fe477b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:30:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:30:20 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:30:20 compute-0 ceph-mon[75071]: pgmap v1874: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:20 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/639160129' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:21 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:30:21 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2202546735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:30:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:21.429 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:30:21 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:21.430 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.466 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.557 243708 DEBUG nova.network.neutron [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Successfully created port: 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.677 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.678 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.679 243708 INFO nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Creating image(s)
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.679 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.679 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Ensure instance console log exists: /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.680 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.680 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:21 compute-0 nova_compute[243704]: 2025-12-13 04:30:21.680 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:21 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2202546735' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:30:22 compute-0 podman[280527]: 2025-12-13 04:30:22.032292737 +0000 UTC m=+0.167319469 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Dec 13 04:30:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.274 243708 DEBUG nova.network.neutron [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Successfully updated port: 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.286 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.286 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquired lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.286 243708 DEBUG nova.network.neutron [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.368 243708 DEBUG nova.compute.manager [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-changed-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.369 243708 DEBUG nova.compute.manager [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Refreshing instance network info cache due to event network-changed-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.369 243708 DEBUG oslo_concurrency.lockutils [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:30:22 compute-0 ceph-mon[75071]: pgmap v1875: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.875 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.898 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.899 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.899 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.899 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.900 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.934 243708 DEBUG nova.network.neutron [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:30:22 compute-0 nova_compute[243704]: 2025-12-13 04:30:22.982 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:30:23 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2178633164' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.440 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.685 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.686 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4220MB free_disk=59.98805799335241GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.687 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.687 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.754 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 9b71e21a-179e-43a0-99ca-714940bc664f actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.754 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.754 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:30:23 compute-0 nova_compute[243704]: 2025-12-13 04:30:23.794 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:23 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2178633164' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.066 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.312 243708 DEBUG nova.network.neutron [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Updating instance_info_cache with network_info: [{"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.342 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Releasing lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.342 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Instance network_info: |[{"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.343 243708 DEBUG oslo_concurrency.lockutils [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.343 243708 DEBUG nova.network.neutron [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Refreshing network info cache for port 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.347 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Start _get_guest_xml network_info=[{"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7c5d52ba-5662-409c-ad98-9e14ce995974', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9b71e21a-179e-43a0-99ca-714940bc664f', 'attached_at': '', 'detached_at': '', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'serial': '7c5d52ba-5662-409c-ad98-9e14ce995974'}, 'disk_bus': 'virtio', 'attachment_id': '0eefeebf-ca71-4594-a504-28bd4445553a', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.353 243708 WARNING nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.359 243708 DEBUG nova.virt.libvirt.host [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.360 243708 DEBUG nova.virt.libvirt.host [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.363 243708 DEBUG nova.virt.libvirt.host [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.363 243708 DEBUG nova.virt.libvirt.host [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.364 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.364 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.364 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.365 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.365 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.365 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.365 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.366 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.366 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.366 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.366 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.366 243708 DEBUG nova.virt.hardware [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:30:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:30:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3919704697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.397 243708 DEBUG nova.storage.rbd_utils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image 9b71e21a-179e-43a0-99ca-714940bc664f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.401 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.430 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.636s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.436 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.447 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.466 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.466 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:24 compute-0 ceph-mon[75071]: pgmap v1876: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3919704697' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:30:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893872401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:30:24 compute-0 nova_compute[243704]: 2025-12-13 04:30:24.954 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.136 243708 DEBUG os_brick.encryptors [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Using volume encryption metadata '{'encryption_key_id': '3e25cc20-d913-417b-be4e-839b27a4502b', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7c5d52ba-5662-409c-ad98-9e14ce995974', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '9b71e21a-179e-43a0-99ca-714940bc664f', 'attached_at': '', 'detached_at': '', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.140 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.161 243708 DEBUG barbicanclient.v1.secrets [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/3e25cc20-d913-417b-be4e-839b27a4502b get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.161 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.195 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.196 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.216 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.216 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.247 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.248 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.282 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.283 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.305 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.306 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.329 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.330 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.357 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.358 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.390 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.391 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.423 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.424 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.443 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.444 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.469 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.470 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.507 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.508 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.537 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.538 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.576 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.578 243708 INFO barbicanclient.base [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/3e25cc20-d913-417b-be4e-839b27a4502b
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.600 243708 DEBUG barbicanclient.client [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.602 243708 DEBUG nova.virt.libvirt.host [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <volume>7c5d52ba-5662-409c-ad98-9e14ce995974</volume>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:30:25 compute-0 nova_compute[243704]: </secret>
Dec 13 04:30:25 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.634 243708 DEBUG nova.virt.libvirt.vif [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:30:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1367087886',display_name='tempest-TransferEncryptedVolumeTest-server-1367087886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1367087886',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMoL15I/Bj1iCzujpy2fbJQaP8acoTe69CnGcFY4LdTIJ3D0h1Pc4a7CTYA01uL0z/8kDMYoWefYR6Gi1xv52wWtzjltq0ikSXKbeZ2P8eIjJy+bgEJfSTXKzCSQo26mw==',key_name='tempest-TransferEncryptedVolumeTest-760155197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-3fi7tfb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:30:20Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=9b71e21a-179e-43a0-99ca-714940bc664f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.635 243708 DEBUG nova.network.os_vif_util [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.637 243708 DEBUG nova.network.os_vif_util [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:39:9b,bridge_name='br-int',has_traffic_filtering=True,id=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap251e3d9e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.641 243708 DEBUG nova.objects.instance [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b71e21a-179e-43a0-99ca-714940bc664f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.654 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <uuid>9b71e21a-179e-43a0-99ca-714940bc664f</uuid>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <name>instance-0000001c</name>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-1367087886</nova:name>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:30:24</nova:creationTime>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:user uuid="deba56fa45214f28a3aab4d031dc4155">tempest-TransferEncryptedVolumeTest-1412293480-project-member</nova:user>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:project uuid="43c4864e9f844459a882a9e3d0fe477b">tempest-TransferEncryptedVolumeTest-1412293480</nova:project>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <nova:port uuid="251e3d9e-516a-4092-8b16-bcd7a5cb8ae6">
Dec 13 04:30:25 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <system>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <entry name="serial">9b71e21a-179e-43a0-99ca-714940bc664f</entry>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <entry name="uuid">9b71e21a-179e-43a0-99ca-714940bc664f</entry>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </system>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <os>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </os>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <features>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </features>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/9b71e21a-179e-43a0-99ca-714940bc664f_disk.config">
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </source>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-7c5d52ba-5662-409c-ad98-9e14ce995974">
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </source>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <serial>7c5d52ba-5662-409c-ad98-9e14ce995974</serial>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <encryption format="luks">
Dec 13 04:30:25 compute-0 nova_compute[243704]:         <secret type="passphrase" uuid="9793b8fe-1064-4677-b3e0-f441e6ec58af"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       </encryption>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:77:39:9b"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <target dev="tap251e3d9e-51"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/console.log" append="off"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <video>
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </video>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:30:25 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:30:25 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:30:25 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:30:25 compute-0 nova_compute[243704]: </domain>
Dec 13 04:30:25 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.655 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Preparing to wait for external event network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.656 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.656 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.657 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.658 243708 DEBUG nova.virt.libvirt.vif [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:30:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1367087886',display_name='tempest-TransferEncryptedVolumeTest-server-1367087886',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1367087886',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMoL15I/Bj1iCzujpy2fbJQaP8acoTe69CnGcFY4LdTIJ3D0h1Pc4a7CTYA01uL0z/8kDMYoWefYR6Gi1xv52wWtzjltq0ikSXKbeZ2P8eIjJy+bgEJfSTXKzCSQo26mw==',key_name='tempest-TransferEncryptedVolumeTest-760155197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-3fi7tfb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:30:20Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=9b71e21a-179e-43a0-99ca-714940bc664f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.658 243708 DEBUG nova.network.os_vif_util [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.659 243708 DEBUG nova.network.os_vif_util [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:77:39:9b,bridge_name='br-int',has_traffic_filtering=True,id=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap251e3d9e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.660 243708 DEBUG os_vif [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:39:9b,bridge_name='br-int',has_traffic_filtering=True,id=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap251e3d9e-51') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.662 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.663 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.663 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.668 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.669 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap251e3d9e-51, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.669 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap251e3d9e-51, col_values=(('external_ids', {'iface-id': '251e3d9e-516a-4092-8b16-bcd7a5cb8ae6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:77:39:9b', 'vm-uuid': '9b71e21a-179e-43a0-99ca-714940bc664f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.672 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.675 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:30:25 compute-0 NetworkManager[48899]: <info>  [1765600225.6777] manager: (tap251e3d9e-51): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/140)
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.683 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.685 243708 INFO os_vif [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:77:39:9b,bridge_name='br-int',has_traffic_filtering=True,id=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap251e3d9e-51')
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.745 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.746 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.747 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No VIF found with MAC fa:16:3e:77:39:9b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.748 243708 INFO nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Using config drive
Dec 13 04:30:25 compute-0 nova_compute[243704]: 2025-12-13 04:30:25.783 243708 DEBUG nova.storage.rbd_utils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image 9b71e21a-179e-43a0-99ca-714940bc664f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:30:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3893872401' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:30:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.139 243708 DEBUG nova.network.neutron [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Updated VIF entry in instance network info cache for port 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.140 243708 DEBUG nova.network.neutron [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Updating instance_info_cache with network_info: [{"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.143 243708 INFO nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Creating config drive at /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/disk.config
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.147 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjeix_d_n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.170 243708 DEBUG oslo_concurrency.lockutils [req-2ddae142-ea32-4f74-92a8-6daffadec4af req-8c02ed69-b415-4c72-80e1-42f648d34cdc 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.277 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjeix_d_n" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.304 243708 DEBUG nova.storage.rbd_utils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image 9b71e21a-179e-43a0-99ca-714940bc664f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.308 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/disk.config 9b71e21a-179e-43a0-99ca-714940bc664f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.467 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.469 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.469 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.488 243708 DEBUG oslo_concurrency.processutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/disk.config 9b71e21a-179e-43a0-99ca-714940bc664f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.489 243708 INFO nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Deleting local config drive /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f/disk.config because it was imported into RBD.
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.504 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.505 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.507 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:26 compute-0 kernel: tap251e3d9e-51: entered promiscuous mode
Dec 13 04:30:26 compute-0 NetworkManager[48899]: <info>  [1765600226.5792] manager: (tap251e3d9e-51): new Tun device (/org/freedesktop/NetworkManager/Devices/141)
Dec 13 04:30:26 compute-0 ovn_controller[145204]: 2025-12-13T04:30:26Z|00259|binding|INFO|Claiming lport 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 for this chassis.
Dec 13 04:30:26 compute-0 ovn_controller[145204]: 2025-12-13T04:30:26Z|00260|binding|INFO|251e3d9e-516a-4092-8b16-bcd7a5cb8ae6: Claiming fa:16:3e:77:39:9b 10.100.0.14
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.577 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.603 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:39:9b 10.100.0.14'], port_security=['fa:16:3e:77:39:9b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9b71e21a-179e-43a0-99ca-714940bc664f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c965583c-d5da-4d08-bde9-d6826733374f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.605 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca bound to our chassis
Dec 13 04:30:26 compute-0 ovn_controller[145204]: 2025-12-13T04:30:26Z|00261|binding|INFO|Setting lport 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 ovn-installed in OVS
Dec 13 04:30:26 compute-0 ovn_controller[145204]: 2025-12-13T04:30:26Z|00262|binding|INFO|Setting lport 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 up in Southbound
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.608 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.611 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.616 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:26 compute-0 systemd-machined[206767]: New machine qemu-28-instance-0000001c.
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.632 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bd5bac0b-8564-4d1b-98ae-09f307412293]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 systemd-udevd[280714]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.635 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2920aa7a-a1 in ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.640 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2920aa7a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.641 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[00a5ea40-d4cf-4d73-aa96-e11577cbfe0b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.642 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d89af693-0c5f-489f-a1bc-952c69c3033e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 systemd[1]: Started Virtual Machine qemu-28-instance-0000001c.
Dec 13 04:30:26 compute-0 NetworkManager[48899]: <info>  [1765600226.6630] device (tap251e3d9e-51): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:30:26 compute-0 NetworkManager[48899]: <info>  [1765600226.6646] device (tap251e3d9e-51): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.664 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[b342097b-3d10-458f-b30d-840c2da4b960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.696 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6a56ebaa-5fa0-404f-9117-3e4d92f67730]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.743 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[492f64a9-e7f8-4b97-bf79-493e17ee939a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.751 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[982e8077-aa55-4d08-969b-2c7338f59ef9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 NetworkManager[48899]: <info>  [1765600226.7545] manager: (tap2920aa7a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/142)
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.827 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[8101e935-f10f-470b-a1e8-f1074ba81684]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.830 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a40924-a6df-4701-a68a-94595291c10e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ceph-mon[75071]: pgmap v1877: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:26 compute-0 NetworkManager[48899]: <info>  [1765600226.8651] device (tap2920aa7a-a0): carrier: link connected
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.871 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[30d6a3db-cb44-494c-99cd-404430f769f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 nova_compute[243704]: 2025-12-13 04:30:26.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.887 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f85e0cd9-0075-467e-99e4-01a11d40ffa4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496554, 'reachable_time': 29191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 280746, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.901 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c4fd092c-c721-44ea-96cd-ad1962771b1d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:807b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 496554, 'tstamp': 496554}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 280747, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.920 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[56f90374-5409-4fad-8f3e-b837308088ac]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496554, 'reachable_time': 29191, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 280748, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:26 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:26.956 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[fdfb624a-57d7-4f34-9bf0-9f4e8a660164]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.046 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3616d334-e211-49cd-aeb4-e8de3b478104]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.049 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.050 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.051 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2920aa7a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.055 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:27 compute-0 NetworkManager[48899]: <info>  [1765600227.0567] manager: (tap2920aa7a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Dec 13 04:30:27 compute-0 kernel: tap2920aa7a-a0: entered promiscuous mode
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.059 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.066 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2920aa7a-a0, col_values=(('external_ids', {'iface-id': 'ccd83819-bc00-4ecd-ab1d-315a75379aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:27 compute-0 ovn_controller[145204]: 2025-12-13T04:30:27Z|00263|binding|INFO|Releasing lport ccd83819-bc00-4ecd-ab1d-315a75379aaa from this chassis (sb_readonly=0)
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.070 243708 DEBUG nova.compute.manager [req-9f1e19e7-98c7-4746-9916-53cd4eb64f71 req-10a7cccb-43e4-4a48-9911-086b0c5ad8de 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.072 243708 DEBUG oslo_concurrency.lockutils [req-9f1e19e7-98c7-4746-9916-53cd4eb64f71 req-10a7cccb-43e4-4a48-9911-086b0c5ad8de 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.072 243708 DEBUG oslo_concurrency.lockutils [req-9f1e19e7-98c7-4746-9916-53cd4eb64f71 req-10a7cccb-43e4-4a48-9911-086b0c5ad8de 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.073 243708 DEBUG oslo_concurrency.lockutils [req-9f1e19e7-98c7-4746-9916-53cd4eb64f71 req-10a7cccb-43e4-4a48-9911-086b0c5ad8de 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.074 243708 DEBUG nova.compute.manager [req-9f1e19e7-98c7-4746-9916-53cd4eb64f71 req-10a7cccb-43e4-4a48-9911-086b0c5ad8de 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Processing event network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.075 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.075 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.076 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[bf737e8d-c894-496a-8cab-ddab48b39325]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.077 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:30:27 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:27.079 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'env', 'PROCESS_TAG=haproxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2920aa7a-a9cb-45da-a971-38a7ffed2fca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:30:27 compute-0 nova_compute[243704]: 2025-12-13 04:30:27.095 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:27 compute-0 podman[280780]: 2025-12-13 04:30:27.541425111 +0000 UTC m=+0.079736469 container create 147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 04:30:27 compute-0 systemd[1]: Started libpod-conmon-147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b.scope.
Dec 13 04:30:27 compute-0 podman[280780]: 2025-12-13 04:30:27.500343643 +0000 UTC m=+0.038655061 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:30:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:30:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f81cd10d1eccdbf1c4fb550c6d29f3e94efb00701a30b653a8205fc03471699/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:30:27 compute-0 podman[280780]: 2025-12-13 04:30:27.646432135 +0000 UTC m=+0.184743553 container init 147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec 13 04:30:27 compute-0 podman[280780]: 2025-12-13 04:30:27.657317831 +0000 UTC m=+0.195629199 container start 147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec 13 04:30:27 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[280795]: [NOTICE]   (280835) : New worker (280837) forked
Dec 13 04:30:27 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[280795]: [NOTICE]   (280835) : Loading success.
Dec 13 04:30:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:28 compute-0 nova_compute[243704]: 2025-12-13 04:30:28.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.102 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:29 compute-0 ceph-mon[75071]: pgmap v1878: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 9.4 MiB/s wr, 44 op/s
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.182 243708 DEBUG nova.compute.manager [req-622c6d7b-86db-4b4c-87a5-f168986b832d req-86613867-00ac-4d10-a10b-282b8942c933 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.182 243708 DEBUG oslo_concurrency.lockutils [req-622c6d7b-86db-4b4c-87a5-f168986b832d req-86613867-00ac-4d10-a10b-282b8942c933 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.183 243708 DEBUG oslo_concurrency.lockutils [req-622c6d7b-86db-4b4c-87a5-f168986b832d req-86613867-00ac-4d10-a10b-282b8942c933 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.183 243708 DEBUG oslo_concurrency.lockutils [req-622c6d7b-86db-4b4c-87a5-f168986b832d req-86613867-00ac-4d10-a10b-282b8942c933 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.184 243708 DEBUG nova.compute.manager [req-622c6d7b-86db-4b4c-87a5-f168986b832d req-86613867-00ac-4d10-a10b-282b8942c933 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] No waiting events found dispatching network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.184 243708 WARNING nova.compute.manager [req-622c6d7b-86db-4b4c-87a5-f168986b832d req-86613867-00ac-4d10-a10b-282b8942c933 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received unexpected event network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 for instance with vm_state building and task_state spawning.
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.932 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600229.9318726, 9b71e21a-179e-43a0-99ca-714940bc664f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.933 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] VM Started (Lifecycle Event)
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.937 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.944 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.949 243708 INFO nova.virt.libvirt.driver [-] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Instance spawned successfully.
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.949 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.965 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.977 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.984 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.985 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.986 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.987 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.987 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.988 243708 DEBUG nova.virt.libvirt.driver [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.994 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.995 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600229.9322224, 9b71e21a-179e-43a0-99ca-714940bc664f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:30:29 compute-0 nova_compute[243704]: 2025-12-13 04:30:29.995 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] VM Paused (Lifecycle Event)
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.015 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.023 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600229.9424136, 9b71e21a-179e-43a0-99ca-714940bc664f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.024 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] VM Resumed (Lifecycle Event)
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.038 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:30:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 9.4 MiB/s wr, 54 op/s
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.043 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.058 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.318 243708 INFO nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Took 8.64 seconds to spawn the instance on the hypervisor.
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.319 243708 DEBUG nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.419 243708 INFO nova.compute.manager [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Took 11.00 seconds to build instance.
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.437 243708 DEBUG oslo_concurrency.lockutils [None req-ec1a8b6a-517a-4e62-9619-5faa477298dc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.674 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:30 compute-0 nova_compute[243704]: 2025-12-13 04:30:30.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:31 compute-0 ceph-mon[75071]: pgmap v1879: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 9.4 MiB/s wr, 54 op/s
Dec 13 04:30:31 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:31.434 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 04:30:32 compute-0 podman[280853]: 2025-12-13 04:30:32.963432884 +0000 UTC m=+0.086501641 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent)
Dec 13 04:30:33 compute-0 ceph-mon[75071]: pgmap v1880: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 7.2 KiB/s rd, 12 KiB/s wr, 9 op/s
Dec 13 04:30:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:33 compute-0 nova_compute[243704]: 2025-12-13 04:30:33.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:33 compute-0 nova_compute[243704]: 2025-12-13 04:30:33.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:30:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Dec 13 04:30:34 compute-0 nova_compute[243704]: 2025-12-13 04:30:34.105 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:35.107 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:35.107 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:35.108 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:35 compute-0 ceph-mon[75071]: pgmap v1881: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 71 op/s
Dec 13 04:30:35 compute-0 nova_compute[243704]: 2025-12-13 04:30:35.678 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:35 compute-0 podman[280873]: 2025-12-13 04:30:35.949929243 +0000 UTC m=+0.081027494 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:30:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:30:36 compute-0 nova_compute[243704]: 2025-12-13 04:30:36.873 243708 DEBUG nova.compute.manager [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-changed-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:30:36 compute-0 nova_compute[243704]: 2025-12-13 04:30:36.874 243708 DEBUG nova.compute.manager [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Refreshing instance network info cache due to event network-changed-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:30:36 compute-0 nova_compute[243704]: 2025-12-13 04:30:36.875 243708 DEBUG oslo_concurrency.lockutils [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:30:36 compute-0 nova_compute[243704]: 2025-12-13 04:30:36.876 243708 DEBUG oslo_concurrency.lockutils [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:30:36 compute-0 nova_compute[243704]: 2025-12-13 04:30:36.876 243708 DEBUG nova.network.neutron [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Refreshing network info cache for port 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:30:37 compute-0 ceph-mon[75071]: pgmap v1882: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:30:37 compute-0 nova_compute[243704]: 2025-12-13 04:30:37.872 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:30:38 compute-0 nova_compute[243704]: 2025-12-13 04:30:38.039 243708 DEBUG nova.network.neutron [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Updated VIF entry in instance network info cache for port 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:30:38 compute-0 nova_compute[243704]: 2025-12-13 04:30:38.040 243708 DEBUG nova.network.neutron [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Updating instance_info_cache with network_info: [{"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:30:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:30:38 compute-0 nova_compute[243704]: 2025-12-13 04:30:38.070 243708 DEBUG oslo_concurrency.lockutils [req-c9449caf-c8ac-4936-b600-d56438349aba req-45c9034e-9cbf-4c89-a3b7-e8d9177fc600 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-9b71e21a-179e-43a0-99ca-714940bc664f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:30:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:39 compute-0 nova_compute[243704]: 2025-12-13 04:30:39.148 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:39 compute-0 ceph-mon[75071]: pgmap v1883: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:30:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:30:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:30:40
Dec 13 04:30:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:30:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:30:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.control', 'images', 'volumes', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'vms', '.mgr', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta']
Dec 13 04:30:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:30:40 compute-0 nova_compute[243704]: 2025-12-13 04:30:40.681 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:40 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 04:30:41 compute-0 ceph-mon[75071]: pgmap v1884: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:30:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 04:30:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:30:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:30:42 compute-0 ovn_controller[145204]: 2025-12-13T04:30:42Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:77:39:9b 10.100.0.14
Dec 13 04:30:42 compute-0 ovn_controller[145204]: 2025-12-13T04:30:42Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:77:39:9b 10.100.0.14
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:30:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:30:43 compute-0 ceph-mon[75071]: pgmap v1885: 305 pgs: 305 active+clean; 385 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 63 op/s
Dec 13 04:30:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 443 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 139 op/s
Dec 13 04:30:44 compute-0 nova_compute[243704]: 2025-12-13 04:30:44.150 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:45 compute-0 ceph-mon[75071]: pgmap v1886: 305 pgs: 305 active+clean; 443 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 139 op/s
Dec 13 04:30:45 compute-0 nova_compute[243704]: 2025-12-13 04:30:45.685 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:30:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1744957391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:30:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:30:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1744957391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:30:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 453 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 682 KiB/s rd, 5.8 MiB/s wr, 81 op/s
Dec 13 04:30:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1744957391' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:30:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1744957391' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:30:47 compute-0 ceph-mon[75071]: pgmap v1887: 305 pgs: 305 active+clean; 453 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 682 KiB/s rd, 5.8 MiB/s wr, 81 op/s
Dec 13 04:30:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 453 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 591 KiB/s rd, 5.8 MiB/s wr, 78 op/s
Dec 13 04:30:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:49 compute-0 nova_compute[243704]: 2025-12-13 04:30:49.201 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:49 compute-0 ceph-mon[75071]: pgmap v1888: 305 pgs: 305 active+clean; 453 MiB data, 790 MiB used, 59 GiB / 60 GiB avail; 591 KiB/s rd, 5.8 MiB/s wr, 78 op/s
Dec 13 04:30:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 599 KiB/s rd, 5.8 MiB/s wr, 94 op/s
Dec 13 04:30:50 compute-0 nova_compute[243704]: 2025-12-13 04:30:50.689 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:51 compute-0 ceph-mon[75071]: pgmap v1889: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 599 KiB/s rd, 5.8 MiB/s wr, 94 op/s
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 599 KiB/s rd, 5.8 MiB/s wr, 94 op/s
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.506 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.507 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.507 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.508 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.509 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.511 243708 INFO nova.compute.manager [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Terminating instance
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.513 243708 DEBUG nova.compute.manager [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:30:52 compute-0 kernel: tap251e3d9e-51 (unregistering): left promiscuous mode
Dec 13 04:30:52 compute-0 NetworkManager[48899]: <info>  [1765600252.5632] device (tap251e3d9e-51): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.576 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 ovn_controller[145204]: 2025-12-13T04:30:52Z|00264|binding|INFO|Releasing lport 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 from this chassis (sb_readonly=0)
Dec 13 04:30:52 compute-0 ovn_controller[145204]: 2025-12-13T04:30:52Z|00265|binding|INFO|Setting lport 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 down in Southbound
Dec 13 04:30:52 compute-0 ovn_controller[145204]: 2025-12-13T04:30:52Z|00266|binding|INFO|Removing iface tap251e3d9e-51 ovn-installed in OVS
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.581 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.587 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:39:9b 10.100.0.14'], port_security=['fa:16:3e:77:39:9b 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '9b71e21a-179e-43a0-99ca-714940bc664f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c965583c-d5da-4d08-bde9-d6826733374f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.214'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.589 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca unbound from our chassis
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.591 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.593 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a21ed76c-ff4c-41c5-9347-bedc0e350601]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.594 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace which is not needed anymore
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.603 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Dec 13 04:30:52 compute-0 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d0000001c.scope: Consumed 16.364s CPU time.
Dec 13 04:30:52 compute-0 systemd-machined[206767]: Machine qemu-28-instance-0000001c terminated.
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.1511589756249185e-06 of space, bias 1.0, pg target 0.0018453476926874755 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005453248644058318 of space, bias 1.0, pg target 1.6359745932174954 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.473035796107651e-06 of space, bias 1.0, pg target 0.0007394377030361875 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667026483260414 of space, bias 1.0, pg target 0.1993440918494864 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3115272986482904e-06 of space, bias 4.0, pg target 0.0015685866491833554 quantized to 16 (current 16)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:30:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.735 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.740 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[280795]: [NOTICE]   (280835) : haproxy version is 2.8.14-c23fe91
Dec 13 04:30:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[280795]: [NOTICE]   (280835) : path to executable is /usr/sbin/haproxy
Dec 13 04:30:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[280795]: [WARNING]  (280835) : Exiting Master process...
Dec 13 04:30:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[280795]: [ALERT]    (280835) : Current worker (280837) exited with code 143 (Terminated)
Dec 13 04:30:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[280795]: [WARNING]  (280835) : All workers exited. Exiting... (0)
Dec 13 04:30:52 compute-0 systemd[1]: libpod-147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b.scope: Deactivated successfully.
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.751 243708 INFO nova.virt.libvirt.driver [-] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Instance destroyed successfully.
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.752 243708 DEBUG nova.objects.instance [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'resources' on Instance uuid 9b71e21a-179e-43a0-99ca-714940bc664f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:30:52 compute-0 podman[280895]: 2025-12-13 04:30:52.754960262 +0000 UTC m=+0.155447377 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:30:52 compute-0 podman[280937]: 2025-12-13 04:30:52.75746096 +0000 UTC m=+0.058177612 container died 147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2)
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.764 243708 DEBUG nova.virt.libvirt.vif [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:30:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-1367087886',display_name='tempest-TransferEncryptedVolumeTest-server-1367087886',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-1367087886',id=28,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMoL15I/Bj1iCzujpy2fbJQaP8acoTe69CnGcFY4LdTIJ3D0h1Pc4a7CTYA01uL0z/8kDMYoWefYR6Gi1xv52wWtzjltq0ikSXKbeZ2P8eIjJy+bgEJfSTXKzCSQo26mw==',key_name='tempest-TransferEncryptedVolumeTest-760155197',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:30:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-3fi7tfb8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:30:30Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=9b71e21a-179e-43a0-99ca-714940bc664f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.764 243708 DEBUG nova.network.os_vif_util [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "address": "fa:16:3e:77:39:9b", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap251e3d9e-51", "ovs_interfaceid": "251e3d9e-516a-4092-8b16-bcd7a5cb8ae6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.765 243708 DEBUG nova.network.os_vif_util [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:77:39:9b,bridge_name='br-int',has_traffic_filtering=True,id=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap251e3d9e-51') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.766 243708 DEBUG os_vif [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:39:9b,bridge_name='br-int',has_traffic_filtering=True,id=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap251e3d9e-51') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.768 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.768 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap251e3d9e-51, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.770 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.771 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.778 243708 INFO os_vif [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:77:39:9b,bridge_name='br-int',has_traffic_filtering=True,id=251e3d9e-516a-4092-8b16-bcd7a5cb8ae6,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap251e3d9e-51')
Dec 13 04:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b-userdata-shm.mount: Deactivated successfully.
Dec 13 04:30:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-4f81cd10d1eccdbf1c4fb550c6d29f3e94efb00701a30b653a8205fc03471699-merged.mount: Deactivated successfully.
Dec 13 04:30:52 compute-0 podman[280937]: 2025-12-13 04:30:52.823429954 +0000 UTC m=+0.124146616 container cleanup 147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec 13 04:30:52 compute-0 systemd[1]: libpod-conmon-147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b.scope: Deactivated successfully.
Dec 13 04:30:52 compute-0 podman[280997]: 2025-12-13 04:30:52.90091684 +0000 UTC m=+0.050937436 container remove 147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.909 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[6d225633-75cd-40ba-896b-62528394baf7]: (4, ('Sat Dec 13 04:30:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b)\n147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b\nSat Dec 13 04:30:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b)\n147ec9b8a5e717068693eb69d069f866da6bad88d1bf17698a98d9d971aaaf0b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.912 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd32b5e-6888-457c-a7d3-b455a4648c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.913 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.916 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 kernel: tap2920aa7a-a0: left promiscuous mode
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.929 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.932 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[00b66226-d579-4e02-82a5-22dd5858f1bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.945 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[c159f58b-17cf-43de-b6c7-caafcac0ac1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.947 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[008c4b4a-c699-49ad-9bcc-e411fbc827f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.951 243708 INFO nova.virt.libvirt.driver [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Deleting instance files /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f_del
Dec 13 04:30:52 compute-0 nova_compute[243704]: 2025-12-13 04:30:52.952 243708 INFO nova.virt.libvirt.driver [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Deletion of /var/lib/nova/instances/9b71e21a-179e-43a0-99ca-714940bc664f_del complete
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.974 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[906d0456-73d8-42ba-9486-54240c8877f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 496541, 'reachable_time': 30199, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281013, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.978 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:30:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:30:52.979 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[aed3f9b5-7373-4d37-a963-62b8d62e80f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:30:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d2920aa7a\x2da9cb\x2d45da\x2da971\x2d38a7ffed2fca.mount: Deactivated successfully.
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.035 243708 INFO nova.compute.manager [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Took 0.52 seconds to destroy the instance on the hypervisor.
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.036 243708 DEBUG oslo.service.loopingcall [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.038 243708 DEBUG nova.compute.manager [-] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.038 243708 DEBUG nova.network.neutron [-] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.142 243708 DEBUG nova.compute.manager [req-32752c18-339e-4857-8d01-5d180bf2a9db req-b12e520c-ccdd-4440-bf3a-0fe0f141a3a9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-vif-unplugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.142 243708 DEBUG oslo_concurrency.lockutils [req-32752c18-339e-4857-8d01-5d180bf2a9db req-b12e520c-ccdd-4440-bf3a-0fe0f141a3a9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.142 243708 DEBUG oslo_concurrency.lockutils [req-32752c18-339e-4857-8d01-5d180bf2a9db req-b12e520c-ccdd-4440-bf3a-0fe0f141a3a9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.143 243708 DEBUG oslo_concurrency.lockutils [req-32752c18-339e-4857-8d01-5d180bf2a9db req-b12e520c-ccdd-4440-bf3a-0fe0f141a3a9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.143 243708 DEBUG nova.compute.manager [req-32752c18-339e-4857-8d01-5d180bf2a9db req-b12e520c-ccdd-4440-bf3a-0fe0f141a3a9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] No waiting events found dispatching network-vif-unplugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:30:53 compute-0 nova_compute[243704]: 2025-12-13 04:30:53.143 243708 DEBUG nova.compute.manager [req-32752c18-339e-4857-8d01-5d180bf2a9db req-b12e520c-ccdd-4440-bf3a-0fe0f141a3a9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-vif-unplugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:30:53 compute-0 ceph-mon[75071]: pgmap v1890: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 599 KiB/s rd, 5.8 MiB/s wr, 94 op/s
Dec 13 04:30:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 5.8 MiB/s wr, 101 op/s
Dec 13 04:30:54 compute-0 nova_compute[243704]: 2025-12-13 04:30:54.205 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:54 compute-0 nova_compute[243704]: 2025-12-13 04:30:54.344 243708 DEBUG nova.network.neutron [-] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:30:54 compute-0 nova_compute[243704]: 2025-12-13 04:30:54.358 243708 INFO nova.compute.manager [-] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Took 1.32 seconds to deallocate network for instance.
Dec 13 04:30:54 compute-0 nova_compute[243704]: 2025-12-13 04:30:54.587 243708 INFO nova.compute.manager [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Took 0.23 seconds to detach 1 volumes for instance.
Dec 13 04:30:54 compute-0 nova_compute[243704]: 2025-12-13 04:30:54.638 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:54 compute-0 nova_compute[243704]: 2025-12-13 04:30:54.639 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:54 compute-0 nova_compute[243704]: 2025-12-13 04:30:54.694 243708 DEBUG oslo_concurrency.processutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.247 243708 DEBUG nova.compute.manager [req-2bf99f25-cf02-496a-8944-c5ef289bda19 req-16b37cc9-8b9f-4e8a-9692-8d36225aa1d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:30:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:30:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1435725818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.249 243708 DEBUG oslo_concurrency.lockutils [req-2bf99f25-cf02-496a-8944-c5ef289bda19 req-16b37cc9-8b9f-4e8a-9692-8d36225aa1d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.250 243708 DEBUG oslo_concurrency.lockutils [req-2bf99f25-cf02-496a-8944-c5ef289bda19 req-16b37cc9-8b9f-4e8a-9692-8d36225aa1d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.251 243708 DEBUG oslo_concurrency.lockutils [req-2bf99f25-cf02-496a-8944-c5ef289bda19 req-16b37cc9-8b9f-4e8a-9692-8d36225aa1d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.251 243708 DEBUG nova.compute.manager [req-2bf99f25-cf02-496a-8944-c5ef289bda19 req-16b37cc9-8b9f-4e8a-9692-8d36225aa1d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] No waiting events found dispatching network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.252 243708 WARNING nova.compute.manager [req-2bf99f25-cf02-496a-8944-c5ef289bda19 req-16b37cc9-8b9f-4e8a-9692-8d36225aa1d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received unexpected event network-vif-plugged-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 for instance with vm_state deleted and task_state None.
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.253 243708 DEBUG nova.compute.manager [req-2bf99f25-cf02-496a-8944-c5ef289bda19 req-16b37cc9-8b9f-4e8a-9692-8d36225aa1d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Received event network-vif-deleted-251e3d9e-516a-4092-8b16-bcd7a5cb8ae6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.272 243708 DEBUG oslo_concurrency.processutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.578s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.284 243708 DEBUG nova.compute.provider_tree [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.302 243708 DEBUG nova.scheduler.client.report [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.410 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:55 compute-0 ceph-mon[75071]: pgmap v1891: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 5.8 MiB/s wr, 101 op/s
Dec 13 04:30:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1435725818' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.449 243708 INFO nova.scheduler.client.report [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Deleted allocations for instance 9b71e21a-179e-43a0-99ca-714940bc664f
Dec 13 04:30:55 compute-0 nova_compute[243704]: 2025-12-13 04:30:55.519 243708 DEBUG oslo_concurrency.lockutils [None req-562d2fe7-9605-4868-8c06-76925a31afdc deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "9b71e21a-179e-43a0-99ca-714940bc664f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:30:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Dec 13 04:30:57 compute-0 ceph-mon[75071]: pgmap v1892: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Dec 13 04:30:57 compute-0 nova_compute[243704]: 2025-12-13 04:30:57.771 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s
Dec 13 04:30:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:30:59 compute-0 nova_compute[243704]: 2025-12-13 04:30:59.208 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:30:59 compute-0 ceph-mon[75071]: pgmap v1893: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s
Dec 13 04:31:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s
Dec 13 04:31:01 compute-0 ceph-mon[75071]: pgmap v1894: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 15 KiB/s wr, 31 op/s
Dec 13 04:31:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.774 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.899 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.899 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.911 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.978 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.979 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.989 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:31:02 compute-0 nova_compute[243704]: 2025-12-13 04:31:02.990 243708 INFO nova.compute.claims [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.112 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:03 compute-0 ceph-mon[75071]: pgmap v1895: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Dec 13 04:31:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:31:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/254034990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.701 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.589s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.709 243708 DEBUG nova.compute.provider_tree [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.731 243708 DEBUG nova.scheduler.client.report [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.753 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.754 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.812 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.813 243708 DEBUG nova.network.neutron [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.831 243708 INFO nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.852 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:31:03 compute-0 nova_compute[243704]: 2025-12-13 04:31:03.899 243708 INFO nova.virt.block_device [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Booting with volume 7c5d52ba-5662-409c-ad98-9e14ce995974 at /dev/vda
Dec 13 04:31:03 compute-0 podman[281058]: 2025-12-13 04:31:03.968168849 +0000 UTC m=+0.098778106 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, 
config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.003 243708 DEBUG nova.policy [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'deba56fa45214f28a3aab4d031dc4155', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '43c4864e9f844459a882a9e3d0fe477b', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.037 243708 DEBUG os_brick.utils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.040 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.064 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.065 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[4993f029-2745-444a-a74d-ed6fb99af708]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.067 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.082 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.082 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4b70e1-44c2-4ba5-bcff-06c8b3b1a895]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.085 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.100 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.100 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[ab55ce94-4c1f-4360-abd1-eb71956bdc26]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.102 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[e82637a5-4fad-4154-bd67-02128fd8bff8]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.103 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.134 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "nvme version" returned: 0 in 0.031s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.141 243708 DEBUG os_brick.initiator.connectors.lightos [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.141 243708 DEBUG os_brick.initiator.connectors.lightos [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.141 243708 DEBUG os_brick.initiator.connectors.lightos [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.142 243708 DEBUG os_brick.utils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] <== get_connector_properties: return (103ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.142 243708 DEBUG nova.virt.block_device [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updating existing volume attachment record: 249f4499-2a6f-42a9-b3d6-9be1043d7df6 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:31:04 compute-0 nova_compute[243704]: 2025-12-13 04:31:04.211 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/254034990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:31:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4148176871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.230 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.232 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.232 243708 INFO nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Creating image(s)
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.233 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Did not create local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4859
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.233 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Ensure instance console log exists: /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.234 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.234 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:05 compute-0 nova_compute[243704]: 2025-12-13 04:31:05.235 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:05 compute-0 ceph-mon[75071]: pgmap v1896: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 15 KiB/s wr, 15 op/s
Dec 13 04:31:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/4148176871' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.043 243708 DEBUG nova.network.neutron [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Successfully created port: 3cee184b-357f-406f-84a1-89d14094072c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:31:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.755 243708 DEBUG nova.network.neutron [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Successfully updated port: 3cee184b-357f-406f-84a1-89d14094072c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.769 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.769 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquired lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.769 243708 DEBUG nova.network.neutron [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.874 243708 DEBUG nova.compute.manager [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-changed-3cee184b-357f-406f-84a1-89d14094072c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.874 243708 DEBUG nova.compute.manager [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Refreshing instance network info cache due to event network-changed-3cee184b-357f-406f-84a1-89d14094072c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.874 243708 DEBUG oslo_concurrency.lockutils [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:31:06 compute-0 nova_compute[243704]: 2025-12-13 04:31:06.920 243708 DEBUG nova.network.neutron [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:31:06 compute-0 podman[281086]: 2025-12-13 04:31:06.942943089 +0000 UTC m=+0.090145992 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:31:07 compute-0 ceph-mon[75071]: pgmap v1897: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 8 op/s
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.748 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765600252.7473001, 9b71e21a-179e-43a0-99ca-714940bc664f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.749 243708 INFO nova.compute.manager [-] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] VM Stopped (Lifecycle Event)
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.774 243708 DEBUG nova.compute.manager [None req-84f4af58-9e2d-470d-acf4-0c81d8548906 - - - - - -] [instance: 9b71e21a-179e-43a0-99ca-714940bc664f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.777 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.956 243708 DEBUG nova.network.neutron [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updating instance_info_cache with network_info: [{"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.983 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Releasing lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.984 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Instance network_info: |[{"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.984 243708 DEBUG oslo_concurrency.lockutils [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.985 243708 DEBUG nova.network.neutron [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Refreshing network info cache for port 3cee184b-357f-406f-84a1-89d14094072c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.989 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Start _get_guest_xml network_info=[{"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, '/dev/vda': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': 
'/dev/vda', 'image': [], 'ephemerals': [], 'block_device_mapping': [{'boot_index': 0, 'guest_format': None, 'connection_info': {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7c5d52ba-5662-409c-ad98-9e14ce995974', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '40983263-6826-4731-ac5e-96d549b1e08c', 'attached_at': '', 'detached_at': '', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'serial': '7c5d52ba-5662-409c-ad98-9e14ce995974'}, 'disk_bus': 'virtio', 'attachment_id': '249f4499-2a6f-42a9-b3d6-9be1043d7df6', 'device_type': 'disk', 'delete_on_termination': False, 'mount_device': '/dev/vda', 'volume_type': None}], ': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:31:07 compute-0 nova_compute[243704]: 2025-12-13 04:31:07.996 243708 WARNING nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.005 243708 DEBUG nova.virt.libvirt.host [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.006 243708 DEBUG nova.virt.libvirt.host [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.009 243708 DEBUG nova.virt.libvirt.host [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.009 243708 DEBUG nova.virt.libvirt.host [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.010 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.011 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format=<?>,created_at=<?>,direct_url=<?>,disk_format=<?>,id=<?>,min_disk=0,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.011 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.012 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.012 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.012 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.012 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.013 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.013 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.014 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.014 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.014 243708 DEBUG nova.virt.hardware [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.049 243708 DEBUG nova.storage.rbd_utils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image 40983263-6826-4731-ac5e-96d549b1e08c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.054 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:31:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/131095140' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.648 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.789 243708 DEBUG os_brick.encryptors [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Using volume encryption metadata '{'encryption_key_id': 'acbe256e-f31d-4027-a7ad-0a3c4480d4a4', 'control_location': 'front-end', 'cipher': 'aes-xts-plain64', 'key_size': 256, 'provider': 'luks'}' for connection: {'driver_volume_type': 'rbd', 'data': {'name': 'volumes/volume-7c5d52ba-5662-409c-ad98-9e14ce995974', 'hosts': ['192.168.122.100'], 'ports': ['6789'], 'cluster_name': 'ceph', 'auth_enabled': True, 'auth_username': 'openstack', 'secret_type': 'ceph', 'secret_uuid': '***', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'discard': True, 'qos_specs': None, 'access_mode': 'rw', 'encrypted': True, 'cacheable': False}, 'status': 'reserved', 'instance': '40983263-6826-4731-ac5e-96d549b1e08c', 'attached_at': '', 'detached_at': '', 'volume_id': '7c5d52ba-5662-409c-ad98-9e14ce995974', 'serial': '} get_encryption_metadata /usr/lib/python3.9/site-packages/os_brick/encryptors/__init__.py:135
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.794 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Creating Client object Client /usr/lib/python3.9/site-packages/barbicanclient/client.py:163
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.812 243708 DEBUG barbicanclient.v1.secrets [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Getting secret - Secret href: https://barbican-internal.openstack.svc:9311/secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4 get /usr/lib/python3.9/site-packages/barbicanclient/v1/secrets.py:514
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.813 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.847 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.848 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.883 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.884 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.940 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.941 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.994 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:08 compute-0 nova_compute[243704]: 2025-12-13 04:31:08.995 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.031 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.031 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.069 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.069 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.135 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.136 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.184 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.185 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.213 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.214 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.216 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.245 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.246 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.267 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.267 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.283 243708 DEBUG nova.network.neutron [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updated VIF entry in instance network info cache for port 3cee184b-357f-406f-84a1-89d14094072c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.283 243708 DEBUG nova.network.neutron [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updating instance_info_cache with network_info: [{"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.298 243708 DEBUG oslo_concurrency.lockutils [req-a1548a39-c570-4269-bf55-0cac9061b89e req-14884d2e-49ff-4035-9776-d11897d6b7d2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.303 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.304 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.330 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.331 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.354 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.355 243708 INFO barbicanclient.base [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Calculated Secrets uuid ref: secrets/acbe256e-f31d-4027-a7ad-0a3c4480d4a4
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.387 243708 DEBUG barbicanclient.client [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Response status 200 _check_status_code /usr/lib/python3.9/site-packages/barbicanclient/client.py:87
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.388 243708 DEBUG nova.virt.libvirt.host [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Secret XML: <secret ephemeral="no" private="no">
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <usage type="volume">
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <volume>7c5d52ba-5662-409c-ad98-9e14ce995974</volume>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </usage>
Dec 13 04:31:09 compute-0 nova_compute[243704]: </secret>
Dec 13 04:31:09 compute-0 nova_compute[243704]:  create_secret /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1131
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.443 243708 DEBUG nova.virt.libvirt.vif [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-748393203',display_name='tempest-TransferEncryptedVolumeTest-server-748393203',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-748393203',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMoL15I/Bj1iCzujpy2fbJQaP8acoTe69CnGcFY4LdTIJ3D0h1Pc4a7CTYA01uL0z/8kDMYoWefYR6Gi1xv52wWtzjltq0ikSXKbeZ2P8eIjJy+bgEJfSTXKzCSQo26mw==',key_name='tempest-TransferEncryptedVolumeTest-760155197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-zpbw71m4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:31:03Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=40983263-6826-4731-ac5e-96d549b1e08c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.444 243708 DEBUG nova.network.os_vif_util [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.445 243708 DEBUG nova.network.os_vif_util [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c1:48,bridge_name='br-int',has_traffic_filtering=True,id=3cee184b-357f-406f-84a1-89d14094072c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cee184b-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.448 243708 DEBUG nova.objects.instance [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'pci_devices' on Instance uuid 40983263-6826-4731-ac5e-96d549b1e08c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.467 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <uuid>40983263-6826-4731-ac5e-96d549b1e08c</uuid>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <name>instance-0000001d</name>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <nova:name>tempest-TransferEncryptedVolumeTest-server-748393203</nova:name>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:31:07</nova:creationTime>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:user uuid="deba56fa45214f28a3aab4d031dc4155">tempest-TransferEncryptedVolumeTest-1412293480-project-member</nova:user>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:project uuid="43c4864e9f844459a882a9e3d0fe477b">tempest-TransferEncryptedVolumeTest-1412293480</nova:project>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <nova:port uuid="3cee184b-357f-406f-84a1-89d14094072c">
Dec 13 04:31:09 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <system>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <entry name="serial">40983263-6826-4731-ac5e-96d549b1e08c</entry>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <entry name="uuid">40983263-6826-4731-ac5e-96d549b1e08c</entry>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </system>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <os>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </os>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <features>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </features>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/40983263-6826-4731-ac5e-96d549b1e08c_disk.config">
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </source>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <source protocol="rbd" name="volumes/volume-7c5d52ba-5662-409c-ad98-9e14ce995974">
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </source>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <serial>7c5d52ba-5662-409c-ad98-9e14ce995974</serial>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <encryption format="luks">
Dec 13 04:31:09 compute-0 nova_compute[243704]:         <secret type="passphrase" uuid="7222cf5d-1a90-4d32-aa8c-c11c52173b0b"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       </encryption>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:b4:c1:48"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <target dev="tap3cee184b-35"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/console.log" append="off"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <video>
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </video>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:31:09 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:31:09 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:31:09 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:31:09 compute-0 nova_compute[243704]: </domain>
Dec 13 04:31:09 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.468 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Preparing to wait for external event network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.469 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.470 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.470 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.471 243708 DEBUG nova.virt.libvirt.vif [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-748393203',display_name='tempest-TransferEncryptedVolumeTest-server-748393203',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-748393203',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMoL15I/Bj1iCzujpy2fbJQaP8acoTe69CnGcFY4LdTIJ3D0h1Pc4a7CTYA01uL0z/8kDMYoWefYR6Gi1xv52wWtzjltq0ikSXKbeZ2P8eIjJy+bgEJfSTXKzCSQo26mw==',key_name='tempest-TransferEncryptedVolumeTest-760155197',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-zpbw71m4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',network_allocated='True',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:31:03Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=40983263-6826-4731-ac5e-96d549b1e08c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.472 243708 DEBUG nova.network.os_vif_util [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.473 243708 DEBUG nova.network.os_vif_util [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c1:48,bridge_name='br-int',has_traffic_filtering=True,id=3cee184b-357f-406f-84a1-89d14094072c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cee184b-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.474 243708 DEBUG os_vif [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c1:48,bridge_name='br-int',has_traffic_filtering=True,id=3cee184b-357f-406f-84a1-89d14094072c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cee184b-35') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.476 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.476 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.477 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.482 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.483 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cee184b-35, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.483 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cee184b-35, col_values=(('external_ids', {'iface-id': '3cee184b-357f-406f-84a1-89d14094072c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:c1:48', 'vm-uuid': '40983263-6826-4731-ac5e-96d549b1e08c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.486 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:09 compute-0 NetworkManager[48899]: <info>  [1765600269.4874] manager: (tap3cee184b-35): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/144)
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.490 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.492 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.494 243708 INFO os_vif [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c1:48,bridge_name='br-int',has_traffic_filtering=True,id=3cee184b-357f-406f-84a1-89d14094072c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cee184b-35')
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.550 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.551 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.551 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] No VIF found with MAC fa:16:3e:b4:c1:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.552 243708 INFO nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Using config drive
Dec 13 04:31:09 compute-0 nova_compute[243704]: 2025-12-13 04:31:09.584 243708 DEBUG nova.storage.rbd_utils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image 40983263-6826-4731-ac5e-96d549b1e08c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:31:09 compute-0 ceph-mon[75071]: pgmap v1898: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:09 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/131095140' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:31:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.090 243708 INFO nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Creating config drive at /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/disk.config
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.095 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp28_kn9ax execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.226 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp28_kn9ax" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.249 243708 DEBUG nova.storage.rbd_utils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] rbd image 40983263-6826-4731-ac5e-96d549b1e08c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.253 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/disk.config 40983263-6826-4731-ac5e-96d549b1e08c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.380 243708 DEBUG oslo_concurrency.processutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/disk.config 40983263-6826-4731-ac5e-96d549b1e08c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.381 243708 INFO nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Deleting local config drive /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c/disk.config because it was imported into RBD.
Dec 13 04:31:10 compute-0 kernel: tap3cee184b-35: entered promiscuous mode
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.447 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 NetworkManager[48899]: <info>  [1765600270.4488] manager: (tap3cee184b-35): new Tun device (/org/freedesktop/NetworkManager/Devices/145)
Dec 13 04:31:10 compute-0 ovn_controller[145204]: 2025-12-13T04:31:10Z|00267|binding|INFO|Claiming lport 3cee184b-357f-406f-84a1-89d14094072c for this chassis.
Dec 13 04:31:10 compute-0 ovn_controller[145204]: 2025-12-13T04:31:10Z|00268|binding|INFO|3cee184b-357f-406f-84a1-89d14094072c: Claiming fa:16:3e:b4:c1:48 10.100.0.4
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.461 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:c1:48 10.100.0.4'], port_security=['fa:16:3e:b4:c1:48 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '40983263-6826-4731-ac5e-96d549b1e08c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c965583c-d5da-4d08-bde9-d6826733374f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=3cee184b-357f-406f-84a1-89d14094072c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.463 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 3cee184b-357f-406f-84a1-89d14094072c in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca bound to our chassis
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.465 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.466 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.470 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 ovn_controller[145204]: 2025-12-13T04:31:10Z|00269|binding|INFO|Setting lport 3cee184b-357f-406f-84a1-89d14094072c ovn-installed in OVS
Dec 13 04:31:10 compute-0 ovn_controller[145204]: 2025-12-13T04:31:10Z|00270|binding|INFO|Setting lport 3cee184b-357f-406f-84a1-89d14094072c up in Southbound
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.473 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.478 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[88a2a318-4322-41c6-bc58-d3e99e57b442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.479 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2920aa7a-a1 in ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.481 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2920aa7a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.481 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[2d1ec677-958c-4670-9b31-d096b7ab548f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.482 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[3648ea0a-b0d8-4630-87be-2cf0a69dcb83]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 systemd-machined[206767]: New machine qemu-29-instance-0000001d.
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.496 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[3e21de13-9640-44ce-a26d-15d2dca656f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 systemd[1]: Started Virtual Machine qemu-29-instance-0000001d.
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.510 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[791878ee-e0dd-4c46-8c68-f15a1eacffd5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 systemd-udevd[281226]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.541 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[49052cef-8d45-4017-916f-0c184714ec6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 NetworkManager[48899]: <info>  [1765600270.5498] manager: (tap2920aa7a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/146)
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.549 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a311f240-cfc8-4a19-b01a-9db853a22ddd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 NetworkManager[48899]: <info>  [1765600270.5505] device (tap3cee184b-35): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:31:10 compute-0 NetworkManager[48899]: <info>  [1765600270.5510] device (tap3cee184b-35): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:31:10 compute-0 systemd-udevd[281231]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.582 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[48cd0bd3-e5b1-4c4d-a3c5-e26efc8e04e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.585 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[5c89fc6f-fcc7-4359-ba10-b63247067d0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 NetworkManager[48899]: <info>  [1765600270.6049] device (tap2920aa7a-a0): carrier: link connected
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.617 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[c35c0975-5d59-4d93-a3df-38d33b7ff08a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.634 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e8734258-55b3-450b-ad35-c87f699521d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500928, 'reachable_time': 23791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 281254, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.651 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[86561c46-f590-4757-b6bb-bfb6489a8c68]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4f:807b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 500928, 'tstamp': 500928}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 281255, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.660 243708 DEBUG nova.compute.manager [req-58f3c130-218a-40e4-8d9b-d507a3838f52 req-57be321a-5e3e-441a-b062-05c2b2a7f730 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.660 243708 DEBUG oslo_concurrency.lockutils [req-58f3c130-218a-40e4-8d9b-d507a3838f52 req-57be321a-5e3e-441a-b062-05c2b2a7f730 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.661 243708 DEBUG oslo_concurrency.lockutils [req-58f3c130-218a-40e4-8d9b-d507a3838f52 req-57be321a-5e3e-441a-b062-05c2b2a7f730 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.661 243708 DEBUG oslo_concurrency.lockutils [req-58f3c130-218a-40e4-8d9b-d507a3838f52 req-57be321a-5e3e-441a-b062-05c2b2a7f730 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.661 243708 DEBUG nova.compute.manager [req-58f3c130-218a-40e4-8d9b-d507a3838f52 req-57be321a-5e3e-441a-b062-05c2b2a7f730 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Processing event network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.668 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[22e127e8-e3fe-4403-a9bc-519a33b05f6b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2920aa7a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4f:80:7b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 91], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500928, 'reachable_time': 23791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 281256, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.703 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[ae93f348-0172-487d-8a5f-adb6f8917669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.762 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[43476906-62e4-4677-a1bb-97598c69efac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.764 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.766 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.767 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2920aa7a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:10 compute-0 kernel: tap2920aa7a-a0: entered promiscuous mode
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.769 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 NetworkManager[48899]: <info>  [1765600270.7701] manager: (tap2920aa7a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.771 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.774 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2920aa7a-a0, col_values=(('external_ids', {'iface-id': 'ccd83819-bc00-4ecd-ab1d-315a75379aaa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.776 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 ovn_controller[145204]: 2025-12-13T04:31:10Z|00271|binding|INFO|Releasing lport ccd83819-bc00-4ecd-ab1d-315a75379aaa from this chassis (sb_readonly=0)
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.778 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.776 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.779 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d0226934-35ba-40e2-92c0-fdbf6d5ab1e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.780 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/2920aa7a-a9cb-45da-a971-38a7ffed2fca.pid.haproxy
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 2920aa7a-a9cb-45da-a971-38a7ffed2fca
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:31:10 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:10.782 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'env', 'PROCESS_TAG=haproxy-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2920aa7a-a9cb-45da-a971-38a7ffed2fca.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:31:10 compute-0 nova_compute[243704]: 2025-12-13 04:31:10.788 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:11 compute-0 podman[281324]: 2025-12-13 04:31:11.20289023 +0000 UTC m=+0.068511444 container create e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:31:11 compute-0 podman[281324]: 2025-12-13 04:31:11.163576521 +0000 UTC m=+0.029197735 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:31:11 compute-0 systemd[1]: Started libpod-conmon-e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78.scope.
Dec 13 04:31:11 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:31:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98955b7703e2783d4fe9464caa65ea4bba98b150d27306e2276d26b910ae96cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:11 compute-0 podman[281324]: 2025-12-13 04:31:11.345012173 +0000 UTC m=+0.210633407 container init e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Dec 13 04:31:11 compute-0 podman[281324]: 2025-12-13 04:31:11.35225255 +0000 UTC m=+0.217873744 container start e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:31:11 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [NOTICE]   (281344) : New worker (281346) forked
Dec 13 04:31:11 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [NOTICE]   (281344) : Loading success.
Dec 13 04:31:11 compute-0 ceph-mon[75071]: pgmap v1899: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1900: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:12 compute-0 nova_compute[243704]: 2025-12-13 04:31:12.734 243708 DEBUG nova.compute.manager [req-c6ab1a38-ccee-476e-9a63-76583da020bf req-f2525fa6-669f-466f-863a-3d0864ed0633 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:31:12 compute-0 nova_compute[243704]: 2025-12-13 04:31:12.734 243708 DEBUG oslo_concurrency.lockutils [req-c6ab1a38-ccee-476e-9a63-76583da020bf req-f2525fa6-669f-466f-863a-3d0864ed0633 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:12 compute-0 nova_compute[243704]: 2025-12-13 04:31:12.735 243708 DEBUG oslo_concurrency.lockutils [req-c6ab1a38-ccee-476e-9a63-76583da020bf req-f2525fa6-669f-466f-863a-3d0864ed0633 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:12 compute-0 nova_compute[243704]: 2025-12-13 04:31:12.736 243708 DEBUG oslo_concurrency.lockutils [req-c6ab1a38-ccee-476e-9a63-76583da020bf req-f2525fa6-669f-466f-863a-3d0864ed0633 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:12 compute-0 nova_compute[243704]: 2025-12-13 04:31:12.736 243708 DEBUG nova.compute.manager [req-c6ab1a38-ccee-476e-9a63-76583da020bf req-f2525fa6-669f-466f-863a-3d0864ed0633 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] No waiting events found dispatching network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:31:12 compute-0 nova_compute[243704]: 2025-12-13 04:31:12.737 243708 WARNING nova.compute.manager [req-c6ab1a38-ccee-476e-9a63-76583da020bf req-f2525fa6-669f-466f-863a-3d0864ed0633 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received unexpected event network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c for instance with vm_state building and task_state spawning.
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.198 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600273.1980858, 40983263-6826-4731-ac5e-96d549b1e08c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.199 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] VM Started (Lifecycle Event)
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.201 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.205 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.208 243708 INFO nova.virt.libvirt.driver [-] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Instance spawned successfully.
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.209 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.223 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.230 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.236 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.237 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.238 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.238 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.238 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.239 243708 DEBUG nova.virt.libvirt.driver [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.282 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.282 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600273.1981966, 40983263-6826-4731-ac5e-96d549b1e08c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.282 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] VM Paused (Lifecycle Event)
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.312 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.316 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600273.2048233, 40983263-6826-4731-ac5e-96d549b1e08c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.317 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] VM Resumed (Lifecycle Event)
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.342 243708 INFO nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Took 8.11 seconds to spawn the instance on the hypervisor.
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.343 243708 DEBUG nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.345 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.356 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.395 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.418 243708 INFO nova.compute.manager [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Took 10.47 seconds to build instance.
Dec 13 04:31:13 compute-0 nova_compute[243704]: 2025-12-13 04:31:13.444 243708 DEBUG oslo_concurrency.lockutils [None req-b29158bc-279e-4e64-94d6-a1a925c6251d deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:13 compute-0 ceph-mon[75071]: pgmap v1900: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:31:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 511 B/s wr, 9 op/s
Dec 13 04:31:14 compute-0 nova_compute[243704]: 2025-12-13 04:31:14.217 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:14 compute-0 nova_compute[243704]: 2025-12-13 04:31:14.529 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:15 compute-0 ceph-mon[75071]: pgmap v1901: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 511 B/s wr, 9 op/s
Dec 13 04:31:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1902: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 12 KiB/s wr, 12 op/s
Dec 13 04:31:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 12 KiB/s wr, 12 op/s
Dec 13 04:31:18 compute-0 ceph-mon[75071]: pgmap v1902: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 12 KiB/s wr, 12 op/s
Dec 13 04:31:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:19 compute-0 nova_compute[243704]: 2025-12-13 04:31:19.221 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:19 compute-0 nova_compute[243704]: 2025-12-13 04:31:19.532 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:19 compute-0 ceph-mon[75071]: pgmap v1903: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 12 KiB/s wr, 12 op/s
Dec 13 04:31:19 compute-0 nova_compute[243704]: 2025-12-13 04:31:19.650 243708 DEBUG nova.compute.manager [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-changed-3cee184b-357f-406f-84a1-89d14094072c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:31:19 compute-0 nova_compute[243704]: 2025-12-13 04:31:19.650 243708 DEBUG nova.compute.manager [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Refreshing instance network info cache due to event network-changed-3cee184b-357f-406f-84a1-89d14094072c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:31:19 compute-0 nova_compute[243704]: 2025-12-13 04:31:19.651 243708 DEBUG oslo_concurrency.lockutils [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:31:19 compute-0 nova_compute[243704]: 2025-12-13 04:31:19.651 243708 DEBUG oslo_concurrency.lockutils [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:31:19 compute-0 nova_compute[243704]: 2025-12-13 04:31:19.651 243708 DEBUG nova.network.neutron [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Refreshing network info cache for port 3cee184b-357f-406f-84a1-89d14094072c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:31:19 compute-0 sudo[281361]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:31:19 compute-0 sudo[281361]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:19 compute-0 sudo[281361]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:20 compute-0 sudo[281386]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:31:20 compute-0 sudo[281386]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1904: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:31:20 compute-0 sudo[281386]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:31:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:31:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:31:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:31:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:31:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:31:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:31:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:31:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:31:20 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:31:20 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:31:20 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:31:20 compute-0 sudo[281445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:31:20 compute-0 sudo[281445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:20 compute-0 sudo[281445]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:20 compute-0 sudo[281470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:31:20 compute-0 sudo[281470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:21 compute-0 podman[281509]: 2025-12-13 04:31:21.346602989 +0000 UTC m=+0.062208144 container create fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_panini, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Dec 13 04:31:21 compute-0 systemd[1]: Started libpod-conmon-fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc.scope.
Dec 13 04:31:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:31:21 compute-0 podman[281509]: 2025-12-13 04:31:21.327497629 +0000 UTC m=+0.043102854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:21 compute-0 podman[281509]: 2025-12-13 04:31:21.441080587 +0000 UTC m=+0.156685762 container init fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_panini, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Dec 13 04:31:21 compute-0 podman[281509]: 2025-12-13 04:31:21.44744057 +0000 UTC m=+0.163045735 container start fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_panini, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:31:21 compute-0 podman[281509]: 2025-12-13 04:31:21.45151036 +0000 UTC m=+0.167115515 container attach fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_panini, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:31:21 compute-0 xenodochial_panini[281525]: 167 167
Dec 13 04:31:21 compute-0 podman[281509]: 2025-12-13 04:31:21.452518877 +0000 UTC m=+0.168124032 container died fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_panini, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:31:21 compute-0 systemd[1]: libpod-fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc.scope: Deactivated successfully.
Dec 13 04:31:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-4b0837b8711d39925894b081d7aee1de39e8ec44a7e2d4d4943c6b85147d1203-merged.mount: Deactivated successfully.
Dec 13 04:31:21 compute-0 podman[281509]: 2025-12-13 04:31:21.503175366 +0000 UTC m=+0.218780521 container remove fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:31:21 compute-0 systemd[1]: libpod-conmon-fcbb11022e0454cadbe1bf3df9e2f9ffcad871742d6006c01b7273c034d85efc.scope: Deactivated successfully.
Dec 13 04:31:21 compute-0 ceph-mon[75071]: pgmap v1904: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:31:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:31:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:31:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:31:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:31:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:31:21 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:31:21 compute-0 podman[281549]: 2025-12-13 04:31:21.723459154 +0000 UTC m=+0.043319968 container create 005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swanson, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:31:21 compute-0 systemd[1]: Started libpod-conmon-005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24.scope.
Dec 13 04:31:21 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6744dd8cdb122261a2eb389fbcb816b7c4ca59521bf20bc5f813d7d12457a46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6744dd8cdb122261a2eb389fbcb816b7c4ca59521bf20bc5f813d7d12457a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6744dd8cdb122261a2eb389fbcb816b7c4ca59521bf20bc5f813d7d12457a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6744dd8cdb122261a2eb389fbcb816b7c4ca59521bf20bc5f813d7d12457a46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6744dd8cdb122261a2eb389fbcb816b7c4ca59521bf20bc5f813d7d12457a46/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:21 compute-0 podman[281549]: 2025-12-13 04:31:21.70527393 +0000 UTC m=+0.025134764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:21 compute-0 podman[281549]: 2025-12-13 04:31:21.809181235 +0000 UTC m=+0.129042079 container init 005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swanson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:31:21 compute-0 podman[281549]: 2025-12-13 04:31:21.817855571 +0000 UTC m=+0.137716375 container start 005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:31:21 compute-0 podman[281549]: 2025-12-13 04:31:21.821352226 +0000 UTC m=+0.141213040 container attach 005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swanson, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:31:22 compute-0 optimistic_swanson[281566]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:31:22 compute-0 optimistic_swanson[281566]: --> All data devices are unavailable
Dec 13 04:31:22 compute-0 systemd[1]: libpod-005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24.scope: Deactivated successfully.
Dec 13 04:31:22 compute-0 podman[281549]: 2025-12-13 04:31:22.347725207 +0000 UTC m=+0.667586081 container died 005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swanson, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Dec 13 04:31:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-d6744dd8cdb122261a2eb389fbcb816b7c4ca59521bf20bc5f813d7d12457a46-merged.mount: Deactivated successfully.
Dec 13 04:31:22 compute-0 podman[281549]: 2025-12-13 04:31:22.411966934 +0000 UTC m=+0.731827778 container remove 005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_swanson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:31:22 compute-0 systemd[1]: libpod-conmon-005bfa13214a982be3f532e8c1759a9b6317362a694a2a727beffabc466f5a24.scope: Deactivated successfully.
Dec 13 04:31:22 compute-0 sudo[281470]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:22 compute-0 sudo[281597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:31:22 compute-0 sudo[281597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:22 compute-0 sudo[281597]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:22 compute-0 nova_compute[243704]: 2025-12-13 04:31:22.600 243708 DEBUG nova.network.neutron [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updated VIF entry in instance network info cache for port 3cee184b-357f-406f-84a1-89d14094072c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:31:22 compute-0 nova_compute[243704]: 2025-12-13 04:31:22.601 243708 DEBUG nova.network.neutron [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updating instance_info_cache with network_info: [{"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:31:22 compute-0 sudo[281622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:31:22 compute-0 sudo[281622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:22 compute-0 nova_compute[243704]: 2025-12-13 04:31:22.617 243708 DEBUG oslo_concurrency.lockutils [req-34254a84-b59a-4b08-80c8-fce66e1587d8 req-3e4d060b-6467-411b-8669-208da09d1d04 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:31:22 compute-0 podman[281660]: 2025-12-13 04:31:22.903840337 +0000 UTC m=+0.046904206 container create 57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cohen, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Dec 13 04:31:22 compute-0 systemd[1]: Started libpod-conmon-57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d.scope.
Dec 13 04:31:22 compute-0 podman[281659]: 2025-12-13 04:31:22.945071637 +0000 UTC m=+0.083132420 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec 13 04:31:22 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:31:22 compute-0 podman[281660]: 2025-12-13 04:31:22.886846844 +0000 UTC m=+0.029910743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:22 compute-0 podman[281660]: 2025-12-13 04:31:22.991302785 +0000 UTC m=+0.134366704 container init 57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cohen, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:31:22 compute-0 podman[281660]: 2025-12-13 04:31:22.998714346 +0000 UTC m=+0.141778215 container start 57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:31:23 compute-0 podman[281660]: 2025-12-13 04:31:23.002579301 +0000 UTC m=+0.145643200 container attach 57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cohen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Dec 13 04:31:23 compute-0 peaceful_cohen[281699]: 167 167
Dec 13 04:31:23 compute-0 systemd[1]: libpod-57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d.scope: Deactivated successfully.
Dec 13 04:31:23 compute-0 conmon[281699]: conmon 57fa25423008961fc68c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d.scope/container/memory.events
Dec 13 04:31:23 compute-0 podman[281660]: 2025-12-13 04:31:23.005562522 +0000 UTC m=+0.148626381 container died 57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cohen, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Dec 13 04:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-95387e1c918bfb0d361205e3dacdc9a75215ebab1ce63887010ee648d5c34630-merged.mount: Deactivated successfully.
Dec 13 04:31:23 compute-0 podman[281660]: 2025-12-13 04:31:23.041073578 +0000 UTC m=+0.184137447 container remove 57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:31:23 compute-0 systemd[1]: libpod-conmon-57fa25423008961fc68c0f2bc41c99bc0ce179fc8e1d117500dc83462e1b7b8d.scope: Deactivated successfully.
Dec 13 04:31:23 compute-0 podman[281725]: 2025-12-13 04:31:23.24493003 +0000 UTC m=+0.043111403 container create e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:31:23 compute-0 systemd[1]: Started libpod-conmon-e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc.scope.
Dec 13 04:31:23 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c897425f7f3611df730fa872bb83e3fb17ff25db5776a0ad7210b9b63b724721/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c897425f7f3611df730fa872bb83e3fb17ff25db5776a0ad7210b9b63b724721/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c897425f7f3611df730fa872bb83e3fb17ff25db5776a0ad7210b9b63b724721/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c897425f7f3611df730fa872bb83e3fb17ff25db5776a0ad7210b9b63b724721/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:23 compute-0 podman[281725]: 2025-12-13 04:31:23.226003306 +0000 UTC m=+0.024184699 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:23 compute-0 podman[281725]: 2025-12-13 04:31:23.321295137 +0000 UTC m=+0.119476530 container init e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_franklin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:31:23 compute-0 podman[281725]: 2025-12-13 04:31:23.327112135 +0000 UTC m=+0.125293508 container start e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:23 compute-0 podman[281725]: 2025-12-13 04:31:23.330494527 +0000 UTC m=+0.128675930 container attach e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:23 compute-0 competent_franklin[281742]: {
Dec 13 04:31:23 compute-0 competent_franklin[281742]:     "0": [
Dec 13 04:31:23 compute-0 competent_franklin[281742]:         {
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "devices": [
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "/dev/loop3"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             ],
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_name": "ceph_lv0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_size": "21470642176",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "name": "ceph_lv0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "tags": {
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cluster_name": "ceph",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.crush_device_class": "",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.encrypted": "0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.objectstore": "bluestore",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osd_id": "0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.type": "block",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.vdo": "0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.with_tpm": "0"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             },
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "type": "block",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "vg_name": "ceph_vg0"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:         }
Dec 13 04:31:23 compute-0 competent_franklin[281742]:     ],
Dec 13 04:31:23 compute-0 competent_franklin[281742]:     "1": [
Dec 13 04:31:23 compute-0 competent_franklin[281742]:         {
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "devices": [
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "/dev/loop4"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             ],
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_name": "ceph_lv1",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_size": "21470642176",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "name": "ceph_lv1",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "tags": {
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cluster_name": "ceph",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.crush_device_class": "",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.encrypted": "0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.objectstore": "bluestore",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osd_id": "1",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.type": "block",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.vdo": "0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.with_tpm": "0"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             },
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "type": "block",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "vg_name": "ceph_vg1"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:         }
Dec 13 04:31:23 compute-0 competent_franklin[281742]:     ],
Dec 13 04:31:23 compute-0 competent_franklin[281742]:     "2": [
Dec 13 04:31:23 compute-0 competent_franklin[281742]:         {
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "devices": [
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "/dev/loop5"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             ],
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_name": "ceph_lv2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_size": "21470642176",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "name": "ceph_lv2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "tags": {
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.cluster_name": "ceph",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.crush_device_class": "",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.encrypted": "0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.objectstore": "bluestore",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osd_id": "2",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.type": "block",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.vdo": "0",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:                 "ceph.with_tpm": "0"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             },
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "type": "block",
Dec 13 04:31:23 compute-0 competent_franklin[281742]:             "vg_name": "ceph_vg2"
Dec 13 04:31:23 compute-0 competent_franklin[281742]:         }
Dec 13 04:31:23 compute-0 competent_franklin[281742]:     ]
Dec 13 04:31:23 compute-0 competent_franklin[281742]: }
Dec 13 04:31:23 compute-0 ceph-mon[75071]: pgmap v1905: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Dec 13 04:31:23 compute-0 systemd[1]: libpod-e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc.scope: Deactivated successfully.
Dec 13 04:31:23 compute-0 conmon[281742]: conmon e626745b4f5af655c061 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc.scope/container/memory.events
Dec 13 04:31:23 compute-0 podman[281751]: 2025-12-13 04:31:23.677649046 +0000 UTC m=+0.026958014 container died e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Dec 13 04:31:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-c897425f7f3611df730fa872bb83e3fb17ff25db5776a0ad7210b9b63b724721-merged.mount: Deactivated successfully.
Dec 13 04:31:23 compute-0 podman[281751]: 2025-12-13 04:31:23.712199075 +0000 UTC m=+0.061508023 container remove e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:31:23 compute-0 systemd[1]: libpod-conmon-e626745b4f5af655c061a7c23bca39e3069d1da8ae7a1f23c18a7f1e2c6103bc.scope: Deactivated successfully.
Dec 13 04:31:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:23 compute-0 sudo[281622]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:23 compute-0 sudo[281766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:31:23 compute-0 sudo[281766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:23 compute-0 sudo[281766]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:23 compute-0 nova_compute[243704]: 2025-12-13 04:31:23.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:23 compute-0 sudo[281791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:31:23 compute-0 sudo[281791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:23 compute-0 nova_compute[243704]: 2025-12-13 04:31:23.897 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:23 compute-0 nova_compute[243704]: 2025-12-13 04:31:23.897 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:23 compute-0 nova_compute[243704]: 2025-12-13 04:31:23.897 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:23 compute-0 nova_compute[243704]: 2025-12-13 04:31:23.898 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:31:23 compute-0 nova_compute[243704]: 2025-12-13 04:31:23.898 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1906: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:31:24 compute-0 podman[281849]: 2025-12-13 04:31:24.190934831 +0000 UTC m=+0.042201778 container create cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.222 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:24 compute-0 systemd[1]: Started libpod-conmon-cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6.scope.
Dec 13 04:31:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:31:24 compute-0 podman[281849]: 2025-12-13 04:31:24.264898902 +0000 UTC m=+0.116165869 container init cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:24 compute-0 podman[281849]: 2025-12-13 04:31:24.174282698 +0000 UTC m=+0.025549685 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:24 compute-0 podman[281849]: 2025-12-13 04:31:24.271547463 +0000 UTC m=+0.122814410 container start cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:31:24 compute-0 podman[281849]: 2025-12-13 04:31:24.274807622 +0000 UTC m=+0.126074589 container attach cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Dec 13 04:31:24 compute-0 fervent_kalam[281865]: 167 167
Dec 13 04:31:24 compute-0 systemd[1]: libpod-cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6.scope: Deactivated successfully.
Dec 13 04:31:24 compute-0 conmon[281865]: conmon cc87131c807dfaee1bbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6.scope/container/memory.events
Dec 13 04:31:24 compute-0 podman[281849]: 2025-12-13 04:31:24.278016249 +0000 UTC m=+0.129283196 container died cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Dec 13 04:31:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-051cd36b7d6f0420f893c2280c2f33c8784320488895cec5e5fe70d67e6d528c-merged.mount: Deactivated successfully.
Dec 13 04:31:24 compute-0 podman[281849]: 2025-12-13 04:31:24.315494477 +0000 UTC m=+0.166761424 container remove cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:31:24 compute-0 systemd[1]: libpod-conmon-cc87131c807dfaee1bbef706c31dc97f158436b557420998dd0ad1a03163e7b6.scope: Deactivated successfully.
Dec 13 04:31:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:31:24 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2221539722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:24 compute-0 podman[281888]: 2025-12-13 04:31:24.504035064 +0000 UTC m=+0.047146693 container create 589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.532 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.634s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.534 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:24 compute-0 systemd[1]: Started libpod-conmon-589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201.scope.
Dec 13 04:31:24 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c705b5739bfe50a07f0f52374293a7f0d110416bc91dc275b7bf8ac13dad87f0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:24 compute-0 podman[281888]: 2025-12-13 04:31:24.485679164 +0000 UTC m=+0.028790823 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c705b5739bfe50a07f0f52374293a7f0d110416bc91dc275b7bf8ac13dad87f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c705b5739bfe50a07f0f52374293a7f0d110416bc91dc275b7bf8ac13dad87f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c705b5739bfe50a07f0f52374293a7f0d110416bc91dc275b7bf8ac13dad87f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:31:24 compute-0 podman[281888]: 2025-12-13 04:31:24.593145117 +0000 UTC m=+0.136256766 container init 589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:31:24 compute-0 podman[281888]: 2025-12-13 04:31:24.605112572 +0000 UTC m=+0.148224241 container start 589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default)
Dec 13 04:31:24 compute-0 podman[281888]: 2025-12-13 04:31:24.609034658 +0000 UTC m=+0.152146307 container attach 589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:31:24 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2221539722' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.616 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.617 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.791 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.792 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4071MB free_disk=59.98790785577148GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.793 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.793 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.859 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 40983263-6826-4731-ac5e-96d549b1e08c actively managed on this compute host and has allocations in placement: {'resources': {'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.860 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.860 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:31:24 compute-0 nova_compute[243704]: 2025-12-13 04:31:24.905 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:25 compute-0 lvm[282005]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:31:25 compute-0 lvm[282005]: VG ceph_vg1 finished
Dec 13 04:31:25 compute-0 lvm[282004]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:31:25 compute-0 lvm[282004]: VG ceph_vg0 finished
Dec 13 04:31:25 compute-0 lvm[282007]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:31:25 compute-0 lvm[282007]: VG ceph_vg2 finished
Dec 13 04:31:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:31:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277404930' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:25 compute-0 nova_compute[243704]: 2025-12-13 04:31:25.493 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.588s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:25 compute-0 nova_compute[243704]: 2025-12-13 04:31:25.501 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:31:25 compute-0 nova_compute[243704]: 2025-12-13 04:31:25.513 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:31:25 compute-0 nova_compute[243704]: 2025-12-13 04:31:25.532 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:31:25 compute-0 nova_compute[243704]: 2025-12-13 04:31:25.532 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:25 compute-0 dazzling_almeida[281905]: {}
Dec 13 04:31:25 compute-0 systemd[1]: libpod-589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201.scope: Deactivated successfully.
Dec 13 04:31:25 compute-0 systemd[1]: libpod-589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201.scope: Consumed 1.506s CPU time.
Dec 13 04:31:25 compute-0 podman[281888]: 2025-12-13 04:31:25.599531488 +0000 UTC m=+1.142643137 container died 589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Dec 13 04:31:25 compute-0 ceph-mon[75071]: pgmap v1906: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Dec 13 04:31:25 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1277404930' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c705b5739bfe50a07f0f52374293a7f0d110416bc91dc275b7bf8ac13dad87f0-merged.mount: Deactivated successfully.
Dec 13 04:31:25 compute-0 podman[281888]: 2025-12-13 04:31:25.658935233 +0000 UTC m=+1.202046902 container remove 589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_almeida, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:31:25 compute-0 systemd[1]: libpod-conmon-589be5a8672176389d672f2ec8bf3f7d6cd0b00761efde509a2c9282dad26201.scope: Deactivated successfully.
Dec 13 04:31:25 compute-0 sudo[281791]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:31:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:31:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:31:25 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:31:25 compute-0 sudo[282023]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:31:25 compute-0 sudo[282023]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:31:25 compute-0 sudo[282023]: pam_unix(sudo:session): session closed for user root
Dec 13 04:31:25 compute-0 ovn_controller[145204]: 2025-12-13T04:31:25Z|00072|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.4
Dec 13 04:31:25 compute-0 ovn_controller[145204]: 2025-12-13T04:31:25Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b4:c1:48 10.100.0.4
Dec 13 04:31:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 67 op/s
Dec 13 04:31:26 compute-0 nova_compute[243704]: 2025-12-13 04:31:26.527 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:31:26 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:31:26 compute-0 nova_compute[243704]: 2025-12-13 04:31:26.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:26 compute-0 nova_compute[243704]: 2025-12-13 04:31:26.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:31:26 compute-0 nova_compute[243704]: 2025-12-13 04:31:26.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:31:27 compute-0 nova_compute[243704]: 2025-12-13 04:31:27.604 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:31:27 compute-0 nova_compute[243704]: 2025-12-13 04:31:27.604 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:31:27 compute-0 nova_compute[243704]: 2025-12-13 04:31:27.604 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:31:27 compute-0 nova_compute[243704]: 2025-12-13 04:31:27.605 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 40983263-6826-4731-ac5e-96d549b1e08c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:31:27 compute-0 ceph-mon[75071]: pgmap v1907: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 67 op/s
Dec 13 04:31:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 85 B/s wr, 64 op/s
Dec 13 04:31:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:29 compute-0 ceph-mon[75071]: pgmap v1908: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 85 B/s wr, 64 op/s
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.148 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updating instance_info_cache with network_info: [{"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.165 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-40983263-6826-4731-ac5e-96d549b1e08c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.166 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.168 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.168 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.225 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.573 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:29 compute-0 nova_compute[243704]: 2025-12-13 04:31:29.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:29 compute-0 ovn_controller[145204]: 2025-12-13T04:31:29Z|00074|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.14 does not match offer 10.100.0.4
Dec 13 04:31:29 compute-0 ovn_controller[145204]: 2025-12-13T04:31:29Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:b4:c1:48 10.100.0.4
Dec 13 04:31:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 KiB/s wr, 104 op/s
Dec 13 04:31:30 compute-0 ovn_controller[145204]: 2025-12-13T04:31:30Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b4:c1:48 10.100.0.4
Dec 13 04:31:30 compute-0 ovn_controller[145204]: 2025-12-13T04:31:30Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b4:c1:48 10.100.0.4
Dec 13 04:31:31 compute-0 ceph-mon[75071]: pgmap v1909: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 KiB/s wr, 104 op/s
Dec 13 04:31:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1910: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.5 KiB/s wr, 43 op/s
Dec 13 04:31:32 compute-0 nova_compute[243704]: 2025-12-13 04:31:32.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:32 compute-0 nova_compute[243704]: 2025-12-13 04:31:32.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:32 compute-0 ceph-mon[75071]: pgmap v1910: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 7.5 KiB/s wr, 43 op/s
Dec 13 04:31:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:33 compute-0 nova_compute[243704]: 2025-12-13 04:31:33.875 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:31:33 compute-0 nova_compute[243704]: 2025-12-13 04:31:33.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:31:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Dec 13 04:31:34 compute-0 nova_compute[243704]: 2025-12-13 04:31:34.265 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:34 compute-0 nova_compute[243704]: 2025-12-13 04:31:34.576 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:34 compute-0 podman[282048]: 2025-12-13 04:31:34.980157721 +0000 UTC m=+0.101888101 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:31:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:35.109 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:35.110 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:35.112 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:35 compute-0 ceph-mon[75071]: pgmap v1911: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 603 KiB/s rd, 21 KiB/s wr, 44 op/s
Dec 13 04:31:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 21 KiB/s wr, 43 op/s
Dec 13 04:31:37 compute-0 ceph-mon[75071]: pgmap v1912: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 601 KiB/s rd, 21 KiB/s wr, 43 op/s
Dec 13 04:31:37 compute-0 podman[282067]: 2025-12-13 04:31:37.983628431 +0000 UTC m=+0.126750617 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:31:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1913: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 21 KiB/s wr, 40 op/s
Dec 13 04:31:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:38 compute-0 ceph-mon[75071]: pgmap v1913: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 21 KiB/s wr, 40 op/s
Dec 13 04:31:39 compute-0 nova_compute[243704]: 2025-12-13 04:31:39.270 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:39 compute-0 nova_compute[243704]: 2025-12-13 04:31:39.578 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 25 KiB/s wr, 41 op/s
Dec 13 04:31:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:31:40
Dec 13 04:31:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:31:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:31:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'backups', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms']
Dec 13 04:31:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:31:41 compute-0 ceph-mon[75071]: pgmap v1914: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 586 KiB/s rd, 25 KiB/s wr, 41 op/s
Dec 13 04:31:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s wr, 0 op/s
Dec 13 04:31:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:31:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:31:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:31:43 compute-0 ceph-mon[75071]: pgmap v1915: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s wr, 0 op/s
Dec 13 04:31:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Dec 13 04:31:44 compute-0 nova_compute[243704]: 2025-12-13 04:31:44.331 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:44 compute-0 nova_compute[243704]: 2025-12-13 04:31:44.580 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:45 compute-0 ceph-mon[75071]: pgmap v1916: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s wr, 2 op/s
Dec 13 04:31:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:31:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3971344546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:31:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:31:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3971344546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:31:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:31:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3971344546' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:31:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3971344546' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:31:47 compute-0 ovn_controller[145204]: 2025-12-13T04:31:47Z|00272|memory_trim|INFO|Detected inactivity (last active 30024 ms ago): trimming memory
Dec 13 04:31:47 compute-0 ceph-mon[75071]: pgmap v1917: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:31:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:31:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:49 compute-0 nova_compute[243704]: 2025-12-13 04:31:49.352 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:49 compute-0 ceph-mon[75071]: pgmap v1918: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:31:49 compute-0 nova_compute[243704]: 2025-12-13 04:31:49.581 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:31:51 compute-0 ceph-mon[75071]: pgmap v1919: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s wr, 1 op/s
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.803 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.803 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.803 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.804 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.804 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.806 243708 INFO nova.compute.manager [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Terminating instance
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.808 243708 DEBUG nova.compute.manager [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:31:51 compute-0 kernel: tap3cee184b-35 (unregistering): left promiscuous mode
Dec 13 04:31:51 compute-0 NetworkManager[48899]: <info>  [1765600311.8908] device (tap3cee184b-35): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:31:51 compute-0 ovn_controller[145204]: 2025-12-13T04:31:51Z|00273|binding|INFO|Releasing lport 3cee184b-357f-406f-84a1-89d14094072c from this chassis (sb_readonly=0)
Dec 13 04:31:51 compute-0 ovn_controller[145204]: 2025-12-13T04:31:51Z|00274|binding|INFO|Setting lport 3cee184b-357f-406f-84a1-89d14094072c down in Southbound
Dec 13 04:31:51 compute-0 ovn_controller[145204]: 2025-12-13T04:31:51Z|00275|binding|INFO|Removing iface tap3cee184b-35 ovn-installed in OVS
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.905 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.909 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:51.914 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:c1:48 10.100.0.4'], port_security=['fa:16:3e:b4:c1:48 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '40983263-6826-4731-ac5e-96d549b1e08c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '43c4864e9f844459a882a9e3d0fe477b', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c965583c-d5da-4d08-bde9-d6826733374f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1694f715-18b8-4b37-ba0b-3d969d010dc9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=3cee184b-357f-406f-84a1-89d14094072c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:31:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:51.916 154842 INFO neutron.agent.ovn.metadata.agent [-] Port 3cee184b-357f-406f-84a1-89d14094072c in datapath 2920aa7a-a9cb-45da-a971-38a7ffed2fca unbound from our chassis
Dec 13 04:31:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:51.917 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2920aa7a-a9cb-45da-a971-38a7ffed2fca, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:31:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:51.919 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a0b63216-18a9-4ca6-bef8-60df3be64a3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:51 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:51.919 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca namespace which is not needed anymore
Dec 13 04:31:51 compute-0 nova_compute[243704]: 2025-12-13 04:31:51.931 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:51 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Dec 13 04:31:51 compute-0 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d0000001d.scope: Consumed 16.168s CPU time.
Dec 13 04:31:51 compute-0 systemd-machined[206767]: Machine qemu-29-instance-0000001d terminated.
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.053 243708 INFO nova.virt.libvirt.driver [-] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Instance destroyed successfully.
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.054 243708 DEBUG nova.objects.instance [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lazy-loading 'resources' on Instance uuid 40983263-6826-4731-ac5e-96d549b1e08c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.067 243708 DEBUG nova.virt.libvirt.vif [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:31:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TransferEncryptedVolumeTest-server-748393203',display_name='tempest-TransferEncryptedVolumeTest-server-748393203',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-transferencryptedvolumetest-server-748393203',id=29,image_ref='',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLMoL15I/Bj1iCzujpy2fbJQaP8acoTe69CnGcFY4LdTIJ3D0h1Pc4a7CTYA01uL0z/8kDMYoWefYR6Gi1xv52wWtzjltq0ikSXKbeZ2P8eIjJy+bgEJfSTXKzCSQo26mw==',key_name='tempest-TransferEncryptedVolumeTest-760155197',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:31:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='43c4864e9f844459a882a9e3d0fe477b',ramdisk_id='',reservation_id='r-zpbw71m4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_signature_verified='False',owner_project_name='tempest-TransferEncryptedVolumeTest-1412293480',owner_user_name='tempest-TransferEncryptedVolumeTest-1412293480-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:31:13Z,user_data=None,user_id='deba56fa45214f28a3aab4d031dc4155',uuid=40983263-6826-4731-ac5e-96d549b1e08c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.069 243708 DEBUG nova.network.os_vif_util [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converting VIF {"id": "3cee184b-357f-406f-84a1-89d14094072c", "address": "fa:16:3e:b4:c1:48", "network": {"id": "2920aa7a-a9cb-45da-a971-38a7ffed2fca", "bridge": "br-int", "label": "tempest-TransferEncryptedVolumeTest-213407131-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "43c4864e9f844459a882a9e3d0fe477b", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cee184b-35", "ovs_interfaceid": "3cee184b-357f-406f-84a1-89d14094072c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.070 243708 DEBUG nova.network.os_vif_util [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c1:48,bridge_name='br-int',has_traffic_filtering=True,id=3cee184b-357f-406f-84a1-89d14094072c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cee184b-35') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.071 243708 DEBUG os_vif [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c1:48,bridge_name='br-int',has_traffic_filtering=True,id=3cee184b-357f-406f-84a1-89d14094072c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cee184b-35') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.074 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.075 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cee184b-35, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.077 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.080 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.084 243708 INFO os_vif [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c1:48,bridge_name='br-int',has_traffic_filtering=True,id=3cee184b-357f-406f-84a1-89d14094072c,network=Network(2920aa7a-a9cb-45da-a971-38a7ffed2fca),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cee184b-35')
Dec 13 04:31:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [NOTICE]   (281344) : haproxy version is 2.8.14-c23fe91
Dec 13 04:31:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [NOTICE]   (281344) : path to executable is /usr/sbin/haproxy
Dec 13 04:31:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [WARNING]  (281344) : Exiting Master process...
Dec 13 04:31:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [WARNING]  (281344) : Exiting Master process...
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Dec 13 04:31:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [ALERT]    (281344) : Current worker (281346) exited with code 143 (Terminated)
Dec 13 04:31:52 compute-0 neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca[281340]: [WARNING]  (281344) : All workers exited. Exiting... (0)
Dec 13 04:31:52 compute-0 systemd[1]: libpod-e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78.scope: Deactivated successfully.
Dec 13 04:31:52 compute-0 podman[282112]: 2025-12-13 04:31:52.098333995 +0000 UTC m=+0.068669429 container died e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78-userdata-shm.mount: Deactivated successfully.
Dec 13 04:31:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-98955b7703e2783d4fe9464caa65ea4bba98b150d27306e2276d26b910ae96cd-merged.mount: Deactivated successfully.
Dec 13 04:31:52 compute-0 podman[282112]: 2025-12-13 04:31:52.138375833 +0000 UTC m=+0.108711277 container cleanup e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec 13 04:31:52 compute-0 systemd[1]: libpod-conmon-e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78.scope: Deactivated successfully.
Dec 13 04:31:52 compute-0 podman[282168]: 2025-12-13 04:31:52.206339802 +0000 UTC m=+0.039853446 container remove e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.214 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[945d04f3-e8a7-497a-8aa6-8a0e362c09a7]: (4, ('Sat Dec 13 04:31:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78)\ne2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78\nSat Dec 13 04:31:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca (e2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78)\ne2893f0125e14cfbf28a2b4f4bb776c853e4e99c81844efebe982a337c804a78\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.215 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[977cd5b1-9279-4b5e-9269-2aadcc8142ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.216 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2920aa7a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:52 compute-0 kernel: tap2920aa7a-a0: left promiscuous mode
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.219 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.232 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.235 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[889c8163-1bda-4e89-a338-ddc0a852e402]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.247 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[385b7a2f-a764-4b21-8cd1-232801732635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.249 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[041a2b91-1fb7-4e84-8c92-e7767bb90622]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.261 243708 INFO nova.virt.libvirt.driver [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Deleting instance files /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c_del
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.262 243708 INFO nova.virt.libvirt.driver [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Deletion of /var/lib/nova/instances/40983263-6826-4731-ac5e-96d549b1e08c_del complete
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.265 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc97e34-f33d-4d13-86e8-972296d2aa07]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 500921, 'reachable_time': 27775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282184, 'error': None, 'target': 'ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.266 243708 DEBUG nova.compute.manager [req-e8a9dbd6-9954-406a-9aa8-b2a0c2969663 req-e52bcd96-fb2b-4bf8-b4be-361ef5e4b3c9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-vif-unplugged-3cee184b-357f-406f-84a1-89d14094072c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.267 243708 DEBUG oslo_concurrency.lockutils [req-e8a9dbd6-9954-406a-9aa8-b2a0c2969663 req-e52bcd96-fb2b-4bf8-b4be-361ef5e4b3c9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.267 243708 DEBUG oslo_concurrency.lockutils [req-e8a9dbd6-9954-406a-9aa8-b2a0c2969663 req-e52bcd96-fb2b-4bf8-b4be-361ef5e4b3c9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.268 243708 DEBUG oslo_concurrency.lockutils [req-e8a9dbd6-9954-406a-9aa8-b2a0c2969663 req-e52bcd96-fb2b-4bf8-b4be-361ef5e4b3c9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.268 243708 DEBUG nova.compute.manager [req-e8a9dbd6-9954-406a-9aa8-b2a0c2969663 req-e52bcd96-fb2b-4bf8-b4be-361ef5e4b3c9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] No waiting events found dispatching network-vif-unplugged-3cee184b-357f-406f-84a1-89d14094072c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.268 243708 DEBUG nova.compute.manager [req-e8a9dbd6-9954-406a-9aa8-b2a0c2969663 req-e52bcd96-fb2b-4bf8-b4be-361ef5e4b3c9 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-vif-unplugged-3cee184b-357f-406f-84a1-89d14094072c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.268 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2920aa7a-a9cb-45da-a971-38a7ffed2fca deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:31:52 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:52.269 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[38c136ba-30d5-47f3-98de-30b9964aaa21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:31:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d2920aa7a\x2da9cb\x2d45da\x2da971\x2d38a7ffed2fca.mount: Deactivated successfully.
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.306 243708 INFO nova.compute.manager [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Took 0.50 seconds to destroy the instance on the hypervisor.
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.307 243708 DEBUG oslo.service.loopingcall [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.307 243708 DEBUG nova.compute.manager [-] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:31:52 compute-0 nova_compute[243704]: 2025-12-13 04:31:52.308 243708 DEBUG nova.network.neutron [-] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.224173404062416e-06 of space, bias 1.0, pg target 0.001867252021218725 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.005453152171241327 of space, bias 1.0, pg target 1.635945651372398 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4694184535973517e-06 of space, bias 1.0, pg target 0.0007383561176256082 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667012821194281 of space, bias 1.0, pg target 0.199343683353709 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.310704469665261e-06 of space, bias 4.0, pg target 0.0015676025457196522 quantized to 16 (current 16)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011408172983004493 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012548990281304943 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:31:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Dec 13 04:31:53 compute-0 ceph-mon[75071]: pgmap v1920: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s wr, 1 op/s
Dec 13 04:31:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:53 compute-0 podman[282185]: 2025-12-13 04:31:53.983933671 +0000 UTC m=+0.123764136 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:31:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:54.015 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:31:54 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:54.016 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.016 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.064 243708 DEBUG nova.network.neutron [-] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.078 243708 INFO nova.compute.manager [-] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Took 1.77 seconds to deallocate network for instance.
Dec 13 04:31:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 11 KiB/s wr, 19 op/s
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.251 243708 INFO nova.compute.manager [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Took 0.17 seconds to detach 1 volumes for instance.
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.302 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.302 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.354 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.366 243708 DEBUG oslo_concurrency.processutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.406 243708 DEBUG nova.compute.manager [req-8aec4190-b4f3-4af7-8e10-299a973b4053 req-f83e815a-b6cc-4771-9bea-dc17fea34866 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.407 243708 DEBUG oslo_concurrency.lockutils [req-8aec4190-b4f3-4af7-8e10-299a973b4053 req-f83e815a-b6cc-4771-9bea-dc17fea34866 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "40983263-6826-4731-ac5e-96d549b1e08c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.408 243708 DEBUG oslo_concurrency.lockutils [req-8aec4190-b4f3-4af7-8e10-299a973b4053 req-f83e815a-b6cc-4771-9bea-dc17fea34866 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.408 243708 DEBUG oslo_concurrency.lockutils [req-8aec4190-b4f3-4af7-8e10-299a973b4053 req-f83e815a-b6cc-4771-9bea-dc17fea34866 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.409 243708 DEBUG nova.compute.manager [req-8aec4190-b4f3-4af7-8e10-299a973b4053 req-f83e815a-b6cc-4771-9bea-dc17fea34866 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] No waiting events found dispatching network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.410 243708 WARNING nova.compute.manager [req-8aec4190-b4f3-4af7-8e10-299a973b4053 req-f83e815a-b6cc-4771-9bea-dc17fea34866 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received unexpected event network-vif-plugged-3cee184b-357f-406f-84a1-89d14094072c for instance with vm_state deleted and task_state None.
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.410 243708 DEBUG nova.compute.manager [req-8aec4190-b4f3-4af7-8e10-299a973b4053 req-f83e815a-b6cc-4771-9bea-dc17fea34866 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Received event network-vif-deleted-3cee184b-357f-406f-84a1-89d14094072c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:31:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:31:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1749490403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.968 243708 DEBUG oslo_concurrency.processutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.977 243708 DEBUG nova.compute.provider_tree [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:31:54 compute-0 nova_compute[243704]: 2025-12-13 04:31:54.995 243708 DEBUG nova.scheduler.client.report [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:31:55 compute-0 nova_compute[243704]: 2025-12-13 04:31:55.027 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:55 compute-0 nova_compute[243704]: 2025-12-13 04:31:55.053 243708 INFO nova.scheduler.client.report [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Deleted allocations for instance 40983263-6826-4731-ac5e-96d549b1e08c
Dec 13 04:31:55 compute-0 nova_compute[243704]: 2025-12-13 04:31:55.128 243708 DEBUG oslo_concurrency.lockutils [None req-f7c90134-37e1-49c1-8e27-12800d1667ef deba56fa45214f28a3aab4d031dc4155 43c4864e9f844459a882a9e3d0fe477b - - default default] Lock "40983263-6826-4731-ac5e-96d549b1e08c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.325s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:31:55 compute-0 ceph-mon[75071]: pgmap v1921: 305 pgs: 305 active+clean; 453 MiB data, 789 MiB used, 59 GiB / 60 GiB avail; 211 KiB/s rd, 11 KiB/s wr, 19 op/s
Dec 13 04:31:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1749490403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:31:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:31:57 compute-0 nova_compute[243704]: 2025-12-13 04:31:57.078 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:57 compute-0 ceph-mon[75071]: pgmap v1922: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:31:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:31:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:31:59 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:31:59.020 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:31:59 compute-0 nova_compute[243704]: 2025-12-13 04:31:59.356 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:31:59 compute-0 ceph-mon[75071]: pgmap v1923: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:31:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:31:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811470328' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:31:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:31:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2811470328' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:32:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:32:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2811470328' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:32:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2811470328' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:32:01 compute-0 ceph-mon[75071]: pgmap v1924: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:32:02 compute-0 nova_compute[243704]: 2025-12-13 04:32:02.080 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:32:02 compute-0 ceph-mon[75071]: pgmap v1925: 305 pgs: 305 active+clean; 453 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 220 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:32:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 301 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Dec 13 04:32:04 compute-0 nova_compute[243704]: 2025-12-13 04:32:04.358 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:05 compute-0 ceph-mon[75071]: pgmap v1926: 305 pgs: 305 active+clean; 301 MiB data, 668 MiB used, 59 GiB / 60 GiB avail; 231 KiB/s rd, 1.2 KiB/s wr, 36 op/s
Dec 13 04:32:05 compute-0 podman[282233]: 2025-12-13 04:32:05.956276406 +0000 UTC m=+0.089912065 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:32:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 596 B/s wr, 19 op/s
Dec 13 04:32:06 compute-0 nova_compute[243704]: 2025-12-13 04:32:06.319 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:06 compute-0 nova_compute[243704]: 2025-12-13 04:32:06.504 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:07 compute-0 nova_compute[243704]: 2025-12-13 04:32:07.051 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765600312.0504916, 40983263-6826-4731-ac5e-96d549b1e08c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:32:07 compute-0 nova_compute[243704]: 2025-12-13 04:32:07.052 243708 INFO nova.compute.manager [-] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] VM Stopped (Lifecycle Event)
Dec 13 04:32:07 compute-0 nova_compute[243704]: 2025-12-13 04:32:07.082 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:07 compute-0 nova_compute[243704]: 2025-12-13 04:32:07.095 243708 DEBUG nova.compute.manager [None req-b7789910-fb1f-4266-9071-916da2cc95d1 - - - - - -] [instance: 40983263-6826-4731-ac5e-96d549b1e08c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:32:07 compute-0 ceph-mon[75071]: pgmap v1927: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 596 B/s wr, 19 op/s
Dec 13 04:32:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:32:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:08 compute-0 podman[282252]: 2025-12-13 04:32:08.941408626 +0000 UTC m=+0.080722955 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec 13 04:32:09 compute-0 nova_compute[243704]: 2025-12-13 04:32:09.361 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:09 compute-0 ceph-mon[75071]: pgmap v1928: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 596 B/s wr, 18 op/s
Dec 13 04:32:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Dec 13 04:32:11 compute-0 ceph-mon[75071]: pgmap v1929: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Dec 13 04:32:12 compute-0 nova_compute[243704]: 2025-12-13 04:32:12.084 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Dec 13 04:32:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:13 compute-0 ceph-mon[75071]: pgmap v1930: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Dec 13 04:32:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Dec 13 04:32:14 compute-0 nova_compute[243704]: 2025-12-13 04:32:14.394 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:15 compute-0 ceph-mon[75071]: pgmap v1931: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 597 B/s wr, 18 op/s
Dec 13 04:32:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 13 04:32:17 compute-0 nova_compute[243704]: 2025-12-13 04:32:17.086 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:18 compute-0 ceph-mon[75071]: pgmap v1932: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail; 0 B/s wr, 0 op/s
Dec 13 04:32:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:19 compute-0 ceph-mon[75071]: pgmap v1933: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:19 compute-0 nova_compute[243704]: 2025-12-13 04:32:19.396 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:21 compute-0 ceph-mon[75071]: pgmap v1934: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:22 compute-0 nova_compute[243704]: 2025-12-13 04:32:22.088 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:23 compute-0 ceph-mon[75071]: pgmap v1935: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:24 compute-0 nova_compute[243704]: 2025-12-13 04:32:24.398 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:24 compute-0 nova_compute[243704]: 2025-12-13 04:32:24.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:24 compute-0 nova_compute[243704]: 2025-12-13 04:32:24.900 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:24 compute-0 nova_compute[243704]: 2025-12-13 04:32:24.901 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:24 compute-0 nova_compute[243704]: 2025-12-13 04:32:24.901 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:24 compute-0 nova_compute[243704]: 2025-12-13 04:32:24.902 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:32:24 compute-0 nova_compute[243704]: 2025-12-13 04:32:24.902 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:24 compute-0 podman[282273]: 2025-12-13 04:32:24.943947746 +0000 UTC m=+0.091811288 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec 13 04:32:25 compute-0 ceph-mon[75071]: pgmap v1936: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:32:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/544114568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.457 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.655 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.656 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4278MB free_disk=59.988049106672406GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.656 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.656 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.732 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.732 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:32:25 compute-0 sudo[282324]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:32:25 compute-0 sudo[282324]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:25 compute-0 sudo[282324]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:25 compute-0 nova_compute[243704]: 2025-12-13 04:32:25.917 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:25 compute-0 sudo[282349]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:32:25 compute-0 sudo[282349]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/544114568' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:32:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208977117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:26 compute-0 nova_compute[243704]: 2025-12-13 04:32:26.477 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:26 compute-0 nova_compute[243704]: 2025-12-13 04:32:26.485 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:32:26 compute-0 nova_compute[243704]: 2025-12-13 04:32:26.509 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:32:26 compute-0 nova_compute[243704]: 2025-12-13 04:32:26.544 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:32:26 compute-0 nova_compute[243704]: 2025-12-13 04:32:26.545 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:26 compute-0 sudo[282349]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:32:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:32:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:32:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:32:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:32:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:32:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:32:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:32:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:32:26 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:32:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:32:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:32:26 compute-0 sudo[282427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:32:26 compute-0 sudo[282427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:26 compute-0 sudo[282427]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:26 compute-0 sudo[282452]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:32:26 compute-0 sudo[282452]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:27 compute-0 nova_compute[243704]: 2025-12-13 04:32:27.090 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:27 compute-0 podman[282489]: 2025-12-13 04:32:27.17489317 +0000 UTC m=+0.067044433 container create eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:32:27 compute-0 ceph-mon[75071]: pgmap v1937: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3208977117' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:32:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:32:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:32:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:32:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:32:27 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:32:27 compute-0 systemd[1]: Started libpod-conmon-eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a.scope.
Dec 13 04:32:27 compute-0 podman[282489]: 2025-12-13 04:32:27.142376697 +0000 UTC m=+0.034528000 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:32:27 compute-0 podman[282489]: 2025-12-13 04:32:27.291267625 +0000 UTC m=+0.183418938 container init eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mclaren, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:32:27 compute-0 podman[282489]: 2025-12-13 04:32:27.305715408 +0000 UTC m=+0.197866651 container start eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mclaren, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:32:27 compute-0 podman[282489]: 2025-12-13 04:32:27.309414658 +0000 UTC m=+0.201565921 container attach eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mclaren, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:32:27 compute-0 dreamy_mclaren[282505]: 167 167
Dec 13 04:32:27 compute-0 systemd[1]: libpod-eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a.scope: Deactivated successfully.
Dec 13 04:32:27 compute-0 podman[282489]: 2025-12-13 04:32:27.315115733 +0000 UTC m=+0.207266966 container died eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:32:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-5a94a131796be5be1794405392ffcb8fc4115ed1c956d84f2caa8702451567dd-merged.mount: Deactivated successfully.
Dec 13 04:32:27 compute-0 podman[282489]: 2025-12-13 04:32:27.364123296 +0000 UTC m=+0.256274529 container remove eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dreamy_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:32:27 compute-0 systemd[1]: libpod-conmon-eadaeb1125e9752140ca93ce546098936681f31acafc3a5b5e7dd0ceed33341a.scope: Deactivated successfully.
Dec 13 04:32:27 compute-0 nova_compute[243704]: 2025-12-13 04:32:27.540 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:27 compute-0 podman[282529]: 2025-12-13 04:32:27.607605626 +0000 UTC m=+0.064697911 container create e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:32:27 compute-0 systemd[1]: Started libpod-conmon-e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac.scope.
Dec 13 04:32:27 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:32:27 compute-0 podman[282529]: 2025-12-13 04:32:27.585757641 +0000 UTC m=+0.042849956 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d0de4ea191ea2aa9f585ac72639df5b356f6e400ba74dda4fedeb75c264af0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d0de4ea191ea2aa9f585ac72639df5b356f6e400ba74dda4fedeb75c264af0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d0de4ea191ea2aa9f585ac72639df5b356f6e400ba74dda4fedeb75c264af0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d0de4ea191ea2aa9f585ac72639df5b356f6e400ba74dda4fedeb75c264af0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14d0de4ea191ea2aa9f585ac72639df5b356f6e400ba74dda4fedeb75c264af0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:27 compute-0 podman[282529]: 2025-12-13 04:32:27.700786279 +0000 UTC m=+0.157878614 container init e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:32:27 compute-0 podman[282529]: 2025-12-13 04:32:27.709222708 +0000 UTC m=+0.166314993 container start e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Dec 13 04:32:27 compute-0 podman[282529]: 2025-12-13 04:32:27.712872677 +0000 UTC m=+0.169965002 container attach e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1938: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:28 compute-0 quizzical_grothendieck[282545]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:32:28 compute-0 quizzical_grothendieck[282545]: --> All data devices are unavailable
Dec 13 04:32:28 compute-0 systemd[1]: libpod-e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac.scope: Deactivated successfully.
Dec 13 04:32:28 compute-0 podman[282529]: 2025-12-13 04:32:28.255380757 +0000 UTC m=+0.712473102 container died e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-14d0de4ea191ea2aa9f585ac72639df5b356f6e400ba74dda4fedeb75c264af0-merged.mount: Deactivated successfully.
Dec 13 04:32:28 compute-0 podman[282529]: 2025-12-13 04:32:28.30219988 +0000 UTC m=+0.759292155 container remove e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_grothendieck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:28 compute-0 systemd[1]: libpod-conmon-e24a6b9d1474c0ed743071f84abbcc41a74735f0c61e9d1fdd9a618c08581bac.scope: Deactivated successfully.
Dec 13 04:32:28 compute-0 sudo[282452]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:28 compute-0 sudo[282577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:32:28 compute-0 sudo[282577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:28 compute-0 sudo[282577]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:28 compute-0 sudo[282602]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:32:28 compute-0 sudo[282602]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:28 compute-0 podman[282639]: 2025-12-13 04:32:28.731994125 +0000 UTC m=+0.049697092 container create af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:32:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:28 compute-0 systemd[1]: Started libpod-conmon-af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91.scope.
Dec 13 04:32:28 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:32:28 compute-0 podman[282639]: 2025-12-13 04:32:28.708407095 +0000 UTC m=+0.026110102 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:28 compute-0 podman[282639]: 2025-12-13 04:32:28.821053057 +0000 UTC m=+0.138756054 container init af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:32:28 compute-0 podman[282639]: 2025-12-13 04:32:28.831265614 +0000 UTC m=+0.148968581 container start af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:32:28 compute-0 podman[282639]: 2025-12-13 04:32:28.834589845 +0000 UTC m=+0.152292912 container attach af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:32:28 compute-0 brave_solomon[282655]: 167 167
Dec 13 04:32:28 compute-0 systemd[1]: libpod-af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91.scope: Deactivated successfully.
Dec 13 04:32:28 compute-0 podman[282639]: 2025-12-13 04:32:28.838862081 +0000 UTC m=+0.156565048 container died af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Dec 13 04:32:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-4217da6341d41f18cea2ecfdc0de5ac2ca8e18f3eb0930df2d656ce10f6b6837-merged.mount: Deactivated successfully.
Dec 13 04:32:28 compute-0 nova_compute[243704]: 2025-12-13 04:32:28.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:28 compute-0 nova_compute[243704]: 2025-12-13 04:32:28.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:32:28 compute-0 nova_compute[243704]: 2025-12-13 04:32:28.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:32:28 compute-0 podman[282639]: 2025-12-13 04:32:28.887005569 +0000 UTC m=+0.204708526 container remove af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Dec 13 04:32:28 compute-0 nova_compute[243704]: 2025-12-13 04:32:28.891 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:32:28 compute-0 nova_compute[243704]: 2025-12-13 04:32:28.891 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:28 compute-0 systemd[1]: libpod-conmon-af318490f9df013ce255de67ce20dee4200d3f1338b2d70827155d9d15830b91.scope: Deactivated successfully.
Dec 13 04:32:29 compute-0 podman[282679]: 2025-12-13 04:32:29.128218568 +0000 UTC m=+0.046240188 container create b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:32:29 compute-0 systemd[1]: Started libpod-conmon-b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce.scope.
Dec 13 04:32:29 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921fb63cfa07f91ae8364eba75084e7c38572e4cc1b5b2805b90a14da50bedc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921fb63cfa07f91ae8364eba75084e7c38572e4cc1b5b2805b90a14da50bedc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921fb63cfa07f91ae8364eba75084e7c38572e4cc1b5b2805b90a14da50bedc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/921fb63cfa07f91ae8364eba75084e7c38572e4cc1b5b2805b90a14da50bedc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:29 compute-0 podman[282679]: 2025-12-13 04:32:29.111071001 +0000 UTC m=+0.029092671 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:29 compute-0 ceph-mon[75071]: pgmap v1938: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:29 compute-0 podman[282679]: 2025-12-13 04:32:29.214307958 +0000 UTC m=+0.132329588 container init b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_williams, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:32:29 compute-0 podman[282679]: 2025-12-13 04:32:29.225949585 +0000 UTC m=+0.143971225 container start b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_williams, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:32:29 compute-0 podman[282679]: 2025-12-13 04:32:29.22980041 +0000 UTC m=+0.147822040 container attach b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_williams, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:32:29 compute-0 nova_compute[243704]: 2025-12-13 04:32:29.400 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:29 compute-0 jolly_williams[282696]: {
Dec 13 04:32:29 compute-0 jolly_williams[282696]:     "0": [
Dec 13 04:32:29 compute-0 jolly_williams[282696]:         {
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "devices": [
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "/dev/loop3"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             ],
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_name": "ceph_lv0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_size": "21470642176",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "name": "ceph_lv0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "tags": {
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cluster_name": "ceph",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.crush_device_class": "",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.encrypted": "0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.objectstore": "bluestore",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osd_id": "0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.type": "block",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.vdo": "0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.with_tpm": "0"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             },
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "type": "block",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "vg_name": "ceph_vg0"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:         }
Dec 13 04:32:29 compute-0 jolly_williams[282696]:     ],
Dec 13 04:32:29 compute-0 jolly_williams[282696]:     "1": [
Dec 13 04:32:29 compute-0 jolly_williams[282696]:         {
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "devices": [
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "/dev/loop4"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             ],
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_name": "ceph_lv1",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_size": "21470642176",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "name": "ceph_lv1",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "tags": {
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cluster_name": "ceph",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.crush_device_class": "",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.encrypted": "0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.objectstore": "bluestore",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osd_id": "1",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.type": "block",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.vdo": "0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.with_tpm": "0"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             },
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "type": "block",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "vg_name": "ceph_vg1"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:         }
Dec 13 04:32:29 compute-0 jolly_williams[282696]:     ],
Dec 13 04:32:29 compute-0 jolly_williams[282696]:     "2": [
Dec 13 04:32:29 compute-0 jolly_williams[282696]:         {
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "devices": [
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "/dev/loop5"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             ],
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_name": "ceph_lv2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_size": "21470642176",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "name": "ceph_lv2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "tags": {
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.cluster_name": "ceph",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.crush_device_class": "",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.encrypted": "0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.objectstore": "bluestore",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osd_id": "2",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.type": "block",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.vdo": "0",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:                 "ceph.with_tpm": "0"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             },
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "type": "block",
Dec 13 04:32:29 compute-0 jolly_williams[282696]:             "vg_name": "ceph_vg2"
Dec 13 04:32:29 compute-0 jolly_williams[282696]:         }
Dec 13 04:32:29 compute-0 jolly_williams[282696]:     ]
Dec 13 04:32:29 compute-0 jolly_williams[282696]: }
Dec 13 04:32:29 compute-0 systemd[1]: libpod-b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce.scope: Deactivated successfully.
Dec 13 04:32:29 compute-0 podman[282705]: 2025-12-13 04:32:29.602957965 +0000 UTC m=+0.023960503 container died b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_williams, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:32:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-921fb63cfa07f91ae8364eba75084e7c38572e4cc1b5b2805b90a14da50bedc1-merged.mount: Deactivated successfully.
Dec 13 04:32:29 compute-0 podman[282705]: 2025-12-13 04:32:29.64138243 +0000 UTC m=+0.062384938 container remove b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jolly_williams, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Dec 13 04:32:29 compute-0 systemd[1]: libpod-conmon-b8605309359794260b46fda2c7105df5a0d38234798476fd577d58da72b402ce.scope: Deactivated successfully.
Dec 13 04:32:29 compute-0 sudo[282602]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:29 compute-0 sudo[282720]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:32:29 compute-0 sudo[282720]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:29 compute-0 sudo[282720]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:29 compute-0 sudo[282745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:32:29 compute-0 sudo[282745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:29 compute-0 nova_compute[243704]: 2025-12-13 04:32:29.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:30 compute-0 podman[282782]: 2025-12-13 04:32:30.161078769 +0000 UTC m=+0.045944669 container create 76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_blackburn, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:32:30 compute-0 systemd[1]: Started libpod-conmon-76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7.scope.
Dec 13 04:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:32:30 compute-0 podman[282782]: 2025-12-13 04:32:30.139494812 +0000 UTC m=+0.024360742 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:30 compute-0 podman[282782]: 2025-12-13 04:32:30.239846131 +0000 UTC m=+0.124712051 container init 76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:32:30 compute-0 podman[282782]: 2025-12-13 04:32:30.247383026 +0000 UTC m=+0.132248916 container start 76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:32:30 compute-0 sharp_blackburn[282798]: 167 167
Dec 13 04:32:30 compute-0 podman[282782]: 2025-12-13 04:32:30.251333773 +0000 UTC m=+0.136199713 container attach 76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True)
Dec 13 04:32:30 compute-0 systemd[1]: libpod-76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7.scope: Deactivated successfully.
Dec 13 04:32:30 compute-0 podman[282782]: 2025-12-13 04:32:30.252442074 +0000 UTC m=+0.137307964 container died 76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:32:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-980b1751b393092c9e9a5d660c557617737277f3b7f2033749426c73e1200d57-merged.mount: Deactivated successfully.
Dec 13 04:32:30 compute-0 podman[282782]: 2025-12-13 04:32:30.288945766 +0000 UTC m=+0.173811666 container remove 76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_blackburn, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:32:30 compute-0 systemd[1]: libpod-conmon-76c2b983871d3e1705adda768929ea0087317edf5c352e2537aa7438191a16a7.scope: Deactivated successfully.
Dec 13 04:32:30 compute-0 podman[282820]: 2025-12-13 04:32:30.475007155 +0000 UTC m=+0.053415234 container create aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:32:30 compute-0 systemd[1]: Started libpod-conmon-aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12.scope.
Dec 13 04:32:30 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba828b404402e45c710d821071de387760ad0a24444c88a9443db97e6d066e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba828b404402e45c710d821071de387760ad0a24444c88a9443db97e6d066e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba828b404402e45c710d821071de387760ad0a24444c88a9443db97e6d066e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ba828b404402e45c710d821071de387760ad0a24444c88a9443db97e6d066e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:30 compute-0 podman[282820]: 2025-12-13 04:32:30.45276606 +0000 UTC m=+0.031174219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:32:30 compute-0 podman[282820]: 2025-12-13 04:32:30.548659827 +0000 UTC m=+0.127067946 container init aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:32:30 compute-0 podman[282820]: 2025-12-13 04:32:30.56013997 +0000 UTC m=+0.138548059 container start aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec 13 04:32:30 compute-0 podman[282820]: 2025-12-13 04:32:30.56421278 +0000 UTC m=+0.142620989 container attach aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kare, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:30 compute-0 nova_compute[243704]: 2025-12-13 04:32:30.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:31 compute-0 ceph-mon[75071]: pgmap v1939: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:31 compute-0 lvm[282915]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:32:31 compute-0 lvm[282916]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:32:31 compute-0 lvm[282915]: VG ceph_vg0 finished
Dec 13 04:32:31 compute-0 lvm[282916]: VG ceph_vg1 finished
Dec 13 04:32:31 compute-0 lvm[282918]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:32:31 compute-0 lvm[282918]: VG ceph_vg2 finished
Dec 13 04:32:31 compute-0 lvm[282920]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:32:31 compute-0 lvm[282920]: VG ceph_vg2 finished
Dec 13 04:32:31 compute-0 youthful_kare[282837]: {}
Dec 13 04:32:31 compute-0 systemd[1]: libpod-aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12.scope: Deactivated successfully.
Dec 13 04:32:31 compute-0 systemd[1]: libpod-aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12.scope: Consumed 1.302s CPU time.
Dec 13 04:32:31 compute-0 podman[282820]: 2025-12-13 04:32:31.41492503 +0000 UTC m=+0.993333109 container died aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kare, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:32:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ba828b404402e45c710d821071de387760ad0a24444c88a9443db97e6d066e7-merged.mount: Deactivated successfully.
Dec 13 04:32:31 compute-0 podman[282820]: 2025-12-13 04:32:31.467377305 +0000 UTC m=+1.045785394 container remove aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kare, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Dec 13 04:32:31 compute-0 systemd[1]: libpod-conmon-aca27417e268c3c73fa2f96b8dde6ce5178ab1f12a32df241413df4651f43a12.scope: Deactivated successfully.
Dec 13 04:32:31 compute-0 sudo[282745]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:32:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:32:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:32:31 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:32:31 compute-0 sudo[282932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:32:31 compute-0 sudo[282932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:32:31 compute-0 sudo[282932]: pam_unix(sudo:session): session closed for user root
Dec 13 04:32:32 compute-0 nova_compute[243704]: 2025-12-13 04:32:32.093 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1940: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:32 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:32:32 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:32:32 compute-0 nova_compute[243704]: 2025-12-13 04:32:32.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:33 compute-0 ceph-mon[75071]: pgmap v1940: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1941: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:34 compute-0 nova_compute[243704]: 2025-12-13 04:32:34.402 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:34 compute-0 nova_compute[243704]: 2025-12-13 04:32:34.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:34 compute-0 nova_compute[243704]: 2025-12-13 04:32:34.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:34 compute-0 nova_compute[243704]: 2025-12-13 04:32:34.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:32:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:35.111 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:35.111 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:35.112 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:35 compute-0 ceph-mon[75071]: pgmap v1941: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:36 compute-0 podman[282957]: 2025-12-13 04:32:36.959902266 +0000 UTC m=+0.086336909 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:32:37 compute-0 nova_compute[243704]: 2025-12-13 04:32:37.095 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:37 compute-0 ceph-mon[75071]: pgmap v1942: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:37 compute-0 nova_compute[243704]: 2025-12-13 04:32:37.873 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:38 compute-0 nova_compute[243704]: 2025-12-13 04:32:38.256 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:39 compute-0 nova_compute[243704]: 2025-12-13 04:32:39.403 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:39 compute-0 ceph-mon[75071]: pgmap v1943: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:39 compute-0 podman[282974]: 2025-12-13 04:32:39.990205414 +0000 UTC m=+0.128110444 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:32:40
Dec 13 04:32:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:32:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:32:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'volumes', '.mgr', 'images', 'vms', 'cephfs.cephfs.data', 'default.rgw.log', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.meta']
Dec 13 04:32:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:32:41 compute-0 ceph-mon[75071]: pgmap v1944: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:42 compute-0 nova_compute[243704]: 2025-12-13 04:32:42.096 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1945: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:32:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:32:42 compute-0 nova_compute[243704]: 2025-12-13 04:32:42.878 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:42 compute-0 nova_compute[243704]: 2025-12-13 04:32:42.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec 13 04:32:42 compute-0 nova_compute[243704]: 2025-12-13 04:32:42.892 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:32:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:32:43 compute-0 ceph-mon[75071]: pgmap v1945: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:43 compute-0 nova_compute[243704]: 2025-12-13 04:32:43.902 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:32:43 compute-0 nova_compute[243704]: 2025-12-13 04:32:43.903 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec 13 04:32:43 compute-0 nova_compute[243704]: 2025-12-13 04:32:43.920 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec 13 04:32:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:44 compute-0 nova_compute[243704]: 2025-12-13 04:32:44.406 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:45 compute-0 ceph-mon[75071]: pgmap v1946: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:32:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1465205570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:32:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:32:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1465205570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:32:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1465205570' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:32:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1465205570' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:32:47 compute-0 nova_compute[243704]: 2025-12-13 04:32:47.098 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:47 compute-0 ceph-mon[75071]: pgmap v1947: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:49 compute-0 nova_compute[243704]: 2025-12-13 04:32:49.409 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:49 compute-0 ceph-mon[75071]: pgmap v1948: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:50 compute-0 nova_compute[243704]: 2025-12-13 04:32:50.941 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:50 compute-0 nova_compute[243704]: 2025-12-13 04:32:50.942 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:50 compute-0 nova_compute[243704]: 2025-12-13 04:32:50.962 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.064 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.064 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.073 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.073 243708 INFO nova.compute.claims [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Claim successful on node compute-0.ctlplane.example.com
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.355 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:51 compute-0 ceph-mon[75071]: pgmap v1949: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:51 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:32:51 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2447753152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.899 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.908 243708 DEBUG nova.compute.provider_tree [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.928 243708 DEBUG nova.scheduler.client.report [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.953 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.888s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:51 compute-0 nova_compute[243704]: 2025-12-13 04:32:51.954 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.007 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.008 243708 DEBUG nova.network.neutron [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.025 243708 INFO nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.046 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.100 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.124 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.125 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.126 243708 INFO nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Creating image(s)
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.147 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.172 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.196 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.200 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.232 243708 DEBUG nova.policy [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8c473655a64a434b9a574fee057cb112', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '6229053e06554ebebd8cbafe5a6dbb81', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.286 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.288 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.289 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.290 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "5aac2f5376428a51fa3db16e8cbb1600bbf628a0" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.320 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.326 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.673 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5aac2f5376428a51fa3db16e8cbb1600bbf628a0 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.86981128241282e-06 of space, bias 1.0, pg target 0.001160943384723846 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029101375646405506 of space, bias 1.0, pg target 0.8730412693921652 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4658011110870526e-06 of space, bias 1.0, pg target 0.0007397403333261157 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006666999159128148 of space, bias 1.0, pg target 0.20000997477384444 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3098816406822317e-06 of space, bias 4.0, pg target 0.0015718579688186781 quantized to 16 (current 16)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:32:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.748 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] resizing rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.825 243708 DEBUG nova.objects.instance [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'migration_context' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.850 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.850 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Ensure instance console log exists: /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.851 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.851 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:52 compute-0 nova_compute[243704]: 2025-12-13 04:32:52.852 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2447753152' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:32:52 compute-0 ceph-mon[75071]: pgmap v1950: 305 pgs: 305 active+clean; 271 MiB data, 637 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:32:53 compute-0 nova_compute[243704]: 2025-12-13 04:32:53.155 243708 DEBUG nova.network.neutron [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Successfully created port: ef748f46-f14c-4151-878f-146280febd4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Dec 13 04:32:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.075 243708 DEBUG nova.network.neutron [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Successfully updated port: ef748f46-f14c-4151-878f-146280febd4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.095 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.096 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquired lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.096 243708 DEBUG nova.network.neutron [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Dec 13 04:32:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 307 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.222 243708 DEBUG nova.compute.manager [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-changed-ef748f46-f14c-4151-878f-146280febd4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.222 243708 DEBUG nova.compute.manager [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Refreshing instance network info cache due to event network-changed-ef748f46-f14c-4151-878f-146280febd4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.223 243708 DEBUG oslo_concurrency.lockutils [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.336 243708 DEBUG nova.network.neutron [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Dec 13 04:32:54 compute-0 nova_compute[243704]: 2025-12-13 04:32:54.411 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:55 compute-0 ceph-mon[75071]: pgmap v1951: 305 pgs: 305 active+clean; 307 MiB data, 647 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 1.3 MiB/s wr, 15 op/s
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.341 243708 DEBUG nova.network.neutron [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating instance_info_cache with network_info: [{"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.359 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Releasing lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.359 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Instance network_info: |[{"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.360 243708 DEBUG oslo_concurrency.lockutils [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.360 243708 DEBUG nova.network.neutron [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Refreshing network info cache for port ef748f46-f14c-4151-878f-146280febd4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.363 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Start _get_guest_xml network_info=[{"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encryption_secret_uuid': None, 'size': 0, 'boot_index': 0, 'device_name': '/dev/vda', 'guest_format': None, 'disk_bus': 'virtio', 'device_type': 'disk', 'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'image_id': '36cf6469-9e96-4186-bf30-37c785f25db6'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.368 243708 WARNING nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.376 243708 DEBUG nova.virt.libvirt.host [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.377 243708 DEBUG nova.virt.libvirt.host [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.384 243708 DEBUG nova.virt.libvirt.host [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.384 243708 DEBUG nova.virt.libvirt.host [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.385 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.385 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-13T04:07:13Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='50a38eaa-e311-48c7-b9ca-929042832f6b',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-12-13T04:07:15Z,direct_url=<?>,disk_format='qcow2',id=36cf6469-9e96-4186-bf30-37c785f25db6,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='c0249c5198a64ed7aadbed8add7c4bde',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-12-13T04:07:18Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.386 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.387 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.387 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.387 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.387 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.388 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.388 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.388 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.389 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.389 243708 DEBUG nova.virt.hardware [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.393 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:32:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774796050' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:32:55 compute-0 podman[283204]: 2025-12-13 04:32:55.979269119 +0000 UTC m=+0.110981348 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:32:55 compute-0 nova_compute[243704]: 2025-12-13 04:32:55.987 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.594s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.012 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.016 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:32:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1774796050' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.479 243708 DEBUG nova.network.neutron [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updated VIF entry in instance network info cache for port ef748f46-f14c-4151-878f-146280febd4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.480 243708 DEBUG nova.network.neutron [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating instance_info_cache with network_info: [{"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.499 243708 DEBUG oslo_concurrency.lockutils [req-a508d2ce-ad2f-4aef-a771-c9a248f3ec9c req-233c6129-7337-4920-b31a-e32b031b0e52 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:32:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:32:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/399862378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.589 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.573s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.591 243708 DEBUG nova.virt.libvirt.vif [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:32:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-758538397',display_name='tempest-SnapshotDataIntegrityTests-server-758538397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-758538397',id=30,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKvmiK6kmoAIL/+Yrs4aEMDG71viPy2FBqEZs8wU5VTGSSRBZ+Kvlm3x7ap9w+ejjteItxk+BAjtf+s3CecR0+wvBssolKT/KIgL22+FhDRrK4GgwbAWXAFIzWQFTqKdkw==',key_name='tempest-keypair-1348722630',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6229053e06554ebebd8cbafe5a6dbb81',ramdisk_id='',reservation_id='r-9e3eng4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1217316932',owner_user_name='tempest-SnapshotDataIntegrityTests-1217316932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:32:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8c473655a64a434b9a574fee057cb112',uuid=0dd460c9-84b7-4ae0-a559-418f54258fe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.591 243708 DEBUG nova.network.os_vif_util [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Converting VIF {"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.592 243708 DEBUG nova.network.os_vif_util [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:10:ef,bridge_name='br-int',has_traffic_filtering=True,id=ef748f46-f14c-4151-878f-146280febd4e,network=Network(35b97038-818f-4818-aa78-03e50d5de529),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef748f46-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.593 243708 DEBUG nova.objects.instance [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.616 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] End _get_guest_xml xml=<domain type="kvm">
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <uuid>0dd460c9-84b7-4ae0-a559-418f54258fe1</uuid>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <name>instance-0000001e</name>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <memory>131072</memory>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <vcpu>1</vcpu>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <metadata>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <nova:name>tempest-SnapshotDataIntegrityTests-server-758538397</nova:name>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <nova:creationTime>2025-12-13 04:32:55</nova:creationTime>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <nova:flavor name="m1.nano">
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:memory>128</nova:memory>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:disk>1</nova:disk>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:swap>0</nova:swap>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:ephemeral>0</nova:ephemeral>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:vcpus>1</nova:vcpus>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       </nova:flavor>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <nova:owner>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:user uuid="8c473655a64a434b9a574fee057cb112">tempest-SnapshotDataIntegrityTests-1217316932-project-member</nova:user>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:project uuid="6229053e06554ebebd8cbafe5a6dbb81">tempest-SnapshotDataIntegrityTests-1217316932</nova:project>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       </nova:owner>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <nova:root type="image" uuid="36cf6469-9e96-4186-bf30-37c785f25db6"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <nova:ports>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <nova:port uuid="ef748f46-f14c-4151-878f-146280febd4e">
Dec 13 04:32:56 compute-0 nova_compute[243704]:           <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:         </nova:port>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       </nova:ports>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </nova:instance>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   </metadata>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <sysinfo type="smbios">
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <system>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <entry name="manufacturer">RDO</entry>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <entry name="product">OpenStack Compute</entry>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <entry name="serial">0dd460c9-84b7-4ae0-a559-418f54258fe1</entry>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <entry name="uuid">0dd460c9-84b7-4ae0-a559-418f54258fe1</entry>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <entry name="family">Virtual Machine</entry>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </system>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   </sysinfo>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <os>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <type arch="x86_64" machine="q35">hvm</type>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <boot dev="hd"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <smbios mode="sysinfo"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   </os>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <features>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <acpi/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <apic/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <vmcoreinfo/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   </features>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <clock offset="utc">
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <timer name="pit" tickpolicy="delay"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <timer name="rtc" tickpolicy="catchup"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <timer name="hpet" present="no"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   </clock>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <cpu mode="host-model" match="exact">
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <topology sockets="1" cores="1" threads="1"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   </cpu>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   <devices>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <disk type="network" device="disk">
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/0dd460c9-84b7-4ae0-a559-418f54258fe1_disk">
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       </source>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <target dev="vda" bus="virtio"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <disk type="network" device="cdrom">
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <driver type="raw" cache="none"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <source protocol="rbd" name="vms/0dd460c9-84b7-4ae0-a559-418f54258fe1_disk.config">
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <host name="192.168.122.100" port="6789"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       </source>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <auth username="openstack">
Dec 13 04:32:56 compute-0 nova_compute[243704]:         <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       </auth>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <target dev="sda" bus="sata"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </disk>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <interface type="ethernet">
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <mac address="fa:16:3e:83:10:ef"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <driver name="vhost" rx_queue_size="512"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <mtu size="1442"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <target dev="tapef748f46-f1"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </interface>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <serial type="pty">
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <log file="/var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/console.log" append="off"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </serial>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <video>
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <model type="virtio"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </video>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <input type="tablet" bus="usb"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <rng model="virtio">
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <backend model="random">/dev/urandom</backend>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </rng>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="pci" model="pcie-root-port"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <controller type="usb" index="0"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     <memballoon model="virtio">
Dec 13 04:32:56 compute-0 nova_compute[243704]:       <stats period="10"/>
Dec 13 04:32:56 compute-0 nova_compute[243704]:     </memballoon>
Dec 13 04:32:56 compute-0 nova_compute[243704]:   </devices>
Dec 13 04:32:56 compute-0 nova_compute[243704]: </domain>
Dec 13 04:32:56 compute-0 nova_compute[243704]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.617 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Preparing to wait for external event network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.618 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.618 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.618 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.619 243708 DEBUG nova.virt.libvirt.vif [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-13T04:32:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-758538397',display_name='tempest-SnapshotDataIntegrityTests-server-758538397',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-758538397',id=30,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKvmiK6kmoAIL/+Yrs4aEMDG71viPy2FBqEZs8wU5VTGSSRBZ+Kvlm3x7ap9w+ejjteItxk+BAjtf+s3CecR0+wvBssolKT/KIgL22+FhDRrK4GgwbAWXAFIzWQFTqKdkw==',key_name='tempest-keypair-1348722630',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='6229053e06554ebebd8cbafe5a6dbb81',ramdisk_id='',reservation_id='r-9e3eng4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SnapshotDataIntegrityTests-1217316932',owner_user_name='tempest-SnapshotDataIntegrityTests-1217316932-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-13T04:32:52Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8c473655a64a434b9a574fee057cb112',uuid=0dd460c9-84b7-4ae0-a559-418f54258fe1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.619 243708 DEBUG nova.network.os_vif_util [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Converting VIF {"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.620 243708 DEBUG nova.network.os_vif_util [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:10:ef,bridge_name='br-int',has_traffic_filtering=True,id=ef748f46-f14c-4151-878f-146280febd4e,network=Network(35b97038-818f-4818-aa78-03e50d5de529),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef748f46-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.621 243708 DEBUG os_vif [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:10:ef,bridge_name='br-int',has_traffic_filtering=True,id=ef748f46-f14c-4151-878f-146280febd4e,network=Network(35b97038-818f-4818-aa78-03e50d5de529),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef748f46-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.621 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.622 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.622 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.626 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.626 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapef748f46-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.627 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapef748f46-f1, col_values=(('external_ids', {'iface-id': 'ef748f46-f14c-4151-878f-146280febd4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:10:ef', 'vm-uuid': '0dd460c9-84b7-4ae0-a559-418f54258fe1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:32:56 compute-0 NetworkManager[48899]: <info>  [1765600376.6295] manager: (tapef748f46-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/148)
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.632 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.670 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.672 243708 INFO os_vif [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:10:ef,bridge_name='br-int',has_traffic_filtering=True,id=ef748f46-f14c-4151-878f-146280febd4e,network=Network(35b97038-818f-4818-aa78-03e50d5de529),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef748f46-f1')
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.731 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.731 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.731 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No VIF found with MAC fa:16:3e:83:10:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.731 243708 INFO nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Using config drive
Dec 13 04:32:56 compute-0 nova_compute[243704]: 2025-12-13 04:32:56.755 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.152 243708 INFO nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Creating config drive at /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/disk.config
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.160 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dbkvyv5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:57 compute-0 ceph-mon[75071]: pgmap v1952: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:32:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/399862378' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.311 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1dbkvyv5" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.354 243708 DEBUG nova.storage.rbd_utils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] rbd image 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.359 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/disk.config 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.578 243708 DEBUG oslo_concurrency.processutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/disk.config 0dd460c9-84b7-4ae0-a559-418f54258fe1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.218s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.579 243708 INFO nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Deleting local config drive /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1/disk.config because it was imported into RBD.
Dec 13 04:32:57 compute-0 kernel: tapef748f46-f1: entered promiscuous mode
Dec 13 04:32:57 compute-0 NetworkManager[48899]: <info>  [1765600377.6547] manager: (tapef748f46-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/149)
Dec 13 04:32:57 compute-0 ovn_controller[145204]: 2025-12-13T04:32:57Z|00276|binding|INFO|Claiming lport ef748f46-f14c-4151-878f-146280febd4e for this chassis.
Dec 13 04:32:57 compute-0 ovn_controller[145204]: 2025-12-13T04:32:57Z|00277|binding|INFO|ef748f46-f14c-4151-878f-146280febd4e: Claiming fa:16:3e:83:10:ef 10.100.0.14
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.656 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.662 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.666 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:57 compute-0 systemd-machined[206767]: New machine qemu-30-instance-0000001e.
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.689 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:10:ef 10.100.0.14'], port_security=['fa:16:3e:83:10:ef 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0dd460c9-84b7-4ae0-a559-418f54258fe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35b97038-818f-4818-aa78-03e50d5de529', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6229053e06554ebebd8cbafe5a6dbb81', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3a7583a8-ea86-4146-9e71-f4520807f9fb 438da6d3-6e05-4a13-8e69-07ef61fc8b32', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=819009fe-c70b-4887-8d44-8031dbdcb5fc, chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=ef748f46-f14c-4151-878f-146280febd4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.690 154842 INFO neutron.agent.ovn.metadata.agent [-] Port ef748f46-f14c-4151-878f-146280febd4e in datapath 35b97038-818f-4818-aa78-03e50d5de529 bound to our chassis
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.692 154842 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 35b97038-818f-4818-aa78-03e50d5de529
Dec 13 04:32:57 compute-0 systemd[1]: Started Virtual Machine qemu-30-instance-0000001e.
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.710 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[872c1b9b-0d79-4083-9f6d-38ebc7635315]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.711 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap35b97038-81 in ovnmeta-35b97038-818f-4818-aa78-03e50d5de529 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.713 249645 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap35b97038-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.713 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9510f99b-3f0c-4c7b-9e18-16f357ff8d6a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.714 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[f0dc4ff7-326c-4ae4-acd4-edf06cc7a1f4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.726 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[d61b35df-e086-4d57-bfbc-22c9f3f977f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_controller[145204]: 2025-12-13T04:32:57Z|00278|binding|INFO|Setting lport ef748f46-f14c-4151-878f-146280febd4e ovn-installed in OVS
Dec 13 04:32:57 compute-0 ovn_controller[145204]: 2025-12-13T04:32:57Z|00279|binding|INFO|Setting lport ef748f46-f14c-4151-878f-146280febd4e up in Southbound
Dec 13 04:32:57 compute-0 nova_compute[243704]: 2025-12-13 04:32:57.762 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:57 compute-0 systemd-udevd[283354]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.774 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[380a89f2-89b9-416e-8eb0-df37499ba2f1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 NetworkManager[48899]: <info>  [1765600377.7783] device (tapef748f46-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec 13 04:32:57 compute-0 NetworkManager[48899]: <info>  [1765600377.7790] device (tapef748f46-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.798 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b87583-1b29-45ac-b10c-07259e5088d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 NetworkManager[48899]: <info>  [1765600377.8048] manager: (tap35b97038-80): new Veth device (/org/freedesktop/NetworkManager/Devices/150)
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.804 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[b07fc2f6-3010-44b5-9230-23c420329e9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.838 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[dce7d26f-5281-4c7c-8681-503c88ed64c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.841 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[82df5740-26df-470c-85c6-d2bc1a0bed68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 NetworkManager[48899]: <info>  [1765600377.8672] device (tap35b97038-80): carrier: link connected
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.874 249665 DEBUG oslo.privsep.daemon [-] privsep: reply[4a4edcf7-91de-415c-af8b-bd4ed64f2879]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.888 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[203cff3c-3e86-4271-8e2b-7654791cbd7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35b97038-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:93:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511654, 'reachable_time': 24014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283380, 'error': None, 'target': 'ovnmeta-35b97038-818f-4818-aa78-03e50d5de529', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.902 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf3ce49-4030-4da0-aea6-89c534f9b215]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed2:9391'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 511654, 'tstamp': 511654}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283381, 'error': None, 'target': 'ovnmeta-35b97038-818f-4818-aa78-03e50d5de529', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.917 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9e59792d-ec35-4f62-a247-ea6b9da91ee3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap35b97038-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d2:93:91'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 94], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511654, 'reachable_time': 24014, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 283382, 'error': None, 'target': 'ovnmeta-35b97038-818f-4818-aa78-03e50d5de529', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.948 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[46ab9c60-a041-4598-aacf-6a74794c6484]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.996 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[100488d2-bb32-4caf-8bc6-2c1a1351c76d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.997 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35b97038-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.998 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Dec 13 04:32:57 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:57.998 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap35b97038-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:32:58 compute-0 kernel: tap35b97038-80: entered promiscuous mode
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.000 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:58 compute-0 NetworkManager[48899]: <info>  [1765600378.0016] manager: (tap35b97038-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:58.004 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap35b97038-80, col_values=(('external_ids', {'iface-id': '4d5aed46-6fa0-476f-b66f-90e19b93c80f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.005 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:58 compute-0 ovn_controller[145204]: 2025-12-13T04:32:58Z|00280|binding|INFO|Releasing lport 4d5aed46-6fa0-476f-b66f-90e19b93c80f from this chassis (sb_readonly=0)
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.006 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:58.007 154842 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/35b97038-818f-4818-aa78-03e50d5de529.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/35b97038-818f-4818-aa78-03e50d5de529.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:58.008 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d29147d7-6491-4a16-af30-82d2e171a2d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:58.008 154842 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: global
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     log         /dev/log local0 debug
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     log-tag     haproxy-metadata-proxy-35b97038-818f-4818-aa78-03e50d5de529
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     user        root
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     group       root
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     maxconn     1024
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     pidfile     /var/lib/neutron/external/pids/35b97038-818f-4818-aa78-03e50d5de529.pid.haproxy
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     daemon
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: defaults
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     log global
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     mode http
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     option httplog
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     option dontlognull
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     option http-server-close
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     option forwardfor
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     retries                 3
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     timeout http-request    30s
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     timeout connect         30s
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     timeout client          32s
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     timeout server          32s
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     timeout http-keep-alive 30s
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: listen listener
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     bind 169.254.169.254:80
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     server metadata /var/lib/neutron/metadata_proxy
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:     http-request add-header X-OVN-Network-ID 35b97038-818f-4818-aa78-03e50d5de529
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:58.009 154842 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-35b97038-818f-4818-aa78-03e50d5de529', 'env', 'PROCESS_TAG=haproxy-35b97038-818f-4818-aa78-03e50d5de529', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/35b97038-818f-4818-aa78-03e50d5de529.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.018 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.098 243708 DEBUG nova.compute.manager [req-54675844-9069-491a-947f-012deeb1f2b1 req-4a55392b-483d-4852-88c1-73d5b90439b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.098 243708 DEBUG oslo_concurrency.lockutils [req-54675844-9069-491a-947f-012deeb1f2b1 req-4a55392b-483d-4852-88c1-73d5b90439b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.099 243708 DEBUG oslo_concurrency.lockutils [req-54675844-9069-491a-947f-012deeb1f2b1 req-4a55392b-483d-4852-88c1-73d5b90439b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.099 243708 DEBUG oslo_concurrency.lockutils [req-54675844-9069-491a-947f-012deeb1f2b1 req-4a55392b-483d-4852-88c1-73d5b90439b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.099 243708 DEBUG nova.compute.manager [req-54675844-9069-491a-947f-012deeb1f2b1 req-4a55392b-483d-4852-88c1-73d5b90439b2 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Processing event network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Dec 13 04:32:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:32:58 compute-0 podman[283415]: 2025-12-13 04:32:58.428964452 +0000 UTC m=+0.100232466 container create 1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec 13 04:32:58 compute-0 podman[283415]: 2025-12-13 04:32:58.351695151 +0000 UTC m=+0.022963205 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:58.481 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.483 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:32:58 compute-0 systemd[1]: Started libpod-conmon-1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d.scope.
Dec 13 04:32:58 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:32:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/616e003ab7ed0d3f691481dfa53bdd84feccd2a0241f0d6385054908e6a31a3f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec 13 04:32:58 compute-0 podman[283415]: 2025-12-13 04:32:58.578703033 +0000 UTC m=+0.249971097 container init 1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:32:58 compute-0 podman[283415]: 2025-12-13 04:32:58.585474848 +0000 UTC m=+0.256742882 container start 1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec 13 04:32:58 compute-0 neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529[283430]: [NOTICE]   (283434) : New worker (283452) forked
Dec 13 04:32:58 compute-0 neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529[283430]: [NOTICE]   (283434) : Loading success.
Dec 13 04:32:58 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:32:58.694 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:32:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.797 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600378.79648, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.797 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] VM Started (Lifecycle Event)
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.799 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.803 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.809 243708 INFO nova.virt.libvirt.driver [-] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Instance spawned successfully.
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.810 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.829 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.841 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.848 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.849 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.850 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.851 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.852 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.853 243708 DEBUG nova.virt.libvirt.driver [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.896 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.897 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600378.7967987, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.898 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] VM Paused (Lifecycle Event)
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.925 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.932 243708 DEBUG nova.virt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Emitting event <LifecycleEvent: 1765600378.8030958, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.933 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] VM Resumed (Lifecycle Event)
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.945 243708 INFO nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Took 6.82 seconds to spawn the instance on the hypervisor.
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.946 243708 DEBUG nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.959 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:32:58 compute-0 nova_compute[243704]: 2025-12-13 04:32:58.964 243708 DEBUG nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Dec 13 04:32:59 compute-0 nova_compute[243704]: 2025-12-13 04:32:59.009 243708 INFO nova.compute.manager [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] During sync_power_state the instance has a pending task (spawning). Skip.
Dec 13 04:32:59 compute-0 nova_compute[243704]: 2025-12-13 04:32:59.065 243708 INFO nova.compute.manager [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Took 8.03 seconds to build instance.
Dec 13 04:32:59 compute-0 nova_compute[243704]: 2025-12-13 04:32:59.084 243708 DEBUG oslo_concurrency.lockutils [None req-f0396c0f-d51f-495d-a65e-fc9ff66f6c39 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:32:59 compute-0 ceph-mon[75071]: pgmap v1953: 305 pgs: 305 active+clean; 317 MiB data, 653 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Dec 13 04:32:59 compute-0 nova_compute[243704]: 2025-12-13 04:32:59.413 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 13 04:33:00 compute-0 nova_compute[243704]: 2025-12-13 04:33:00.235 243708 DEBUG nova.compute.manager [req-c9469647-b67c-44b5-999a-03fa13efc04c req-723bd88f-6be4-4cf5-90b4-66e9fbd4710a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:33:00 compute-0 nova_compute[243704]: 2025-12-13 04:33:00.237 243708 DEBUG oslo_concurrency.lockutils [req-c9469647-b67c-44b5-999a-03fa13efc04c req-723bd88f-6be4-4cf5-90b4-66e9fbd4710a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:00 compute-0 nova_compute[243704]: 2025-12-13 04:33:00.238 243708 DEBUG oslo_concurrency.lockutils [req-c9469647-b67c-44b5-999a-03fa13efc04c req-723bd88f-6be4-4cf5-90b4-66e9fbd4710a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:00 compute-0 nova_compute[243704]: 2025-12-13 04:33:00.238 243708 DEBUG oslo_concurrency.lockutils [req-c9469647-b67c-44b5-999a-03fa13efc04c req-723bd88f-6be4-4cf5-90b4-66e9fbd4710a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:00 compute-0 nova_compute[243704]: 2025-12-13 04:33:00.239 243708 DEBUG nova.compute.manager [req-c9469647-b67c-44b5-999a-03fa13efc04c req-723bd88f-6be4-4cf5-90b4-66e9fbd4710a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] No waiting events found dispatching network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:33:00 compute-0 nova_compute[243704]: 2025-12-13 04:33:00.239 243708 WARNING nova.compute.manager [req-c9469647-b67c-44b5-999a-03fa13efc04c req-723bd88f-6be4-4cf5-90b4-66e9fbd4710a 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received unexpected event network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e for instance with vm_state active and task_state None.
Dec 13 04:33:01 compute-0 nova_compute[243704]: 2025-12-13 04:33:01.120 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:01 compute-0 NetworkManager[48899]: <info>  [1765600381.1220] manager: (patch-br-int-to-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/152)
Dec 13 04:33:01 compute-0 NetworkManager[48899]: <info>  [1765600381.1237] manager: (patch-provnet-c3f46e3e-8f36-48a8-978d-c63fb23c1f23-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/153)
Dec 13 04:33:01 compute-0 nova_compute[243704]: 2025-12-13 04:33:01.198 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:01 compute-0 ovn_controller[145204]: 2025-12-13T04:33:01Z|00281|binding|INFO|Releasing lport 4d5aed46-6fa0-476f-b66f-90e19b93c80f from this chassis (sb_readonly=0)
Dec 13 04:33:01 compute-0 nova_compute[243704]: 2025-12-13 04:33:01.206 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:01 compute-0 ceph-mon[75071]: pgmap v1954: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 13 04:33:01 compute-0 nova_compute[243704]: 2025-12-13 04:33:01.629 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 13 04:33:02 compute-0 nova_compute[243704]: 2025-12-13 04:33:02.320 243708 DEBUG nova.compute.manager [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-changed-ef748f46-f14c-4151-878f-146280febd4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:33:02 compute-0 nova_compute[243704]: 2025-12-13 04:33:02.321 243708 DEBUG nova.compute.manager [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Refreshing instance network info cache due to event network-changed-ef748f46-f14c-4151-878f-146280febd4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Dec 13 04:33:02 compute-0 nova_compute[243704]: 2025-12-13 04:33:02.321 243708 DEBUG oslo_concurrency.lockutils [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:33:02 compute-0 nova_compute[243704]: 2025-12-13 04:33:02.321 243708 DEBUG oslo_concurrency.lockutils [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquired lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:33:02 compute-0 nova_compute[243704]: 2025-12-13 04:33:02.322 243708 DEBUG nova.network.neutron [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Refreshing network info cache for port ef748f46-f14c-4151-878f-146280febd4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Dec 13 04:33:03 compute-0 ceph-mon[75071]: pgmap v1955: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Dec 13 04:33:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1956: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 13 04:33:04 compute-0 nova_compute[243704]: 2025-12-13 04:33:04.415 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:04 compute-0 nova_compute[243704]: 2025-12-13 04:33:04.525 243708 DEBUG nova.network.neutron [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updated VIF entry in instance network info cache for port ef748f46-f14c-4151-878f-146280febd4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec 13 04:33:04 compute-0 nova_compute[243704]: 2025-12-13 04:33:04.526 243708 DEBUG nova.network.neutron [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating instance_info_cache with network_info: [{"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:33:04 compute-0 nova_compute[243704]: 2025-12-13 04:33:04.552 243708 DEBUG oslo_concurrency.lockutils [req-be50bcdd-0cdb-445c-8fbc-d8ccb0f1a087 req-65d48d91-2888-4aa4-90fa-58cab361050d 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Releasing lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:33:05 compute-0 ceph-mon[75071]: pgmap v1956: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 83 op/s
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.435556) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600385435732, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 2050, "num_deletes": 251, "total_data_size": 3457131, "memory_usage": 3506000, "flush_reason": "Manual Compaction"}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600385464278, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 3383404, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36955, "largest_seqno": 39004, "table_properties": {"data_size": 3374105, "index_size": 5857, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18819, "raw_average_key_size": 20, "raw_value_size": 3355665, "raw_average_value_size": 3588, "num_data_blocks": 259, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765600161, "oldest_key_time": 1765600161, "file_creation_time": 1765600385, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 28987 microseconds, and 15301 cpu microseconds.
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.464536) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 3383404 bytes OK
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.464637) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.466893) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.466918) EVENT_LOG_v1 {"time_micros": 1765600385466910, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.466943) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 3448554, prev total WAL file size 3448554, number of live WAL files 2.
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.469198) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(3304KB)], [77(10MB)]
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600385469266, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 14424111, "oldest_snapshot_seqno": -1}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 7201 keys, 12580153 bytes, temperature: kUnknown
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600385558005, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12580153, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12524974, "index_size": 36059, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18053, "raw_key_size": 181228, "raw_average_key_size": 25, "raw_value_size": 12388886, "raw_average_value_size": 1720, "num_data_blocks": 1437, "num_entries": 7201, "num_filter_entries": 7201, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765600385, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.558332) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12580153 bytes
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.560400) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 162.3 rd, 141.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 10.5 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 7715, records dropped: 514 output_compression: NoCompression
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.560422) EVENT_LOG_v1 {"time_micros": 1765600385560411, "job": 44, "event": "compaction_finished", "compaction_time_micros": 88884, "compaction_time_cpu_micros": 40768, "output_level": 6, "num_output_files": 1, "total_output_size": 12580153, "num_input_records": 7715, "num_output_records": 7201, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600385561222, "job": 44, "event": "table_file_deletion", "file_number": 79}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600385563382, "job": 44, "event": "table_file_deletion", "file_number": 77}
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.468985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.563423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.563427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.563429) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.563431) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:05 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:33:05.563432) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:33:05 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:33:05.698 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:33:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 530 KiB/s wr, 85 op/s
Dec 13 04:33:06 compute-0 nova_compute[243704]: 2025-12-13 04:33:06.635 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:07 compute-0 ceph-mon[75071]: pgmap v1957: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 530 KiB/s wr, 85 op/s
Dec 13 04:33:07 compute-0 podman[283488]: 2025-12-13 04:33:07.922174535 +0000 UTC m=+0.056604970 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true)
Dec 13 04:33:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 13 04:33:08 compute-0 nova_compute[243704]: 2025-12-13 04:33:08.260 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:08 compute-0 nova_compute[243704]: 2025-12-13 04:33:08.280 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Triggering sync for uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Dec 13 04:33:08 compute-0 nova_compute[243704]: 2025-12-13 04:33:08.280 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:08 compute-0 nova_compute[243704]: 2025-12-13 04:33:08.280 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:08 compute-0 nova_compute[243704]: 2025-12-13 04:33:08.296 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.015s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:09 compute-0 nova_compute[243704]: 2025-12-13 04:33:09.418 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:09 compute-0 ceph-mon[75071]: pgmap v1958: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 13 04:33:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 13 04:33:10 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 13 04:33:10 compute-0 podman[283509]: 2025-12-13 04:33:10.951578319 +0000 UTC m=+0.091718225 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd)
Dec 13 04:33:10 compute-0 ovn_controller[145204]: 2025-12-13T04:33:10Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:10:ef 10.100.0.14
Dec 13 04:33:10 compute-0 ovn_controller[145204]: 2025-12-13T04:33:10Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:10:ef 10.100.0.14
Dec 13 04:33:11 compute-0 ceph-mon[75071]: pgmap v1959: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 73 op/s
Dec 13 04:33:11 compute-0 nova_compute[243704]: 2025-12-13 04:33:11.658 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Dec 13 04:33:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:13 compute-0 ceph-mon[75071]: pgmap v1960: 305 pgs: 305 active+clean; 317 MiB data, 654 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Dec 13 04:33:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 339 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 120 op/s
Dec 13 04:33:14 compute-0 nova_compute[243704]: 2025-12-13 04:33:14.419 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:33:15 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Cumulative writes: 8522 writes, 38K keys, 8522 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s
                                           Cumulative WAL: 8522 writes, 8522 syncs, 1.00 writes per sync, written: 0.05 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1633 writes, 7586 keys, 1633 commit groups, 1.0 writes per commit group, ingest: 10.35 MB, 0.02 MB/s
                                           Interval WAL: 1633 writes, 1633 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     56.7      0.85              0.16        22    0.039       0      0       0.0       0.0
                                             L6      1/0   12.00 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   3.9     81.6     68.4      2.75              0.66        21    0.131    118K    12K       0.0       0.0
                                            Sum      1/0   12.00 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   4.9     62.3     65.6      3.60              0.82        43    0.084    118K    12K       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   6.0     44.3     45.9      1.74              0.28        12    0.145     44K   3658       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0     81.6     68.4      2.75              0.66        21    0.131    118K    12K       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     60.6      0.79              0.16        21    0.038       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.0      0.06              0.00         1    0.056       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 3000.2 total, 600.0 interval
                                           Flush(GB): cumulative 0.047, interval 0.013
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.23 GB write, 0.08 MB/s write, 0.22 GB read, 0.07 MB/s read, 3.6 seconds
                                           Interval compaction: 0.08 GB write, 0.13 MB/s write, 0.08 GB read, 0.13 MB/s read, 1.7 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x556f7ce578d0#2 capacity: 304.00 MB usage: 25.88 MB table_size: 0 occupancy: 18446744073709551615 collections: 6 last_copies: 0 last_secs: 0.000231 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1691,24.89 MB,8.18591%) FilterBlock(44,345.17 KB,0.110882%) IndexBlock(44,673.77 KB,0.216439%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Dec 13 04:33:15 compute-0 ceph-mon[75071]: pgmap v1961: 305 pgs: 305 active+clean; 339 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 120 op/s
Dec 13 04:33:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1962: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 13 04:33:16 compute-0 nova_compute[243704]: 2025-12-13 04:33:16.697 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:17 compute-0 ceph-mon[75071]: pgmap v1962: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 840 KiB/s rd, 2.1 MiB/s wr, 79 op/s
Dec 13 04:33:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:33:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:19 compute-0 ceph-mon[75071]: pgmap v1963: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Dec 13 04:33:19 compute-0 nova_compute[243704]: 2025-12-13 04:33:19.422 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:33:20 compute-0 nova_compute[243704]: 2025-12-13 04:33:20.680 243708 DEBUG oslo_concurrency.lockutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:20 compute-0 nova_compute[243704]: 2025-12-13 04:33:20.681 243708 DEBUG oslo_concurrency.lockutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:20 compute-0 nova_compute[243704]: 2025-12-13 04:33:20.698 243708 DEBUG nova.objects.instance [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:20 compute-0 nova_compute[243704]: 2025-12-13 04:33:20.746 243708 DEBUG oslo_concurrency.lockutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.065s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.054 243708 DEBUG oslo_concurrency.lockutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.055 243708 DEBUG oslo_concurrency.lockutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.055 243708 INFO nova.compute.manager [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attaching volume db91057d-3fcf-42af-bb21-d6cd1d4744fd to /dev/vdb
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.197 243708 DEBUG os_brick.utils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.198 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.216 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.018s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.216 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7515c6-822e-4a20-90e1-4c5a9131eead]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.218 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.226 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.226 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[fed1dc54-8b2c-4068-9f23-8adba682a01b]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.228 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.238 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.238 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[5ce6f30b-d44b-4100-b9e9-ed26a8a1bd75]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.239 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[e2eca0c4-e575-4725-8209-08b6d4ca752d]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.240 243708 DEBUG oslo_concurrency.processutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.265 243708 DEBUG oslo_concurrency.processutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "nvme version" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.268 243708 DEBUG os_brick.initiator.connectors.lightos [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.268 243708 DEBUG os_brick.initiator.connectors.lightos [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.269 243708 DEBUG os_brick.initiator.connectors.lightos [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.269 243708 DEBUG os_brick.utils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] <== get_connector_properties: return (71ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.270 243708 DEBUG nova.virt.block_device [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating existing volume attachment record: c08f74e7-1460-4ae6-b32b-59fffaa6c60c _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:33:21 compute-0 ceph-mon[75071]: pgmap v1964: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:33:21 compute-0 nova_compute[243704]: 2025-12-13 04:33:21.700 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:22 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:33:22 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/818049644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.209 243708 DEBUG nova.objects.instance [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.245 243708 DEBUG nova.virt.libvirt.driver [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to attach volume db91057d-3fcf-42af-bb21-d6cd1d4744fd with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.248 243708 DEBUG nova.virt.libvirt.guest [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:33:22 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:22 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-db91057d-3fcf-42af-bb21-d6cd1d4744fd">
Dec 13 04:33:22 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:22 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:22 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:33:22 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:33:22 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:33:22 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:22 compute-0 nova_compute[243704]:   <serial>db91057d-3fcf-42af-bb21-d6cd1d4744fd</serial>
Dec 13 04:33:22 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:22 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:33:22 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/818049644' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.443 243708 DEBUG nova.virt.libvirt.driver [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.443 243708 DEBUG nova.virt.libvirt.driver [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.444 243708 DEBUG nova.virt.libvirt.driver [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.444 243708 DEBUG nova.virt.libvirt.driver [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No VIF found with MAC fa:16:3e:83:10:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:33:22 compute-0 nova_compute[243704]: 2025-12-13 04:33:22.661 243708 DEBUG oslo_concurrency.lockutils [None req-b64517ac-6124-4cd1-8461-ee225a267872 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.606s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:23 compute-0 ceph-mon[75071]: pgmap v1965: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Dec 13 04:33:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:33:24 compute-0 nova_compute[243704]: 2025-12-13 04:33:24.424 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:24 compute-0 nova_compute[243704]: 2025-12-13 04:33:24.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:24 compute-0 nova_compute[243704]: 2025-12-13 04:33:24.900 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:24 compute-0 nova_compute[243704]: 2025-12-13 04:33:24.901 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:24 compute-0 nova_compute[243704]: 2025-12-13 04:33:24.901 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:24 compute-0 nova_compute[243704]: 2025-12-13 04:33:24.901 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:33:24 compute-0 nova_compute[243704]: 2025-12-13 04:33:24.902 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:24 compute-0 ceph-mon[75071]: pgmap v1966: 305 pgs: 305 active+clean; 350 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 308 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Dec 13 04:33:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:33:25 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2335561619' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.428 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.497 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.498 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.498 243708 DEBUG nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] skipping disk for instance-0000001e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.699 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.700 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4061MB free_disk=59.942466551437974GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.700 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.701 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.757 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.758 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.758 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:33:25 compute-0 nova_compute[243704]: 2025-12-13 04:33:25.878 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e485 do_prune osdmap full prune enabled
Dec 13 04:33:25 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e486 e486: 3 total, 3 up, 3 in
Dec 13 04:33:25 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e486: 3 total, 3 up, 3 in
Dec 13 04:33:26 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2335561619' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:33:26 compute-0 ceph-mon[75071]: osdmap e486: 3 total, 3 up, 3 in
Dec 13 04:33:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 351 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:33:26 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:33:26 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139427382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:33:26 compute-0 nova_compute[243704]: 2025-12-13 04:33:26.462 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:26 compute-0 nova_compute[243704]: 2025-12-13 04:33:26.469 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:33:26 compute-0 nova_compute[243704]: 2025-12-13 04:33:26.484 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:33:26 compute-0 nova_compute[243704]: 2025-12-13 04:33:26.500 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:33:26 compute-0 nova_compute[243704]: 2025-12-13 04:33:26.501 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.800s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:26 compute-0 nova_compute[243704]: 2025-12-13 04:33:26.702 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:26 compute-0 podman[283602]: 2025-12-13 04:33:26.981534485 +0000 UTC m=+0.112363956 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2)
Dec 13 04:33:27 compute-0 ceph-mon[75071]: pgmap v1968: 305 pgs: 305 active+clean; 351 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 22 KiB/s wr, 11 op/s
Dec 13 04:33:27 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4139427382' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:33:27 compute-0 nova_compute[243704]: 2025-12-13 04:33:27.495 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e486 do_prune osdmap full prune enabled
Dec 13 04:33:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e487 e487: 3 total, 3 up, 3 in
Dec 13 04:33:28 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e487: 3 total, 3 up, 3 in
Dec 13 04:33:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 351 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 8.4 KiB/s wr, 12 op/s
Dec 13 04:33:28 compute-0 nova_compute[243704]: 2025-12-13 04:33:28.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:28 compute-0 nova_compute[243704]: 2025-12-13 04:33:28.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:33:28 compute-0 nova_compute[243704]: 2025-12-13 04:33:28.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:33:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e487 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:29 compute-0 ceph-mon[75071]: osdmap e487: 3 total, 3 up, 3 in
Dec 13 04:33:29 compute-0 ceph-mon[75071]: pgmap v1970: 305 pgs: 305 active+clean; 351 MiB data, 696 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 8.4 KiB/s wr, 12 op/s
Dec 13 04:33:29 compute-0 nova_compute[243704]: 2025-12-13 04:33:29.427 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 352 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 129 KiB/s wr, 52 op/s
Dec 13 04:33:30 compute-0 nova_compute[243704]: 2025-12-13 04:33:30.287 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec 13 04:33:30 compute-0 nova_compute[243704]: 2025-12-13 04:33:30.287 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquired lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec 13 04:33:30 compute-0 nova_compute[243704]: 2025-12-13 04:33:30.288 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec 13 04:33:30 compute-0 nova_compute[243704]: 2025-12-13 04:33:30.288 243708 DEBUG nova.objects.instance [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:31 compute-0 ovn_controller[145204]: 2025-12-13T04:33:31Z|00282|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec 13 04:33:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e487 do_prune osdmap full prune enabled
Dec 13 04:33:31 compute-0 ceph-mon[75071]: pgmap v1971: 305 pgs: 305 active+clean; 352 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 129 KiB/s wr, 52 op/s
Dec 13 04:33:31 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 e488: 3 total, 3 up, 3 in
Dec 13 04:33:31 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e488: 3 total, 3 up, 3 in
Dec 13 04:33:31 compute-0 nova_compute[243704]: 2025-12-13 04:33:31.592 243708 DEBUG nova.network.neutron [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating instance_info_cache with network_info: [{"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:33:31 compute-0 nova_compute[243704]: 2025-12-13 04:33:31.606 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Releasing lock "refresh_cache-0dd460c9-84b7-4ae0-a559-418f54258fe1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec 13 04:33:31 compute-0 nova_compute[243704]: 2025-12-13 04:33:31.606 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec 13 04:33:31 compute-0 nova_compute[243704]: 2025-12-13 04:33:31.607 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:31 compute-0 nova_compute[243704]: 2025-12-13 04:33:31.607 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:31 compute-0 nova_compute[243704]: 2025-12-13 04:33:31.608 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:31 compute-0 sudo[283628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:33:31 compute-0 sudo[283628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:31 compute-0 sudo[283628]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:31 compute-0 nova_compute[243704]: 2025-12-13 04:33:31.707 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:31 compute-0 sudo[283653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:33:31 compute-0 sudo[283653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.102 243708 DEBUG oslo_concurrency.lockutils [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.104 243708 DEBUG oslo_concurrency.lockutils [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.117 243708 INFO nova.compute.manager [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Detaching volume db91057d-3fcf-42af-bb21-d6cd1d4744fd
Dec 13 04:33:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 352 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 156 KiB/s wr, 52 op/s
Dec 13 04:33:32 compute-0 ceph-mon[75071]: osdmap e488: 3 total, 3 up, 3 in
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.307 243708 INFO nova.virt.block_device [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to driver detach volume db91057d-3fcf-42af-bb21-d6cd1d4744fd from mountpoint /dev/vdb
Dec 13 04:33:32 compute-0 sudo[283653]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.320 243708 DEBUG nova.virt.libvirt.driver [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Attempting to detach device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.321 243708 DEBUG nova.virt.libvirt.guest [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-db91057d-3fcf-42af-bb21-d6cd1d4744fd">
Dec 13 04:33:32 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <serial>db91057d-3fcf-42af-bb21-d6cd1d4744fd</serial>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:32 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.332 243708 INFO nova.virt.libvirt.driver [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config.
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.333 243708 DEBUG nova.virt.libvirt.driver [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.334 243708 DEBUG nova.virt.libvirt.guest [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-db91057d-3fcf-42af-bb21-d6cd1d4744fd">
Dec 13 04:33:32 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <serial>db91057d-3fcf-42af-bb21-d6cd1d4744fd</serial>
Dec 13 04:33:32 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:32 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:32 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:33:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:33:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:33:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:33:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:33:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:33:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:33:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:33:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:33:32 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:33:32 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:33:32 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:33:32 compute-0 sudo[283708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:33:32 compute-0 sudo[283708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:32 compute-0 sudo[283708]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.458 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765600412.4574802, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.459 243708 DEBUG nova.virt.libvirt.driver [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.462 243708 INFO nova.virt.libvirt.driver [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config.
Dec 13 04:33:32 compute-0 sudo[283733]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:33:32 compute-0 sudo[283733]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.691 243708 DEBUG nova.objects.instance [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:32 compute-0 nova_compute[243704]: 2025-12-13 04:33:32.740 243708 DEBUG oslo_concurrency.lockutils [None req-c80b1292-3143-40fd-9ede-6a05dbe8eef5 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:32 compute-0 podman[283773]: 2025-12-13 04:33:32.791645562 +0000 UTC m=+0.042253810 container create f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:33:32 compute-0 systemd[1]: Started libpod-conmon-f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b.scope.
Dec 13 04:33:32 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:33:32 compute-0 podman[283773]: 2025-12-13 04:33:32.771871504 +0000 UTC m=+0.022479802 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:33:32 compute-0 podman[283773]: 2025-12-13 04:33:32.874028121 +0000 UTC m=+0.124636369 container init f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:33:32 compute-0 podman[283773]: 2025-12-13 04:33:32.88133221 +0000 UTC m=+0.131940448 container start f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_albattani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:33:32 compute-0 podman[283773]: 2025-12-13 04:33:32.88610234 +0000 UTC m=+0.136710588 container attach f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_albattani, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:33:32 compute-0 keen_albattani[283789]: 167 167
Dec 13 04:33:32 compute-0 systemd[1]: libpod-f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b.scope: Deactivated successfully.
Dec 13 04:33:32 compute-0 podman[283773]: 2025-12-13 04:33:32.888488145 +0000 UTC m=+0.139096403 container died f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_albattani, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Dec 13 04:33:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-6138b056bdf071979bc903832e5d60da9ad24341278818977d6b16495c75aa54-merged.mount: Deactivated successfully.
Dec 13 04:33:32 compute-0 podman[283773]: 2025-12-13 04:33:32.930141768 +0000 UTC m=+0.180750006 container remove f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_albattani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:33:32 compute-0 systemd[1]: libpod-conmon-f7b2a74b61f78ad83103b3af6dc036cb49f61c63edf248bc8c36fc1dc074fe4b.scope: Deactivated successfully.
Dec 13 04:33:33 compute-0 podman[283811]: 2025-12-13 04:33:33.117567412 +0000 UTC m=+0.049361812 container create 2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_solomon, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:33:33 compute-0 systemd[1]: Started libpod-conmon-2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4.scope.
Dec 13 04:33:33 compute-0 podman[283811]: 2025-12-13 04:33:33.096731986 +0000 UTC m=+0.028526406 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:33:33 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbeff1f83fde0da2ffdbddc0516b2b4e843a50eed86f75d52bab2ba702f62b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbeff1f83fde0da2ffdbddc0516b2b4e843a50eed86f75d52bab2ba702f62b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbeff1f83fde0da2ffdbddc0516b2b4e843a50eed86f75d52bab2ba702f62b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbeff1f83fde0da2ffdbddc0516b2b4e843a50eed86f75d52bab2ba702f62b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbeff1f83fde0da2ffdbddc0516b2b4e843a50eed86f75d52bab2ba702f62b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:33 compute-0 podman[283811]: 2025-12-13 04:33:33.214524239 +0000 UTC m=+0.146318639 container init 2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_solomon, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Dec 13 04:33:33 compute-0 ceph-mon[75071]: pgmap v1973: 305 pgs: 305 active+clean; 352 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 156 KiB/s wr, 52 op/s
Dec 13 04:33:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:33:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:33:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:33:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:33:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:33:33 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:33:33 compute-0 podman[283811]: 2025-12-13 04:33:33.221462288 +0000 UTC m=+0.153256688 container start 2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_solomon, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:33:33 compute-0 podman[283811]: 2025-12-13 04:33:33.225721133 +0000 UTC m=+0.157515533 container attach 2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_solomon, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:33:33 compute-0 dazzling_solomon[283827]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:33:33 compute-0 dazzling_solomon[283827]: --> All data devices are unavailable
Dec 13 04:33:33 compute-0 systemd[1]: libpod-2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4.scope: Deactivated successfully.
Dec 13 04:33:33 compute-0 podman[283847]: 2025-12-13 04:33:33.762683413 +0000 UTC m=+0.026029710 container died 2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:33:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fbeff1f83fde0da2ffdbddc0516b2b4e843a50eed86f75d52bab2ba702f62b6-merged.mount: Deactivated successfully.
Dec 13 04:33:33 compute-0 podman[283847]: 2025-12-13 04:33:33.809521305 +0000 UTC m=+0.072867572 container remove 2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=dazzling_solomon, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:33:33 compute-0 systemd[1]: libpod-conmon-2802e81b946700e584794eb55fabe6af852cf752e6135debf3cb46968e21ace4.scope: Deactivated successfully.
Dec 13 04:33:33 compute-0 nova_compute[243704]: 2025-12-13 04:33:33.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:33 compute-0 sudo[283733]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:33 compute-0 sudo[283862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:33:33 compute-0 sudo[283862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:33 compute-0 sudo[283862]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:34 compute-0 sudo[283887]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:33:34 compute-0 sudo[283887]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 352 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 139 KiB/s wr, 44 op/s
Dec 13 04:33:34 compute-0 podman[283926]: 2025-12-13 04:33:34.322607886 +0000 UTC m=+0.041264763 container create cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ptolemy, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:33:34 compute-0 systemd[1]: Started libpod-conmon-cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2.scope.
Dec 13 04:33:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:33:34 compute-0 podman[283926]: 2025-12-13 04:33:34.304325909 +0000 UTC m=+0.022982806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:33:34 compute-0 podman[283926]: 2025-12-13 04:33:34.407972627 +0000 UTC m=+0.126629554 container init cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ptolemy, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Dec 13 04:33:34 compute-0 podman[283926]: 2025-12-13 04:33:34.418903174 +0000 UTC m=+0.137560051 container start cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Dec 13 04:33:34 compute-0 podman[283926]: 2025-12-13 04:33:34.422488591 +0000 UTC m=+0.141145478 container attach cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Dec 13 04:33:34 compute-0 reverent_ptolemy[283942]: 167 167
Dec 13 04:33:34 compute-0 systemd[1]: libpod-cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2.scope: Deactivated successfully.
Dec 13 04:33:34 compute-0 podman[283926]: 2025-12-13 04:33:34.424914927 +0000 UTC m=+0.143571814 container died cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ptolemy, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec 13 04:33:34 compute-0 nova_compute[243704]: 2025-12-13 04:33:34.429 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8a247c88f319a418d66173d292df70605cf6e17ea052e395080809fb1f93b2a-merged.mount: Deactivated successfully.
Dec 13 04:33:34 compute-0 podman[283926]: 2025-12-13 04:33:34.464866653 +0000 UTC m=+0.183523550 container remove cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Dec 13 04:33:34 compute-0 systemd[1]: libpod-conmon-cd2186960a0ec8a5fa6bbf57fc615e0cff93f150fa89c2b1e4b58d416c2322e2.scope: Deactivated successfully.
Dec 13 04:33:34 compute-0 podman[283965]: 2025-12-13 04:33:34.661265673 +0000 UTC m=+0.055557381 container create 521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Dec 13 04:33:34 compute-0 systemd[1]: Started libpod-conmon-521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792.scope.
Dec 13 04:33:34 compute-0 podman[283965]: 2025-12-13 04:33:34.631272048 +0000 UTC m=+0.025563806 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:33:34 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180ea01e9d3feced879739bcb658ab15fbe83840942df92a7028134f9fbc766/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180ea01e9d3feced879739bcb658ab15fbe83840942df92a7028134f9fbc766/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180ea01e9d3feced879739bcb658ab15fbe83840942df92a7028134f9fbc766/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c180ea01e9d3feced879739bcb658ab15fbe83840942df92a7028134f9fbc766/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:34 compute-0 podman[283965]: 2025-12-13 04:33:34.746406288 +0000 UTC m=+0.140698016 container init 521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Dec 13 04:33:34 compute-0 podman[283965]: 2025-12-13 04:33:34.756638597 +0000 UTC m=+0.150930325 container start 521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:33:34 compute-0 podman[283965]: 2025-12-13 04:33:34.760366158 +0000 UTC m=+0.154657866 container attach 521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:33:35 compute-0 gracious_shockley[283982]: {
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:     "0": [
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:         {
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "devices": [
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "/dev/loop3"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             ],
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_name": "ceph_lv0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_size": "21470642176",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "name": "ceph_lv0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "tags": {
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cluster_name": "ceph",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.crush_device_class": "",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.encrypted": "0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.objectstore": "bluestore",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osd_id": "0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.type": "block",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.vdo": "0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.with_tpm": "0"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             },
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "type": "block",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "vg_name": "ceph_vg0"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:         }
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:     ],
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:     "1": [
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:         {
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "devices": [
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "/dev/loop4"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             ],
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_name": "ceph_lv1",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_size": "21470642176",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "name": "ceph_lv1",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "tags": {
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cluster_name": "ceph",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.crush_device_class": "",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.encrypted": "0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.objectstore": "bluestore",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osd_id": "1",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.type": "block",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.vdo": "0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.with_tpm": "0"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             },
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "type": "block",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "vg_name": "ceph_vg1"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:         }
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:     ],
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:     "2": [
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:         {
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "devices": [
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "/dev/loop5"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             ],
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_name": "ceph_lv2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_size": "21470642176",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "name": "ceph_lv2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "tags": {
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.cluster_name": "ceph",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.crush_device_class": "",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.encrypted": "0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.objectstore": "bluestore",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osd_id": "2",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.type": "block",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.vdo": "0",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:                 "ceph.with_tpm": "0"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             },
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "type": "block",
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:             "vg_name": "ceph_vg2"
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:         }
Dec 13 04:33:35 compute-0 gracious_shockley[283982]:     ]
Dec 13 04:33:35 compute-0 gracious_shockley[283982]: }
Dec 13 04:33:35 compute-0 systemd[1]: libpod-521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792.scope: Deactivated successfully.
Dec 13 04:33:35 compute-0 podman[283965]: 2025-12-13 04:33:35.076093932 +0000 UTC m=+0.470385660 container died 521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Dec 13 04:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-c180ea01e9d3feced879739bcb658ab15fbe83840942df92a7028134f9fbc766-merged.mount: Deactivated successfully.
Dec 13 04:33:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:33:35.113 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:33:35.115 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:33:35.117 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:35 compute-0 podman[283965]: 2025-12-13 04:33:35.122931275 +0000 UTC m=+0.517222983 container remove 521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_shockley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:33:35 compute-0 systemd[1]: libpod-conmon-521ceca72d9338941573357d30063ca9e78ba4aab388bcf35d30bca88d64b792.scope: Deactivated successfully.
Dec 13 04:33:35 compute-0 sudo[283887]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:35 compute-0 ceph-mon[75071]: pgmap v1974: 305 pgs: 305 active+clean; 352 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 139 KiB/s wr, 44 op/s
Dec 13 04:33:35 compute-0 sudo[284002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:33:35 compute-0 sudo[284002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:35 compute-0 sudo[284002]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:35 compute-0 sudo[284027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.308 243708 DEBUG oslo_concurrency.lockutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.308 243708 DEBUG oslo_concurrency.lockutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:35 compute-0 sudo[284027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.319 243708 DEBUG nova.objects.instance [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.353 243708 DEBUG oslo_concurrency.lockutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.524 243708 DEBUG oslo_concurrency.lockutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.524 243708 DEBUG oslo_concurrency.lockutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.525 243708 INFO nova.compute.manager [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attaching volume 36f752fb-927d-448e-a92e-c58e5bad4513 to /dev/vdb
Dec 13 04:33:35 compute-0 podman[284064]: 2025-12-13 04:33:35.590530058 +0000 UTC m=+0.047172844 container create 0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:33:35 compute-0 systemd[1]: Started libpod-conmon-0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce.scope.
Dec 13 04:33:35 compute-0 podman[284064]: 2025-12-13 04:33:35.565856987 +0000 UTC m=+0.022499773 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.664 243708 DEBUG os_brick.utils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.665 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.677 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.677 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[17d515ec-7566-4901-b588-088cae058955]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.679 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.686 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.686 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[2b6d3abe-66af-474b-b067-9da646a90785]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:35 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.687 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.696 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.696 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[a7c86edd-1afd-4d05-9bf9-510c2be32ebf]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.698 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[b61cb692-a811-457d-a597-23308c6ab903]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.699 243708 DEBUG oslo_concurrency.processutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.719 243708 DEBUG oslo_concurrency.processutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "nvme version" returned: 0 in 0.019s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.722 243708 DEBUG os_brick.initiator.connectors.lightos [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.722 243708 DEBUG os_brick.initiator.connectors.lightos [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.723 243708 DEBUG os_brick.initiator.connectors.lightos [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.723 243708 DEBUG os_brick.utils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] <== get_connector_properties: return (58ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.724 243708 DEBUG nova.virt.block_device [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating existing volume attachment record: 9b9c2a75-38db-4dca-bd17-fa30f4ef9785 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:33:35 compute-0 podman[284064]: 2025-12-13 04:33:35.734208924 +0000 UTC m=+0.190851740 container init 0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mahavira, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:33:35 compute-0 podman[284064]: 2025-12-13 04:33:35.741896243 +0000 UTC m=+0.198539029 container start 0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mahavira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:33:35 compute-0 competent_mahavira[284080]: 167 167
Dec 13 04:33:35 compute-0 systemd[1]: libpod-0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce.scope: Deactivated successfully.
Dec 13 04:33:35 compute-0 podman[284064]: 2025-12-13 04:33:35.765803004 +0000 UTC m=+0.222445810 container attach 0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mahavira, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:33:35 compute-0 podman[284064]: 2025-12-13 04:33:35.766567384 +0000 UTC m=+0.223210170 container died 0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Dec 13 04:33:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-87f5f229fda1feca974f61e1c3292f4b1458533beb84478a4b2a4354d052799c-merged.mount: Deactivated successfully.
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:33:35 compute-0 nova_compute[243704]: 2025-12-13 04:33:35.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:33:36 compute-0 podman[284064]: 2025-12-13 04:33:36.097711957 +0000 UTC m=+0.554354743 container remove 0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mahavira, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec 13 04:33:36 compute-0 systemd[1]: libpod-conmon-0ed9cbdeba2acdc9ceee9a421dde9fb50f3c56072ddc723548d8ee3c3d9d06ce.scope: Deactivated successfully.
Dec 13 04:33:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 153 KiB/s wr, 76 op/s
Dec 13 04:33:36 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:33:36 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1616392571' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:36 compute-0 podman[284110]: 2025-12-13 04:33:36.374980006 +0000 UTC m=+0.142139076 container create 75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_easley, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Dec 13 04:33:36 compute-0 podman[284110]: 2025-12-13 04:33:36.282668076 +0000 UTC m=+0.049827126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:33:36 compute-0 systemd[1]: Started libpod-conmon-75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b.scope.
Dec 13 04:33:36 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99a69489dc483f0a67323c74c8ed8327df67bbb082cba1dbb61e3982329c66/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99a69489dc483f0a67323c74c8ed8327df67bbb082cba1dbb61e3982329c66/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99a69489dc483f0a67323c74c8ed8327df67bbb082cba1dbb61e3982329c66/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe99a69489dc483f0a67323c74c8ed8327df67bbb082cba1dbb61e3982329c66/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.711 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.724 243708 DEBUG nova.objects.instance [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.742 243708 DEBUG nova.virt.libvirt.driver [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to attach volume 36f752fb-927d-448e-a92e-c58e5bad4513 with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:33:36 compute-0 podman[284110]: 2025-12-13 04:33:36.74660014 +0000 UTC m=+0.513759200 container init 75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_easley, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.746 243708 DEBUG nova.virt.libvirt.guest [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:33:36 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:36 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-36f752fb-927d-448e-a92e-c58e5bad4513">
Dec 13 04:33:36 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:36 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:36 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:33:36 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:33:36 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:33:36 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:36 compute-0 nova_compute[243704]:   <serial>36f752fb-927d-448e-a92e-c58e5bad4513</serial>
Dec 13 04:33:36 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:36 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:33:36 compute-0 podman[284110]: 2025-12-13 04:33:36.82347672 +0000 UTC m=+0.590635760 container start 75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_easley, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:33:36 compute-0 podman[284110]: 2025-12-13 04:33:36.843391651 +0000 UTC m=+0.610550711 container attach 75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_easley, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.947 243708 DEBUG nova.virt.libvirt.driver [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.948 243708 DEBUG nova.virt.libvirt.driver [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.948 243708 DEBUG nova.virt.libvirt.driver [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:36 compute-0 nova_compute[243704]: 2025-12-13 04:33:36.949 243708 DEBUG nova.virt.libvirt.driver [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No VIF found with MAC fa:16:3e:83:10:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:33:37 compute-0 lvm[284223]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:33:37 compute-0 lvm[284223]: VG ceph_vg0 finished
Dec 13 04:33:37 compute-0 lvm[284227]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:33:37 compute-0 lvm[284227]: VG ceph_vg2 finished
Dec 13 04:33:37 compute-0 lvm[284226]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:33:37 compute-0 lvm[284226]: VG ceph_vg1 finished
Dec 13 04:33:37 compute-0 vibrant_easley[284126]: {}
Dec 13 04:33:37 compute-0 systemd[1]: libpod-75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b.scope: Deactivated successfully.
Dec 13 04:33:37 compute-0 podman[284110]: 2025-12-13 04:33:37.62052251 +0000 UTC m=+1.387681550 container died 75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:33:37 compute-0 systemd[1]: libpod-75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b.scope: Consumed 1.329s CPU time.
Dec 13 04:33:37 compute-0 ceph-mon[75071]: pgmap v1975: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 153 KiB/s wr, 76 op/s
Dec 13 04:33:37 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1616392571' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 123 KiB/s wr, 62 op/s
Dec 13 04:33:38 compute-0 nova_compute[243704]: 2025-12-13 04:33:38.147 243708 DEBUG oslo_concurrency.lockutils [None req-9c422dd8-379a-4d4f-9a51-17e02b8eef60 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 2.623s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-fe99a69489dc483f0a67323c74c8ed8327df67bbb082cba1dbb61e3982329c66-merged.mount: Deactivated successfully.
Dec 13 04:33:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:38 compute-0 podman[284110]: 2025-12-13 04:33:38.937886357 +0000 UTC m=+2.705045397 container remove 75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Dec 13 04:33:38 compute-0 sudo[284027]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:38 compute-0 systemd[1]: libpod-conmon-75444de9f6039442b737cf0bd2c166a116072a6955a8c9dcbcef98d6803bf49b.scope: Deactivated successfully.
Dec 13 04:33:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:33:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:33:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:33:39 compute-0 podman[284243]: 2025-12-13 04:33:39.039051888 +0000 UTC m=+0.695197503 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec 13 04:33:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:33:39 compute-0 sudo[284262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:33:39 compute-0 sudo[284262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:33:39 compute-0 sudo[284262]: pam_unix(sudo:session): session closed for user root
Dec 13 04:33:39 compute-0 nova_compute[243704]: 2025-12-13 04:33:39.432 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:39 compute-0 ceph-mon[75071]: pgmap v1976: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 123 KiB/s wr, 62 op/s
Dec 13 04:33:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:33:39 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:33:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 84 KiB/s wr, 44 op/s
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.234 243708 DEBUG oslo_concurrency.lockutils [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.234 243708 DEBUG oslo_concurrency.lockutils [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.255 243708 INFO nova.compute.manager [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Detaching volume 36f752fb-927d-448e-a92e-c58e5bad4513
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.399 243708 INFO nova.virt.block_device [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to driver detach volume 36f752fb-927d-448e-a92e-c58e5bad4513 from mountpoint /dev/vdb
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.409 243708 DEBUG nova.virt.libvirt.driver [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Attempting to detach device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.410 243708 DEBUG nova.virt.libvirt.guest [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-36f752fb-927d-448e-a92e-c58e5bad4513">
Dec 13 04:33:40 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <serial>36f752fb-927d-448e-a92e-c58e5bad4513</serial>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:40 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.427 243708 INFO nova.virt.libvirt.driver [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config.
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.427 243708 DEBUG nova.virt.libvirt.driver [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.428 243708 DEBUG nova.virt.libvirt.guest [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-36f752fb-927d-448e-a92e-c58e5bad4513">
Dec 13 04:33:40 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <serial>36f752fb-927d-448e-a92e-c58e5bad4513</serial>
Dec 13 04:33:40 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:40 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:40 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.552 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765600420.5519857, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.554 243708 DEBUG nova.virt.libvirt.driver [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.556 243708 INFO nova.virt.libvirt.driver [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config.
Dec 13 04:33:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:33:40
Dec 13 04:33:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:33:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:33:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.control', '.rgw.root', 'vms', 'backups']
Dec 13 04:33:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.712 243708 DEBUG nova.objects.instance [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:40 compute-0 nova_compute[243704]: 2025-12-13 04:33:40.758 243708 DEBUG oslo_concurrency.lockutils [None req-4fdb0a2f-309c-409b-ae6a-4d24fe384d43 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:40 compute-0 ceph-mon[75071]: pgmap v1977: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 101 KiB/s rd, 84 KiB/s wr, 44 op/s
Dec 13 04:33:41 compute-0 nova_compute[243704]: 2025-12-13 04:33:41.713 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:41 compute-0 podman[284289]: 2025-12-13 04:33:41.920999154 +0000 UTC m=+0.068457472 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:33:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 77 KiB/s wr, 40 op/s
Dec 13 04:33:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:33:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:33:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:33:43 compute-0 ceph-mon[75071]: pgmap v1978: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 77 KiB/s wr, 40 op/s
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.481 243708 DEBUG oslo_concurrency.lockutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.481 243708 DEBUG oslo_concurrency.lockutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.495 243708 DEBUG nova.objects.instance [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.535 243708 DEBUG oslo_concurrency.lockutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.054s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.795 243708 DEBUG oslo_concurrency.lockutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.796 243708 DEBUG oslo_concurrency.lockutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.796 243708 INFO nova.compute.manager [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attaching volume 57d0d466-3681-4f69-801c-5683eb0bbd0c to /dev/vdb
Dec 13 04:33:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.941 243708 DEBUG os_brick.utils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.942 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.951 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.952 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[a05ad983-f75b-417e-9039-525337fc5c63]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.953 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.961 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.008s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.961 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[cb9e3cee-8c7c-43e0-93f2-f63c8d144e45]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.962 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.969 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.007s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.969 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[4574f6bc-c501-4200-a86c-8f0d2e85a9af]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.971 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[d1ed6034-4880-4537-b2c2-61326004cd71]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.971 243708 DEBUG oslo_concurrency.processutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.994 243708 DEBUG oslo_concurrency.processutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "nvme version" returned: 0 in 0.023s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.996 243708 DEBUG os_brick.initiator.connectors.lightos [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.997 243708 DEBUG os_brick.initiator.connectors.lightos [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.997 243708 DEBUG os_brick.initiator.connectors.lightos [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.997 243708 DEBUG os_brick.utils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] <== get_connector_properties: return (55ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:33:43 compute-0 nova_compute[243704]: 2025-12-13 04:33:43.997 243708 DEBUG nova.virt.block_device [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating existing volume attachment record: 5d49a0a9-0d82-4c08-a5d1-d21222871003 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:33:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 84 KiB/s wr, 45 op/s
Dec 13 04:33:44 compute-0 nova_compute[243704]: 2025-12-13 04:33:44.434 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:33:44 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1972946639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:44 compute-0 nova_compute[243704]: 2025-12-13 04:33:44.833 243708 DEBUG nova.objects.instance [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:44 compute-0 nova_compute[243704]: 2025-12-13 04:33:44.861 243708 DEBUG nova.virt.libvirt.driver [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to attach volume 57d0d466-3681-4f69-801c-5683eb0bbd0c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:33:44 compute-0 nova_compute[243704]: 2025-12-13 04:33:44.863 243708 DEBUG nova.virt.libvirt.guest [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:33:44 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:44 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-57d0d466-3681-4f69-801c-5683eb0bbd0c">
Dec 13 04:33:44 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:44 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:44 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:33:44 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:33:44 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:33:44 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:44 compute-0 nova_compute[243704]:   <serial>57d0d466-3681-4f69-801c-5683eb0bbd0c</serial>
Dec 13 04:33:44 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:44 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:33:45 compute-0 nova_compute[243704]: 2025-12-13 04:33:45.045 243708 DEBUG nova.virt.libvirt.driver [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:45 compute-0 nova_compute[243704]: 2025-12-13 04:33:45.045 243708 DEBUG nova.virt.libvirt.driver [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:45 compute-0 nova_compute[243704]: 2025-12-13 04:33:45.045 243708 DEBUG nova.virt.libvirt.driver [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:45 compute-0 nova_compute[243704]: 2025-12-13 04:33:45.045 243708 DEBUG nova.virt.libvirt.driver [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No VIF found with MAC fa:16:3e:83:10:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:33:45 compute-0 nova_compute[243704]: 2025-12-13 04:33:45.318 243708 DEBUG oslo_concurrency.lockutils [None req-bc4df861-709e-4025-a293-70da24709d3a 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:45 compute-0 ceph-mon[75071]: pgmap v1979: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 84 KiB/s wr, 45 op/s
Dec 13 04:33:45 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1972946639' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:33:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396936764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:33:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:33:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/396936764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:33:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 73 KiB/s wr, 51 op/s
Dec 13 04:33:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/396936764' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:33:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/396936764' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:33:46 compute-0 nova_compute[243704]: 2025-12-13 04:33:46.758 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:47 compute-0 ceph-mon[75071]: pgmap v1980: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 73 KiB/s wr, 51 op/s
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.074 243708 DEBUG oslo_concurrency.lockutils [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.074 243708 DEBUG oslo_concurrency.lockutils [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.087 243708 INFO nova.compute.manager [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Detaching volume 57d0d466-3681-4f69-801c-5683eb0bbd0c
Dec 13 04:33:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 63 KiB/s wr, 29 op/s
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.213 243708 INFO nova.virt.block_device [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to driver detach volume 57d0d466-3681-4f69-801c-5683eb0bbd0c from mountpoint /dev/vdb
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.223 243708 DEBUG nova.virt.libvirt.driver [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Attempting to detach device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.224 243708 DEBUG nova.virt.libvirt.guest [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-57d0d466-3681-4f69-801c-5683eb0bbd0c">
Dec 13 04:33:48 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <serial>57d0d466-3681-4f69-801c-5683eb0bbd0c</serial>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:48 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.248 243708 INFO nova.virt.libvirt.driver [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config.
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.249 243708 DEBUG nova.virt.libvirt.driver [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.249 243708 DEBUG nova.virt.libvirt.guest [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-57d0d466-3681-4f69-801c-5683eb0bbd0c">
Dec 13 04:33:48 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <serial>57d0d466-3681-4f69-801c-5683eb0bbd0c</serial>
Dec 13 04:33:48 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:48 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:48 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.346 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765600428.3463864, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.350 243708 DEBUG nova.virt.libvirt.driver [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.353 243708 INFO nova.virt.libvirt.driver [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config.
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.515 243708 DEBUG nova.objects.instance [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:48 compute-0 nova_compute[243704]: 2025-12-13 04:33:48.571 243708 DEBUG oslo_concurrency.lockutils [None req-48702697-af6c-4c8b-85d0-36fe4400e141 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:49 compute-0 nova_compute[243704]: 2025-12-13 04:33:49.437 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:49 compute-0 ceph-mon[75071]: pgmap v1981: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 80 KiB/s rd, 63 KiB/s wr, 29 op/s
Dec 13 04:33:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 119 KiB/s wr, 43 op/s
Dec 13 04:33:51 compute-0 nova_compute[243704]: 2025-12-13 04:33:51.435 243708 DEBUG oslo_concurrency.lockutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:51 compute-0 nova_compute[243704]: 2025-12-13 04:33:51.436 243708 DEBUG oslo_concurrency.lockutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:51 compute-0 nova_compute[243704]: 2025-12-13 04:33:51.453 243708 DEBUG nova.objects.instance [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:51 compute-0 ceph-mon[75071]: pgmap v1982: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 119 KiB/s wr, 43 op/s
Dec 13 04:33:51 compute-0 nova_compute[243704]: 2025-12-13 04:33:51.565 243708 DEBUG oslo_concurrency.lockutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.reserve_block_device_name.<locals>.do_reserve" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:51 compute-0 nova_compute[243704]: 2025-12-13 04:33:51.763 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 71 KiB/s wr, 31 op/s
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.287 243708 DEBUG oslo_concurrency.lockutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.288 243708 DEBUG oslo_concurrency.lockutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.288 243708 INFO nova.compute.manager [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attaching volume cf7bdec9-6748-4747-98c8-d91576d0531c to /dev/vdb
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.422 243708 DEBUG os_brick.utils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] ==> get_connector_properties: call "{'root_helper': 'sudo nova-rootwrap /etc/nova/rootwrap.conf', 'my_ip': '192.168.122.100', 'multipath': True, 'enforce_multipath': True, 'host': 'compute-0.ctlplane.example.com', 'execute': None}" trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:176
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.425 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): multipathd show status execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.438 250512 DEBUG oslo_concurrency.processutils [-] CMD "multipathd show status" returned: 0 in 0.013s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.439 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[39230068-9ba8-4a20-90ce-c51702916bae]: (4, ('path checker states:\n\npaths: 0\nbusy: False\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.441 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): cat /etc/iscsi/initiatorname.iscsi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.450 250512 DEBUG oslo_concurrency.processutils [-] CMD "cat /etc/iscsi/initiatorname.iscsi" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.451 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[3315c67f-a6da-4d5f-8613-6038940d07aa]: (4, ('InitiatorName=iqn.1994-05.com.redhat:4ca244c1298', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.453 250512 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): findmnt -v / -n -o SOURCE execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.464 250512 DEBUG oslo_concurrency.processutils [-] CMD "findmnt -v / -n -o SOURCE" returned: 0 in 0.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.464 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[4bd20759-df1b-4509-94d6-54c4cd78bdb7]: (4, ('overlay\n', '')) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.466 250512 DEBUG oslo.privsep.daemon [-] privsep: reply[118f2ff5-6e9e-4359-b974-e8db1490c3ca]: (4, '90cce6d2-aa09-4bc1-a87e-fb31e9108c78') _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.466 243708 DEBUG oslo_concurrency.processutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): nvme version execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.495 243708 DEBUG oslo_concurrency.processutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "nvme version" returned: 0 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.497 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: [Errno 111] ECONNREFUSED find_dsc /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:98
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.498 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: did not find dsc, continuing anyway. get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:76
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.498 243708 DEBUG os_brick.initiator.connectors.lightos [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] LIGHTOS: finally hostnqn: nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4 dsc:  get_connector_properties /usr/lib/python3.9/site-packages/os_brick/initiator/connectors/lightos.py:79
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.498 243708 DEBUG os_brick.utils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] <== get_connector_properties: return (75ms) {'platform': 'x86_64', 'os_type': 'linux', 'ip': '192.168.122.100', 'host': 'compute-0.ctlplane.example.com', 'multipath': True, 'initiator': 'iqn.1994-05.com.redhat:4ca244c1298', 'do_local_attach': False, 'nvme_hostid': 'e61ebeb9-32de-4b3b-b463-d59237136be4', 'system uuid': '90cce6d2-aa09-4bc1-a87e-fb31e9108c78', 'nqn': 'nqn.2014-08.org.nvmexpress:uuid:e61ebeb9-32de-4b3b-b463-d59237136be4', 'nvme_native_multipath': True, 'found_dsc': ''} trace_logging_wrapper /usr/lib/python3.9/site-packages/os_brick/utils.py:203
Dec 13 04:33:52 compute-0 nova_compute[243704]: 2025-12-13 04:33:52.499 243708 DEBUG nova.virt.block_device [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating existing volume attachment record: 88d77111-cea3-4cf0-ae81-a3c6ca8c2fc0 _volume_attach /usr/lib/python3.9/site-packages/nova/virt/block_device.py:631
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007657407666351861 of space, bias 1.0, pg target 0.22972222999055583 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029469122448540093 of space, bias 1.0, pg target 0.8840736734562028 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4620595679755415e-06 of space, bias 1.0, pg target 0.0007386178703926625 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667018720722839 of space, bias 1.0, pg target 0.20001056162168515 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3175820779573749e-06 of space, bias 4.0, pg target 0.0015810984935488498 quantized to 16 (current 16)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:33:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:33:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Dec 13 04:33:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2075922433' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.475 243708 DEBUG nova.objects.instance [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.501 243708 DEBUG nova.virt.libvirt.driver [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to attach volume cf7bdec9-6748-4747-98c8-d91576d0531c with discard support enabled to an instance using an unsupported configuration. target_bus = virtio. Trim commands will not be issued to the storage device. _check_discard_for_attach_volume /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2168
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.505 243708 DEBUG nova.virt.libvirt.guest [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] attach device xml: <disk type="network" device="disk">
Dec 13 04:33:53 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:53 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-cf7bdec9-6748-4747-98c8-d91576d0531c">
Dec 13 04:33:53 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:53 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:53 compute-0 nova_compute[243704]:   <auth username="openstack">
Dec 13 04:33:53 compute-0 nova_compute[243704]:     <secret type="ceph" uuid="437a9f04-06b7-56e3-8a4b-f52a1199dd32"/>
Dec 13 04:33:53 compute-0 nova_compute[243704]:   </auth>
Dec 13 04:33:53 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:53 compute-0 nova_compute[243704]:   <serial>cf7bdec9-6748-4747-98c8-d91576d0531c</serial>
Dec 13 04:33:53 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:53 compute-0 nova_compute[243704]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Dec 13 04:33:53 compute-0 ceph-mon[75071]: pgmap v1983: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 71 KiB/s wr, 31 op/s
Dec 13 04:33:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2075922433' entity='client.openstack' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.708 243708 DEBUG nova.virt.libvirt.driver [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.709 243708 DEBUG nova.virt.libvirt.driver [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.709 243708 DEBUG nova.virt.libvirt.driver [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.710 243708 DEBUG nova.virt.libvirt.driver [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] No VIF found with MAC fa:16:3e:83:10:ef, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Dec 13 04:33:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:53 compute-0 nova_compute[243704]: 2025-12-13 04:33:53.933 243708 DEBUG oslo_concurrency.lockutils [None req-c8281dd0-4d29-4f1a-aef5-6367d8c743a2 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.attach_volume.<locals>.do_attach_volume" :: held 1.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 73 KiB/s wr, 36 op/s
Dec 13 04:33:54 compute-0 nova_compute[243704]: 2025-12-13 04:33:54.440 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:55 compute-0 ceph-mon[75071]: pgmap v1984: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 89 KiB/s rd, 73 KiB/s wr, 36 op/s
Dec 13 04:33:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 59 KiB/s wr, 41 op/s
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.310 243708 DEBUG oslo_concurrency.lockutils [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.311 243708 DEBUG oslo_concurrency.lockutils [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.330 243708 INFO nova.compute.manager [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Detaching volume cf7bdec9-6748-4747-98c8-d91576d0531c
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.512 243708 INFO nova.virt.block_device [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Attempting to driver detach volume cf7bdec9-6748-4747-98c8-d91576d0531c from mountpoint /dev/vdb
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.522 243708 DEBUG nova.virt.libvirt.driver [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Attempting to detach device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.523 243708 DEBUG nova.virt.libvirt.guest [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-cf7bdec9-6748-4747-98c8-d91576d0531c">
Dec 13 04:33:56 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <serial>cf7bdec9-6748-4747-98c8-d91576d0531c</serial>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:56 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.533 243708 INFO nova.virt.libvirt.driver [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the persistent domain config.
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.533 243708 DEBUG nova.virt.libvirt.driver [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] (1/8): Attempting to detach device vdb with device alias virtio-disk1 from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.534 243708 DEBUG nova.virt.libvirt.guest [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] detach device xml: <disk type="network" device="disk">
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <driver name="qemu" type="raw" cache="none" discard="unmap"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <source protocol="rbd" name="volumes/volume-cf7bdec9-6748-4747-98c8-d91576d0531c">
Dec 13 04:33:56 compute-0 nova_compute[243704]:     <host name="192.168.122.100" port="6789"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   </source>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <target dev="vdb" bus="virtio"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <serial>cf7bdec9-6748-4747-98c8-d91576d0531c</serial>
Dec 13 04:33:56 compute-0 nova_compute[243704]:   <address type="pci" domain="0x0000" bus="0x06" slot="0x00" function="0x0"/>
Dec 13 04:33:56 compute-0 nova_compute[243704]: </disk>
Dec 13 04:33:56 compute-0 nova_compute[243704]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.655 243708 DEBUG nova.virt.libvirt.driver [None req-84cab793-354a-43c8-b80f-d9d3c0bb8ca8 - - - - - -] Received event <DeviceRemovedEvent: 1765600436.6550667, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => virtio-disk1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.657 243708 DEBUG nova.virt.libvirt.driver [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Start waiting for the detach event from libvirt for device vdb with device alias virtio-disk1 for instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.661 243708 INFO nova.virt.libvirt.driver [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully detached device vdb from instance 0dd460c9-84b7-4ae0-a559-418f54258fe1 from the live domain config.
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.769 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.870 243708 DEBUG nova.objects.instance [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'flavor' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:33:56 compute-0 nova_compute[243704]: 2025-12-13 04:33:56.917 243708 DEBUG oslo_concurrency.lockutils [None req-e6b298e7-ff25-4438-88f1-7b8c02965a53 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.detach_volume.<locals>.do_detach_volume" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:33:57 compute-0 ceph-mon[75071]: pgmap v1985: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 59 KiB/s wr, 41 op/s
Dec 13 04:33:57 compute-0 podman[284367]: 2025-12-13 04:33:57.968792873 +0000 UTC m=+0.109397945 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 13 04:33:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 58 KiB/s wr, 31 op/s
Dec 13 04:33:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:33:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303845183' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:33:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:33:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1303845183' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:33:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1303845183' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:33:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1303845183' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:33:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:33:59 compute-0 nova_compute[243704]: 2025-12-13 04:33:59.443 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:33:59 compute-0 ceph-mon[75071]: pgmap v1986: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 83 KiB/s rd, 58 KiB/s wr, 31 op/s
Dec 13 04:33:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:33:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3250648325' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:33:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:33:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3250648325' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 127 KiB/s wr, 62 op/s
Dec 13 04:34:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3250648325' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:00 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3250648325' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:34:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2822990193' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:34:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2822990193' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:01 compute-0 ceph-mon[75071]: pgmap v1987: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 164 KiB/s rd, 127 KiB/s wr, 62 op/s
Dec 13 04:34:01 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2822990193' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:01 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2822990193' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:01 compute-0 nova_compute[243704]: 2025-12-13 04:34:01.818 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 71 KiB/s wr, 48 op/s
Dec 13 04:34:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e488 do_prune osdmap full prune enabled
Dec 13 04:34:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e489 e489: 3 total, 3 up, 3 in
Dec 13 04:34:02 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e489: 3 total, 3 up, 3 in
Dec 13 04:34:03 compute-0 ceph-mon[75071]: pgmap v1988: 305 pgs: 305 active+clean; 354 MiB data, 699 MiB used, 59 GiB / 60 GiB avail; 94 KiB/s rd, 71 KiB/s wr, 48 op/s
Dec 13 04:34:03 compute-0 ceph-mon[75071]: osdmap e489: 3 total, 3 up, 3 in
Dec 13 04:34:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e489 do_prune osdmap full prune enabled
Dec 13 04:34:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e490 e490: 3 total, 3 up, 3 in
Dec 13 04:34:03 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e490: 3 total, 3 up, 3 in
Dec 13 04:34:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e490 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 105 KiB/s wr, 74 op/s
Dec 13 04:34:04 compute-0 nova_compute[243704]: 2025-12-13 04:34:04.445 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e490 do_prune osdmap full prune enabled
Dec 13 04:34:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e491 e491: 3 total, 3 up, 3 in
Dec 13 04:34:04 compute-0 ceph-mon[75071]: osdmap e490: 3 total, 3 up, 3 in
Dec 13 04:34:04 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e491: 3 total, 3 up, 3 in
Dec 13 04:34:05 compute-0 ceph-mon[75071]: pgmap v1991: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 142 KiB/s rd, 105 KiB/s wr, 74 op/s
Dec 13 04:34:05 compute-0 ceph-mon[75071]: osdmap e491: 3 total, 3 up, 3 in
Dec 13 04:34:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 5.2 KiB/s wr, 122 op/s
Dec 13 04:34:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:34:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3753011716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:34:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3753011716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3753011716' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/3753011716' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:06 compute-0 nova_compute[243704]: 2025-12-13 04:34:06.818 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.450 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.451 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.451 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.451 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.452 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.454 243708 INFO nova.compute.manager [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Terminating instance
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.455 243708 DEBUG nova.compute.manager [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec 13 04:34:07 compute-0 kernel: tapef748f46-f1 (unregistering): left promiscuous mode
Dec 13 04:34:07 compute-0 NetworkManager[48899]: <info>  [1765600447.5120] device (tapef748f46-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec 13 04:34:07 compute-0 ovn_controller[145204]: 2025-12-13T04:34:07Z|00283|binding|INFO|Releasing lport ef748f46-f14c-4151-878f-146280febd4e from this chassis (sb_readonly=0)
Dec 13 04:34:07 compute-0 ovn_controller[145204]: 2025-12-13T04:34:07Z|00284|binding|INFO|Setting lport ef748f46-f14c-4151-878f-146280febd4e down in Southbound
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.521 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 ovn_controller[145204]: 2025-12-13T04:34:07Z|00285|binding|INFO|Removing iface tapef748f46-f1 ovn-installed in OVS
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.527 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.558 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Dec 13 04:34:07 compute-0 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001e.scope: Consumed 18.324s CPU time.
Dec 13 04:34:07 compute-0 systemd-machined[206767]: Machine qemu-30-instance-0000001e terminated.
Dec 13 04:34:07 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:07.649 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:10:ef 10.100.0.14'], port_security=['fa:16:3e:83:10:ef 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '0dd460c9-84b7-4ae0-a559-418f54258fe1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-35b97038-818f-4818-aa78-03e50d5de529', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '6229053e06554ebebd8cbafe5a6dbb81', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3a7583a8-ea86-4146-9e71-f4520807f9fb 438da6d3-6e05-4a13-8e69-07ef61fc8b32', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.247'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=819009fe-c70b-4887-8d44-8031dbdcb5fc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>], logical_port=ef748f46-f14c-4151-878f-146280febd4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fd7e692ffd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:34:07 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:07.651 154842 INFO neutron.agent.ovn.metadata.agent [-] Port ef748f46-f14c-4151-878f-146280febd4e in datapath 35b97038-818f-4818-aa78-03e50d5de529 unbound from our chassis
Dec 13 04:34:07 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:07.654 154842 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 35b97038-818f-4818-aa78-03e50d5de529, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec 13 04:34:07 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:07.657 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e2e451-3480-4da7-8c56-6beb3b95e746]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:07 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:07.658 154842 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-35b97038-818f-4818-aa78-03e50d5de529 namespace which is not needed anymore
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.691 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.700 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.706 243708 INFO nova.virt.libvirt.driver [-] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Instance destroyed successfully.
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.706 243708 DEBUG nova.objects.instance [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lazy-loading 'resources' on Instance uuid 0dd460c9-84b7-4ae0-a559-418f54258fe1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.722 243708 DEBUG nova.virt.libvirt.vif [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-13T04:32:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SnapshotDataIntegrityTests-server-758538397',display_name='tempest-SnapshotDataIntegrityTests-server-758538397',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-snapshotdataintegritytests-server-758538397',id=30,image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKvmiK6kmoAIL/+Yrs4aEMDG71viPy2FBqEZs8wU5VTGSSRBZ+Kvlm3x7ap9w+ejjteItxk+BAjtf+s3CecR0+wvBssolKT/KIgL22+FhDRrK4GgwbAWXAFIzWQFTqKdkw==',key_name='tempest-keypair-1348722630',keypairs=<?>,launch_index=0,launched_at=2025-12-13T04:32:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='6229053e06554ebebd8cbafe5a6dbb81',ramdisk_id='',reservation_id='r-9e3eng4c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='36cf6469-9e96-4186-bf30-37c785f25db6',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SnapshotDataIntegrityTests-1217316932',owner_user_name='tempest-SnapshotDataIntegrityTests-1217316932-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-13T04:32:59Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8c473655a64a434b9a574fee057cb112',uuid=0dd460c9-84b7-4ae0-a559-418f54258fe1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.723 243708 DEBUG nova.network.os_vif_util [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Converting VIF {"id": "ef748f46-f14c-4151-878f-146280febd4e", "address": "fa:16:3e:83:10:ef", "network": {"id": "35b97038-818f-4818-aa78-03e50d5de529", "bridge": "br-int", "label": "tempest-SnapshotDataIntegrityTests-940185968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.247", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "6229053e06554ebebd8cbafe5a6dbb81", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapef748f46-f1", "ovs_interfaceid": "ef748f46-f14c-4151-878f-146280febd4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.724 243708 DEBUG nova.network.os_vif_util [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:10:ef,bridge_name='br-int',has_traffic_filtering=True,id=ef748f46-f14c-4151-878f-146280febd4e,network=Network(35b97038-818f-4818-aa78-03e50d5de529),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef748f46-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.725 243708 DEBUG os_vif [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:10:ef,bridge_name='br-int',has_traffic_filtering=True,id=ef748f46-f14c-4151-878f-146280febd4e,network=Network(35b97038-818f-4818-aa78-03e50d5de529),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef748f46-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.728 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 21 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.729 243708 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapef748f46-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.732 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.734 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec 13 04:34:07 compute-0 nova_compute[243704]: 2025-12-13 04:34:07.742 243708 INFO os_vif [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:10:ef,bridge_name='br-int',has_traffic_filtering=True,id=ef748f46-f14c-4151-878f-146280febd4e,network=Network(35b97038-818f-4818-aa78-03e50d5de529),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapef748f46-f1')
Dec 13 04:34:07 compute-0 ceph-mon[75071]: pgmap v1993: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 5.2 KiB/s wr, 122 op/s
Dec 13 04:34:07 compute-0 neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529[283430]: [NOTICE]   (283434) : haproxy version is 2.8.14-c23fe91
Dec 13 04:34:07 compute-0 neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529[283430]: [NOTICE]   (283434) : path to executable is /usr/sbin/haproxy
Dec 13 04:34:07 compute-0 neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529[283430]: [WARNING]  (283434) : Exiting Master process...
Dec 13 04:34:07 compute-0 neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529[283430]: [ALERT]    (283434) : Current worker (283452) exited with code 143 (Terminated)
Dec 13 04:34:07 compute-0 neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529[283430]: [WARNING]  (283434) : All workers exited. Exiting... (0)
Dec 13 04:34:07 compute-0 systemd[1]: libpod-1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d.scope: Deactivated successfully.
Dec 13 04:34:07 compute-0 conmon[283430]: conmon 1e5cf89fb4cbcbf6a188 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d.scope/container/memory.events
Dec 13 04:34:07 compute-0 podman[284449]: 2025-12-13 04:34:07.906851752 +0000 UTC m=+0.057581607 container died 1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Dec 13 04:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d-userdata-shm.mount: Deactivated successfully.
Dec 13 04:34:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-616e003ab7ed0d3f691481dfa53bdd84feccd2a0241f0d6385054908e6a31a3f-merged.mount: Deactivated successfully.
Dec 13 04:34:07 compute-0 podman[284449]: 2025-12-13 04:34:07.950425877 +0000 UTC m=+0.101155742 container cleanup 1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Dec 13 04:34:07 compute-0 systemd[1]: libpod-conmon-1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d.scope: Deactivated successfully.
Dec 13 04:34:08 compute-0 podman[284479]: 2025-12-13 04:34:08.033403753 +0000 UTC m=+0.052139138 container remove 1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.041 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[9a04cc5c-58a8-4922-a194-71ad2c7f5fe9]: (4, ('Sat Dec 13 04:34:07 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529 (1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d)\n1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d\nSat Dec 13 04:34:07 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-35b97038-818f-4818-aa78-03e50d5de529 (1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d)\n1e5cf89fb4cbcbf6a18868dc74212bf72484bf7489c136ceb2a5dc908ff04e7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.043 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[e79edfd4-cdbe-4818-b611-9327519e3ed4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.044 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap35b97038-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.050 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:08 compute-0 kernel: tap35b97038-80: left promiscuous mode
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.052 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.056 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[45058a9f-099b-4a86-ab1b-f96c4c41a79f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.065 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.074 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[db45e13c-83bd-460e-b7ab-b770781d5211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.075 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[aca86372-9c5e-4c2b-9e76-1ef7d5cac891]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.082 243708 INFO nova.virt.libvirt.driver [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Deleting instance files /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1_del
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.083 243708 INFO nova.virt.libvirt.driver [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Deletion of /var/lib/nova/instances/0dd460c9-84b7-4ae0-a559-418f54258fe1_del complete
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.099 249645 DEBUG oslo.privsep.daemon [-] privsep: reply[d8c424d8-af20-4923-9a15-d801172ffcdc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 511647, 'reachable_time': 42979, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284493, 'error': None, 'target': 'ovnmeta-35b97038-818f-4818-aa78-03e50d5de529', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.102 155258 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-35b97038-818f-4818-aa78-03e50d5de529 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.103 155258 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2aaa36-4967-4d2e-9a9c-0b2d7232b1d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec 13 04:34:08 compute-0 systemd[1]: run-netns-ovnmeta\x2d35b97038\x2d818f\x2d4818\x2daa78\x2d03e50d5de529.mount: Deactivated successfully.
Dec 13 04:34:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 5.2 KiB/s wr, 122 op/s
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.173 243708 INFO nova.compute.manager [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Took 0.72 seconds to destroy the instance on the hypervisor.
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.174 243708 DEBUG oslo.service.loopingcall [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.176 243708 DEBUG nova.compute.manager [-] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.176 243708 DEBUG nova.network.neutron [-] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.308 243708 DEBUG nova.compute.manager [req-e710583e-64bb-4aa9-afa1-16608e78863a req-efb00ce5-33f2-47da-a82d-afdce60308ea 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-vif-unplugged-ef748f46-f14c-4151-878f-146280febd4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.310 243708 DEBUG oslo_concurrency.lockutils [req-e710583e-64bb-4aa9-afa1-16608e78863a req-efb00ce5-33f2-47da-a82d-afdce60308ea 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.310 243708 DEBUG oslo_concurrency.lockutils [req-e710583e-64bb-4aa9-afa1-16608e78863a req-efb00ce5-33f2-47da-a82d-afdce60308ea 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.310 243708 DEBUG oslo_concurrency.lockutils [req-e710583e-64bb-4aa9-afa1-16608e78863a req-efb00ce5-33f2-47da-a82d-afdce60308ea 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.311 243708 DEBUG nova.compute.manager [req-e710583e-64bb-4aa9-afa1-16608e78863a req-efb00ce5-33f2-47da-a82d-afdce60308ea 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] No waiting events found dispatching network-vif-unplugged-ef748f46-f14c-4151-878f-146280febd4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.311 243708 DEBUG nova.compute.manager [req-e710583e-64bb-4aa9-afa1-16608e78863a req-efb00ce5-33f2-47da-a82d-afdce60308ea 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-vif-unplugged-ef748f46-f14c-4151-878f-146280febd4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.574 154842 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:7b:9a', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '9a:80:14:c0:98:db'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec 13 04:34:08 compute-0 nova_compute[243704]: 2025-12-13 04:34:08.576 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:08 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:08.576 154842 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec 13 04:34:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e491 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e491 do_prune osdmap full prune enabled
Dec 13 04:34:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e492 e492: 3 total, 3 up, 3 in
Dec 13 04:34:08 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e492: 3 total, 3 up, 3 in
Dec 13 04:34:09 compute-0 nova_compute[243704]: 2025-12-13 04:34:09.447 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:09 compute-0 nova_compute[243704]: 2025-12-13 04:34:09.736 243708 DEBUG nova.network.neutron [-] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec 13 04:34:09 compute-0 nova_compute[243704]: 2025-12-13 04:34:09.756 243708 INFO nova.compute.manager [-] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Took 1.58 seconds to deallocate network for instance.
Dec 13 04:34:09 compute-0 ceph-mon[75071]: pgmap v1994: 305 pgs: 305 active+clean; 353 MiB data, 698 MiB used, 59 GiB / 60 GiB avail; 92 KiB/s rd, 5.2 KiB/s wr, 122 op/s
Dec 13 04:34:09 compute-0 ceph-mon[75071]: osdmap e492: 3 total, 3 up, 3 in
Dec 13 04:34:09 compute-0 nova_compute[243704]: 2025-12-13 04:34:09.804 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:09 compute-0 nova_compute[243704]: 2025-12-13 04:34:09.804 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:09 compute-0 nova_compute[243704]: 2025-12-13 04:34:09.870 243708 DEBUG oslo_concurrency.processutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:34:09 compute-0 podman[284494]: 2025-12-13 04:34:09.923113801 +0000 UTC m=+0.069287625 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:34:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 271 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 7.6 KiB/s wr, 175 op/s
Dec 13 04:34:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:34:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3624138297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.427 243708 DEBUG nova.compute.manager [req-81c6dc62-92bd-4d22-8d26-758503046423 req-94f5b748-aa72-439b-a157-8cb5cdb1b7d0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.428 243708 DEBUG oslo_concurrency.lockutils [req-81c6dc62-92bd-4d22-8d26-758503046423 req-94f5b748-aa72-439b-a157-8cb5cdb1b7d0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Acquiring lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.429 243708 DEBUG oslo_concurrency.lockutils [req-81c6dc62-92bd-4d22-8d26-758503046423 req-94f5b748-aa72-439b-a157-8cb5cdb1b7d0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.429 243708 DEBUG oslo_concurrency.lockutils [req-81c6dc62-92bd-4d22-8d26-758503046423 req-94f5b748-aa72-439b-a157-8cb5cdb1b7d0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.430 243708 DEBUG nova.compute.manager [req-81c6dc62-92bd-4d22-8d26-758503046423 req-94f5b748-aa72-439b-a157-8cb5cdb1b7d0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] No waiting events found dispatching network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.430 243708 WARNING nova.compute.manager [req-81c6dc62-92bd-4d22-8d26-758503046423 req-94f5b748-aa72-439b-a157-8cb5cdb1b7d0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received unexpected event network-vif-plugged-ef748f46-f14c-4151-878f-146280febd4e for instance with vm_state deleted and task_state None.
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.431 243708 DEBUG nova.compute.manager [req-81c6dc62-92bd-4d22-8d26-758503046423 req-94f5b748-aa72-439b-a157-8cb5cdb1b7d0 8148e11bdaf64808afed9efe8e9570ae 7487648fe18e4ff2982b0fa9368de7af - - default default] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Received event network-vif-deleted-ef748f46-f14c-4151-878f-146280febd4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.432 243708 DEBUG oslo_concurrency.processutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.441 243708 DEBUG nova.compute.provider_tree [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.461 243708 DEBUG nova.scheduler.client.report [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.490 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.686s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.541 243708 INFO nova.scheduler.client.report [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Deleted allocations for instance 0dd460c9-84b7-4ae0-a559-418f54258fe1
Dec 13 04:34:10 compute-0 nova_compute[243704]: 2025-12-13 04:34:10.644 243708 DEBUG oslo_concurrency.lockutils [None req-ae010985-252b-4c9e-a5ae-1bf18c864d55 8c473655a64a434b9a574fee057cb112 6229053e06554ebebd8cbafe5a6dbb81 - - default default] Lock "0dd460c9-84b7-4ae0-a559-418f54258fe1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.194s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3624138297' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:11 compute-0 ceph-mon[75071]: pgmap v1996: 305 pgs: 305 active+clean; 271 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 128 KiB/s rd, 7.6 KiB/s wr, 175 op/s
Dec 13 04:34:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 271 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 6.1 KiB/s wr, 142 op/s
Dec 13 04:34:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:12 compute-0 nova_compute[243704]: 2025-12-13 04:34:12.732 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:12 compute-0 podman[284534]: 2025-12-13 04:34:12.952972317 +0000 UTC m=+0.093255976 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd)
Dec 13 04:34:13 compute-0 ceph-mon[75071]: pgmap v1997: 305 pgs: 305 active+clean; 271 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 103 KiB/s rd, 6.1 KiB/s wr, 142 op/s
Dec 13 04:34:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e492 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e492 do_prune osdmap full prune enabled
Dec 13 04:34:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 e493: 3 total, 3 up, 3 in
Dec 13 04:34:13 compute-0 ceph-mon[75071]: log_channel(cluster) log [DBG] : osdmap e493: 3 total, 3 up, 3 in
Dec 13 04:34:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 78 op/s
Dec 13 04:34:14 compute-0 nova_compute[243704]: 2025-12-13 04:34:14.449 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:14 compute-0 ceph-mon[75071]: osdmap e493: 3 total, 3 up, 3 in
Dec 13 04:34:14 compute-0 ceph-mon[75071]: pgmap v1999: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 78 op/s
Dec 13 04:34:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 78 op/s
Dec 13 04:34:17 compute-0 ceph-mon[75071]: pgmap v2000: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 3.5 KiB/s wr, 78 op/s
Dec 13 04:34:17 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:17.579 154842 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9c764fca-6428-461c-aead-7964805997a5, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec 13 04:34:17 compute-0 nova_compute[243704]: 2025-12-13 04:34:17.735 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Dec 13 04:34:18 compute-0 nova_compute[243704]: 2025-12-13 04:34:18.462 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:18 compute-0 nova_compute[243704]: 2025-12-13 04:34:18.580 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:19 compute-0 ceph-mon[75071]: pgmap v2001: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 30 KiB/s rd, 1.2 KiB/s wr, 39 op/s
Dec 13 04:34:19 compute-0 nova_compute[243704]: 2025-12-13 04:34:19.452 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:21 compute-0 ceph-mon[75071]: pgmap v2002: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:22 compute-0 nova_compute[243704]: 2025-12-13 04:34:22.703 243708 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765600447.7024367, 0dd460c9-84b7-4ae0-a559-418f54258fe1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec 13 04:34:22 compute-0 nova_compute[243704]: 2025-12-13 04:34:22.704 243708 INFO nova.compute.manager [-] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] VM Stopped (Lifecycle Event)
Dec 13 04:34:22 compute-0 nova_compute[243704]: 2025-12-13 04:34:22.730 243708 DEBUG nova.compute.manager [None req-143b83ea-8249-4b03-b56c-5f08b606563e - - - - - -] [instance: 0dd460c9-84b7-4ae0-a559-418f54258fe1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec 13 04:34:22 compute-0 nova_compute[243704]: 2025-12-13 04:34:22.737 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:23 compute-0 ceph-mon[75071]: pgmap v2003: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:23 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:24 compute-0 nova_compute[243704]: 2025-12-13 04:34:24.453 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:25 compute-0 ceph-mon[75071]: pgmap v2004: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:26 compute-0 nova_compute[243704]: 2025-12-13 04:34:26.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:26 compute-0 nova_compute[243704]: 2025-12-13 04:34:26.898 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:26 compute-0 nova_compute[243704]: 2025-12-13 04:34:26.898 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:26 compute-0 nova_compute[243704]: 2025-12-13 04:34:26.899 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:26 compute-0 nova_compute[243704]: 2025-12-13 04:34:26.899 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:34:26 compute-0 nova_compute[243704]: 2025-12-13 04:34:26.899 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:34:27 compute-0 ceph-mon[75071]: pgmap v2005: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:27 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:34:27 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1935729577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.436 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.633 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.635 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4268MB free_disk=59.988048671744764GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, 
"label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.635 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.635 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.717 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.718 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.733 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:34:27 compute-0 nova_compute[243704]: 2025-12-13 04:34:27.757 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:34:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4199315453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:28 compute-0 nova_compute[243704]: 2025-12-13 04:34:28.289 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:34:28 compute-0 nova_compute[243704]: 2025-12-13 04:34:28.297 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:34:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1935729577' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:28 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4199315453' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:34:28 compute-0 nova_compute[243704]: 2025-12-13 04:34:28.419 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:34:28 compute-0 nova_compute[243704]: 2025-12-13 04:34:28.444 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:34:28 compute-0 nova_compute[243704]: 2025-12-13 04:34:28.445 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.810s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:29 compute-0 podman[284602]: 2025-12-13 04:34:29.00423351 +0000 UTC m=+0.137655263 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec 13 04:34:29 compute-0 ceph-mon[75071]: pgmap v2006: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:29 compute-0 nova_compute[243704]: 2025-12-13 04:34:29.441 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:29 compute-0 nova_compute[243704]: 2025-12-13 04:34:29.455 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:29 compute-0 nova_compute[243704]: 2025-12-13 04:34:29.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:29 compute-0 nova_compute[243704]: 2025-12-13 04:34:29.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:34:29 compute-0 nova_compute[243704]: 2025-12-13 04:34:29.876 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:34:29 compute-0 nova_compute[243704]: 2025-12-13 04:34:29.889 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:34:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:30 compute-0 nova_compute[243704]: 2025-12-13 04:34:30.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:30 compute-0 nova_compute[243704]: 2025-12-13 04:34:30.878 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:31 compute-0 ceph-mon[75071]: pgmap v2007: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:32 compute-0 nova_compute[243704]: 2025-12-13 04:34:32.777 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:32 compute-0 nova_compute[243704]: 2025-12-13 04:34:32.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:33 compute-0 ceph-mon[75071]: pgmap v2008: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:33 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.917654) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600473917752, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1078, "num_deletes": 251, "total_data_size": 1489497, "memory_usage": 1510200, "flush_reason": "Manual Compaction"}
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600473929272, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 955418, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 39005, "largest_seqno": 40082, "table_properties": {"data_size": 951063, "index_size": 1879, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 11406, "raw_average_key_size": 21, "raw_value_size": 941664, "raw_average_value_size": 1734, "num_data_blocks": 84, "num_entries": 543, "num_filter_entries": 543, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765600386, "oldest_key_time": 1765600386, "file_creation_time": 1765600473, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 11684 microseconds, and 7061 cpu microseconds.
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.929335) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 955418 bytes OK
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.929364) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.931106) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.931139) EVENT_LOG_v1 {"time_micros": 1765600473931133, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.931160) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1484436, prev total WAL file size 1484436, number of live WAL files 2.
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.932018) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031323533' seq:72057594037927935, type:22 .. '6D6772737461740031353034' seq:0, type:0; will stop at (end)
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(933KB)], [80(11MB)]
Dec 13 04:34:33 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600473932083, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 13535571, "oldest_snapshot_seqno": -1}
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 7261 keys, 10661533 bytes, temperature: kUnknown
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600474028317, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 10661533, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10609398, "index_size": 32887, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18181, "raw_key_size": 182726, "raw_average_key_size": 25, "raw_value_size": 10475700, "raw_average_value_size": 1442, "num_data_blocks": 1307, "num_entries": 7261, "num_filter_entries": 7261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765600473, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.028763) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 10661533 bytes
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.031144) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.4 rd, 110.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.0 +0.0 blob) out(10.2 +0.0 blob), read-write-amplify(25.3) write-amplify(11.2) OK, records in: 7744, records dropped: 483 output_compression: NoCompression
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.031178) EVENT_LOG_v1 {"time_micros": 1765600474031163, "job": 46, "event": "compaction_finished", "compaction_time_micros": 96398, "compaction_time_cpu_micros": 29177, "output_level": 6, "num_output_files": 1, "total_output_size": 10661533, "num_input_records": 7744, "num_output_records": 7261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600474031736, "job": 46, "event": "table_file_deletion", "file_number": 82}
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600474035878, "job": 46, "event": "table_file_deletion", "file_number": 80}
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:33.931897) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.035984) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.035991) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.035996) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.036000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:34:34 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:34:34.036005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:34:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:34 compute-0 nova_compute[243704]: 2025-12-13 04:34:34.458 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:34 compute-0 nova_compute[243704]: 2025-12-13 04:34:34.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:34 compute-0 ceph-mon[75071]: pgmap v2009: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:35.114 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:34:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:35.115 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:34:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:34:35.116 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:34:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:36 compute-0 nova_compute[243704]: 2025-12-13 04:34:36.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:36 compute-0 nova_compute[243704]: 2025-12-13 04:34:36.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:36 compute-0 nova_compute[243704]: 2025-12-13 04:34:36.877 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:34:37 compute-0 ceph-mon[75071]: pgmap v2010: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:37 compute-0 nova_compute[243704]: 2025-12-13 04:34:37.781 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:34:38 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 33K writes, 123K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 12K syncs, 2.63 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4327 writes, 12K keys, 4327 commit groups, 1.0 writes per commit group, ingest: 12.35 MB, 0.02 MB/s
                                           Interval WAL: 4327 writes, 1891 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:34:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:38 compute-0 nova_compute[243704]: 2025-12-13 04:34:38.873 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:34:38 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:39 compute-0 sudo[284628]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:34:39 compute-0 sudo[284628]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:39 compute-0 sudo[284628]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:39 compute-0 sudo[284653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:34:39 compute-0 sudo[284653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:39 compute-0 nova_compute[243704]: 2025-12-13 04:34:39.459 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:39 compute-0 ceph-mon[75071]: pgmap v2011: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:39 compute-0 sudo[284653]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:34:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:34:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:34:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:34:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:34:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:34:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:34:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:34:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:34:39 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:34:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:34:39 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:34:40 compute-0 sudo[284710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:34:40 compute-0 sudo[284710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:40 compute-0 sudo[284710]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:40 compute-0 sudo[284736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:34:40 compute-0 sudo[284736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:40 compute-0 podman[284734]: 2025-12-13 04:34:40.152994635 +0000 UTC m=+0.083707617 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec 13 04:34:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:40 compute-0 podman[284792]: 2025-12-13 04:34:40.399081606 +0000 UTC m=+0.030462619 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:34:40
Dec 13 04:34:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:34:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:34:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.meta', 'volumes', 'images', 'backups']
Dec 13 04:34:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:34:40 compute-0 podman[284792]: 2025-12-13 04:34:40.700885821 +0000 UTC m=+0.332266774 container create d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_benz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:34:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:34:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:34:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:34:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:34:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:34:40 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:34:40 compute-0 systemd[1]: Started libpod-conmon-d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6.scope.
Dec 13 04:34:40 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:34:40 compute-0 podman[284792]: 2025-12-13 04:34:40.946003036 +0000 UTC m=+0.577384029 container init d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_benz, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Dec 13 04:34:40 compute-0 podman[284792]: 2025-12-13 04:34:40.960412597 +0000 UTC m=+0.591793600 container start d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_benz, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Dec 13 04:34:40 compute-0 podman[284792]: 2025-12-13 04:34:40.965928447 +0000 UTC m=+0.597309440 container attach d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_benz, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec 13 04:34:40 compute-0 nifty_benz[284808]: 167 167
Dec 13 04:34:40 compute-0 systemd[1]: libpod-d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6.scope: Deactivated successfully.
Dec 13 04:34:41 compute-0 podman[284813]: 2025-12-13 04:34:41.042355895 +0000 UTC m=+0.049736993 container died d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:34:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2daec5cb2798edd5655b2a46168242cdf88231602fdfbfb69994b9f12de25b3c-merged.mount: Deactivated successfully.
Dec 13 04:34:41 compute-0 podman[284813]: 2025-12-13 04:34:41.156074117 +0000 UTC m=+0.163455165 container remove d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle)
Dec 13 04:34:41 compute-0 systemd[1]: libpod-conmon-d17155766e195ddae3659f06934367fd7ef90532772e2344851d6052371c78a6.scope: Deactivated successfully.
Dec 13 04:34:41 compute-0 podman[284835]: 2025-12-13 04:34:41.40668422 +0000 UTC m=+0.044873861 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:41 compute-0 podman[284835]: 2025-12-13 04:34:41.547466019 +0000 UTC m=+0.185655610 container create ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_johnson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:34:41 compute-0 systemd[1]: Started libpod-conmon-ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83.scope.
Dec 13 04:34:41 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8c2d2145e55632640f165ebd786ddd01c65a70192a699acfdba760c97e62b8c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8c2d2145e55632640f165ebd786ddd01c65a70192a699acfdba760c97e62b8c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8c2d2145e55632640f165ebd786ddd01c65a70192a699acfdba760c97e62b8c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8c2d2145e55632640f165ebd786ddd01c65a70192a699acfdba760c97e62b8c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8c2d2145e55632640f165ebd786ddd01c65a70192a699acfdba760c97e62b8c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:41 compute-0 podman[284835]: 2025-12-13 04:34:41.636639092 +0000 UTC m=+0.274828733 container init ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_johnson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:34:41 compute-0 podman[284835]: 2025-12-13 04:34:41.653333286 +0000 UTC m=+0.291522887 container start ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Dec 13 04:34:41 compute-0 podman[284835]: 2025-12-13 04:34:41.657358636 +0000 UTC m=+0.295548237 container attach ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_johnson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Dec 13 04:34:41 compute-0 ceph-mon[75071]: pgmap v2012: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:42 compute-0 modest_johnson[284852]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:34:42 compute-0 modest_johnson[284852]: --> All data devices are unavailable
Dec 13 04:34:42 compute-0 systemd[1]: libpod-ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83.scope: Deactivated successfully.
Dec 13 04:34:42 compute-0 podman[284835]: 2025-12-13 04:34:42.248251702 +0000 UTC m=+0.886441333 container died ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_johnson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:34:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:34:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:34:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-b8c2d2145e55632640f165ebd786ddd01c65a70192a699acfdba760c97e62b8c-merged.mount: Deactivated successfully.
Dec 13 04:34:42 compute-0 podman[284835]: 2025-12-13 04:34:42.445130554 +0000 UTC m=+1.083320125 container remove ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec 13 04:34:42 compute-0 systemd[1]: libpod-conmon-ccd818c1f13867d56b4e82a1fb9172cf28db6699d154bd491f249572f1205c83.scope: Deactivated successfully.
Dec 13 04:34:42 compute-0 sudo[284736]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:42 compute-0 sudo[284885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:34:42 compute-0 sudo[284885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:42 compute-0 sudo[284885]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:42 compute-0 sudo[284910]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:34:42 compute-0 sudo[284910]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:42 compute-0 nova_compute[243704]: 2025-12-13 04:34:42.813 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:43 compute-0 podman[284947]: 2025-12-13 04:34:42.921218888 +0000 UTC m=+0.027312724 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:34:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:34:43 compute-0 ceph-mon[75071]: pgmap v2013: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:43 compute-0 podman[284947]: 2025-12-13 04:34:43.57889448 +0000 UTC m=+0.684988336 container create 2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bhaskara, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Dec 13 04:34:43 compute-0 systemd[1]: Started libpod-conmon-2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1.scope.
Dec 13 04:34:43 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:34:43 compute-0 podman[284947]: 2025-12-13 04:34:43.729433882 +0000 UTC m=+0.835527788 container init 2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:34:43 compute-0 podman[284947]: 2025-12-13 04:34:43.742680022 +0000 UTC m=+0.848773878 container start 2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Dec 13 04:34:43 compute-0 podman[284947]: 2025-12-13 04:34:43.748558012 +0000 UTC m=+0.854651938 container attach 2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Dec 13 04:34:43 compute-0 stupefied_bhaskara[284965]: 167 167
Dec 13 04:34:43 compute-0 systemd[1]: libpod-2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1.scope: Deactivated successfully.
Dec 13 04:34:43 compute-0 podman[284947]: 2025-12-13 04:34:43.751779119 +0000 UTC m=+0.857872945 container died 2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bhaskara, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:34:43 compute-0 podman[284961]: 2025-12-13 04:34:43.775092644 +0000 UTC m=+0.152187750 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd)
Dec 13 04:34:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbc2b39ba9a3285bde547f19d70e4357a23bf4555f5ae510b6c67eb6de38c7c5-merged.mount: Deactivated successfully.
Dec 13 04:34:44 compute-0 podman[284947]: 2025-12-13 04:34:44.056961327 +0000 UTC m=+1.163055173 container remove 2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stupefied_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Dec 13 04:34:44 compute-0 systemd[1]: libpod-conmon-2b82c5da12c5717725983be8449c5561edee99f52d845d443ea751ae65449fd1.scope: Deactivated successfully.
Dec 13 04:34:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:44 compute-0 podman[285008]: 2025-12-13 04:34:44.30751912 +0000 UTC m=+0.047505763 container create c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:34:44 compute-0 systemd[1]: Started libpod-conmon-c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b.scope.
Dec 13 04:34:44 compute-0 podman[285008]: 2025-12-13 04:34:44.291074762 +0000 UTC m=+0.031061425 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:44 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca265f81214a48f9a164aa264b2034584d4c875a98ddfe66baa288179b097bf3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca265f81214a48f9a164aa264b2034584d4c875a98ddfe66baa288179b097bf3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca265f81214a48f9a164aa264b2034584d4c875a98ddfe66baa288179b097bf3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca265f81214a48f9a164aa264b2034584d4c875a98ddfe66baa288179b097bf3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:44 compute-0 nova_compute[243704]: 2025-12-13 04:34:44.462 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:44 compute-0 podman[285008]: 2025-12-13 04:34:44.531636693 +0000 UTC m=+0.271623426 container init c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Dec 13 04:34:44 compute-0 podman[285008]: 2025-12-13 04:34:44.538103339 +0000 UTC m=+0.278090022 container start c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bardeen, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:34:44 compute-0 podman[285008]: 2025-12-13 04:34:44.553278882 +0000 UTC m=+0.293265625 container attach c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bardeen, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:34:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:34:44 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.6 total, 600.0 interval
                                           Cumulative writes: 31K writes, 120K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.69 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3926 writes, 12K keys, 3926 commit groups, 1.0 writes per commit group, ingest: 16.91 MB, 0.03 MB/s
                                           Interval WAL: 3926 writes, 1702 syncs, 2.31 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]: {
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:     "0": [
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:         {
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "devices": [
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "/dev/loop3"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             ],
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_name": "ceph_lv0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_size": "21470642176",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "name": "ceph_lv0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "tags": {
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cluster_name": "ceph",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.crush_device_class": "",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.encrypted": "0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.objectstore": "bluestore",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osd_id": "0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.type": "block",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.vdo": "0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.with_tpm": "0"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             },
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "type": "block",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "vg_name": "ceph_vg0"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:         }
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:     ],
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:     "1": [
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:         {
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "devices": [
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "/dev/loop4"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             ],
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_name": "ceph_lv1",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_size": "21470642176",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "name": "ceph_lv1",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "tags": {
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cluster_name": "ceph",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.crush_device_class": "",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.encrypted": "0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.objectstore": "bluestore",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osd_id": "1",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.type": "block",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.vdo": "0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.with_tpm": "0"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             },
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "type": "block",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "vg_name": "ceph_vg1"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:         }
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:     ],
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:     "2": [
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:         {
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "devices": [
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "/dev/loop5"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             ],
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_name": "ceph_lv2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_size": "21470642176",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "name": "ceph_lv2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "tags": {
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.cluster_name": "ceph",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.crush_device_class": "",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.encrypted": "0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.objectstore": "bluestore",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osd_id": "2",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.type": "block",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.vdo": "0",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:                 "ceph.with_tpm": "0"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             },
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "type": "block",
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:             "vg_name": "ceph_vg2"
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:         }
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]:     ]
Dec 13 04:34:44 compute-0 hungry_bardeen[285025]: }
Dec 13 04:34:44 compute-0 systemd[1]: libpod-c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b.scope: Deactivated successfully.
Dec 13 04:34:44 compute-0 podman[285008]: 2025-12-13 04:34:44.867446133 +0000 UTC m=+0.607432846 container died c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bardeen, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:34:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca265f81214a48f9a164aa264b2034584d4c875a98ddfe66baa288179b097bf3-merged.mount: Deactivated successfully.
Dec 13 04:34:45 compute-0 podman[285008]: 2025-12-13 04:34:45.029170289 +0000 UTC m=+0.769156932 container remove c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_bardeen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:34:45 compute-0 systemd[1]: libpod-conmon-c29326eb5b080cdc3ccbd72321d86bda0dcaebc61e71eb6f9812d145e912847b.scope: Deactivated successfully.
Dec 13 04:34:45 compute-0 sudo[284910]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:45 compute-0 sudo[285046]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:34:45 compute-0 sudo[285046]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:45 compute-0 sudo[285046]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:45 compute-0 sudo[285071]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:34:45 compute-0 sudo[285071]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:45 compute-0 podman[285109]: 2025-12-13 04:34:45.560335361 +0000 UTC m=+0.052072037 container create 9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:34:45 compute-0 systemd[1]: Started libpod-conmon-9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb.scope.
Dec 13 04:34:45 compute-0 ceph-mon[75071]: pgmap v2014: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:45 compute-0 podman[285109]: 2025-12-13 04:34:45.538112047 +0000 UTC m=+0.029848723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:34:45 compute-0 podman[285109]: 2025-12-13 04:34:45.64929272 +0000 UTC m=+0.141029376 container init 9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_stonebraker, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:34:45 compute-0 podman[285109]: 2025-12-13 04:34:45.661091641 +0000 UTC m=+0.152828307 container start 9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Dec 13 04:34:45 compute-0 podman[285109]: 2025-12-13 04:34:45.664479072 +0000 UTC m=+0.156215708 container attach 9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_stonebraker, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:34:45 compute-0 optimistic_stonebraker[285125]: 167 167
Dec 13 04:34:45 compute-0 systemd[1]: libpod-9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb.scope: Deactivated successfully.
Dec 13 04:34:45 compute-0 podman[285109]: 2025-12-13 04:34:45.667878225 +0000 UTC m=+0.159614861 container died 9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec 13 04:34:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-393d3fa78f56dc43cd976a6dcd1621ddc84236488f562cbd18bdc3aee17dd822-merged.mount: Deactivated successfully.
Dec 13 04:34:45 compute-0 podman[285109]: 2025-12-13 04:34:45.719299143 +0000 UTC m=+0.211035769 container remove 9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:34:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:34:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2447419550' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:34:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2447419550' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:45 compute-0 systemd[1]: libpod-conmon-9ddcc4b6588d24bfa4ab69a706fc81ee860d5238fa9f006661245951a13fc1fb.scope: Deactivated successfully.
Dec 13 04:34:45 compute-0 podman[285149]: 2025-12-13 04:34:45.894317721 +0000 UTC m=+0.056811585 container create f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:34:45 compute-0 systemd[1]: Started libpod-conmon-f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868.scope.
Dec 13 04:34:45 compute-0 podman[285149]: 2025-12-13 04:34:45.873813754 +0000 UTC m=+0.036307608 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:34:45 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/616b41fee147a598b7d9ac6c63034d87a2ded9a42328da4fee83b085920a4346/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/616b41fee147a598b7d9ac6c63034d87a2ded9a42328da4fee83b085920a4346/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/616b41fee147a598b7d9ac6c63034d87a2ded9a42328da4fee83b085920a4346/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/616b41fee147a598b7d9ac6c63034d87a2ded9a42328da4fee83b085920a4346/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:34:46 compute-0 podman[285149]: 2025-12-13 04:34:46.00495941 +0000 UTC m=+0.167453244 container init f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Dec 13 04:34:46 compute-0 podman[285149]: 2025-12-13 04:34:46.01635606 +0000 UTC m=+0.178849884 container start f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True)
Dec 13 04:34:46 compute-0 podman[285149]: 2025-12-13 04:34:46.020157283 +0000 UTC m=+0.182651097 container attach f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:34:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2447419550' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:34:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/2447419550' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:34:46 compute-0 lvm[285245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:34:46 compute-0 lvm[285245]: VG ceph_vg0 finished
Dec 13 04:34:46 compute-0 lvm[285244]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:34:46 compute-0 lvm[285244]: VG ceph_vg1 finished
Dec 13 04:34:46 compute-0 lvm[285247]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:34:46 compute-0 lvm[285247]: VG ceph_vg2 finished
Dec 13 04:34:46 compute-0 clever_kepler[285166]: {}
Dec 13 04:34:46 compute-0 systemd[1]: libpod-f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868.scope: Deactivated successfully.
Dec 13 04:34:46 compute-0 systemd[1]: libpod-f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868.scope: Consumed 1.515s CPU time.
Dec 13 04:34:46 compute-0 podman[285149]: 2025-12-13 04:34:46.925319692 +0000 UTC m=+1.087813626 container died f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:34:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-616b41fee147a598b7d9ac6c63034d87a2ded9a42328da4fee83b085920a4346-merged.mount: Deactivated successfully.
Dec 13 04:34:46 compute-0 podman[285149]: 2025-12-13 04:34:46.992354645 +0000 UTC m=+1.154848469 container remove f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Dec 13 04:34:47 compute-0 systemd[1]: libpod-conmon-f9bfa8d2e27b87006a8828d939529ad2c263eae9ef111be190d5fdc3314ae868.scope: Deactivated successfully.
Dec 13 04:34:47 compute-0 sudo[285071]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:34:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:34:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:34:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:34:47 compute-0 sudo[285262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:34:47 compute-0 sudo[285262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:34:47 compute-0 sudo[285262]: pam_unix(sudo:session): session closed for user root
Dec 13 04:34:47 compute-0 ceph-mon[75071]: pgmap v2015: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:34:47 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:34:47 compute-0 nova_compute[243704]: 2025-12-13 04:34:47.817 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:49 compute-0 ceph-mon[75071]: pgmap v2016: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:49 compute-0 nova_compute[243704]: 2025-12-13 04:34:49.465 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:34:50 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.6 total, 600.0 interval
                                           Cumulative writes: 23K writes, 92K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 23K writes, 8414 syncs, 2.78 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2976 writes, 8358 keys, 2976 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s
                                           Interval WAL: 2976 writes, 1338 syncs, 2.22 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:34:51 compute-0 ceph-mon[75071]: pgmap v2017: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.87706149250857e-06 of space, bias 1.0, pg target 0.001163118447752571 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029119565445456007 of space, bias 1.0, pg target 0.8735869633636802 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4623700694785716e-06 of space, bias 1.0, pg target 0.0007387110208435715 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667026483260414 of space, bias 1.0, pg target 0.20001079449781242 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3115583487985934e-06 of space, bias 4.0, pg target 0.001573870018558312 quantized to 16 (current 16)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:34:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:34:52 compute-0 nova_compute[243704]: 2025-12-13 04:34:52.820 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:53 compute-0 ceph-mon[75071]: pgmap v2018: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:54 compute-0 nova_compute[243704]: 2025-12-13 04:34:54.468 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:54 compute-0 ovn_controller[145204]: 2025-12-13T04:34:54Z|00286|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec 13 04:34:55 compute-0 ceph-mon[75071]: pgmap v2019: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:57 compute-0 ceph-mgr[75360]: [devicehealth INFO root] Check health
Dec 13 04:34:57 compute-0 nova_compute[243704]: 2025-12-13 04:34:57.822 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:34:57 compute-0 ceph-mon[75071]: pgmap v2020: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:34:58 compute-0 ceph-mon[75071]: pgmap v2021: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:34:59 compute-0 nova_compute[243704]: 2025-12-13 04:34:59.470 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:00 compute-0 podman[285287]: 2025-12-13 04:35:00.00332503 +0000 UTC m=+0.134039565 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec 13 04:35:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:01 compute-0 ceph-mon[75071]: pgmap v2022: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:02 compute-0 nova_compute[243704]: 2025-12-13 04:35:02.877 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:04 compute-0 ceph-mon[75071]: pgmap v2023: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:04 compute-0 nova_compute[243704]: 2025-12-13 04:35:04.472 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:05 compute-0 ceph-mon[75071]: pgmap v2024: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:06 compute-0 sshd-session[285311]: Received disconnect from 193.46.255.159 port 46630:11:  [preauth]
Dec 13 04:35:06 compute-0 sshd-session[285311]: Disconnected from authenticating user root 193.46.255.159 port 46630 [preauth]
Dec 13 04:35:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:07 compute-0 nova_compute[243704]: 2025-12-13 04:35:07.880 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:07 compute-0 ceph-mon[75071]: pgmap v2025: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:08 compute-0 ceph-mon[75071]: pgmap v2026: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:09 compute-0 nova_compute[243704]: 2025-12-13 04:35:09.475 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:10 compute-0 podman[285313]: 2025-12-13 04:35:10.952223269 +0000 UTC m=+0.077737965 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:35:11 compute-0 ceph-mon[75071]: pgmap v2027: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:12 compute-0 nova_compute[243704]: 2025-12-13 04:35:12.883 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:12 compute-0 ceph-mon[75071]: pgmap v2028: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:13 compute-0 podman[285334]: 2025-12-13 04:35:13.962969986 +0000 UTC m=+0.093602617 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec 13 04:35:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:14 compute-0 nova_compute[243704]: 2025-12-13 04:35:14.477 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:15 compute-0 ceph-mon[75071]: pgmap v2029: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:17 compute-0 ceph-mon[75071]: pgmap v2030: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:17 compute-0 nova_compute[243704]: 2025-12-13 04:35:17.909 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:19 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:19 compute-0 ceph-mon[75071]: pgmap v2031: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:19 compute-0 nova_compute[243704]: 2025-12-13 04:35:19.478 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:20 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:21 compute-0 ceph-mon[75071]: pgmap v2032: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:22 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:22 compute-0 nova_compute[243704]: 2025-12-13 04:35:22.911 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:23 compute-0 ceph-mon[75071]: pgmap v2033: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:24 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:24 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:24 compute-0 nova_compute[243704]: 2025-12-13 04:35:24.480 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:25 compute-0 ceph-mon[75071]: pgmap v2034: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:26 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:27 compute-0 ceph-mon[75071]: pgmap v2035: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:27 compute-0 nova_compute[243704]: 2025-12-13 04:35:27.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:27 compute-0 nova_compute[243704]: 2025-12-13 04:35:27.903 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:35:27 compute-0 nova_compute[243704]: 2025-12-13 04:35:27.903 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:35:27 compute-0 nova_compute[243704]: 2025-12-13 04:35:27.904 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:35:27 compute-0 nova_compute[243704]: 2025-12-13 04:35:27.904 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec 13 04:35:27 compute-0 nova_compute[243704]: 2025-12-13 04:35:27.905 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:35:27 compute-0 nova_compute[243704]: 2025-12-13 04:35:27.927 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:28 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:28 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:35:28 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4058308309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.445 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.698 243708 WARNING nova.virt.libvirt.driver [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.700 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4302MB free_disk=59.988048671744764GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.700 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.701 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.855 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.855 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.880 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing inventories for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.896 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating ProviderTree inventory for provider 36c11063-1199-4cbe-b01b-7185aae56a2a from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.897 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Updating inventory in ProviderTree for provider 36c11063-1199-4cbe-b01b-7185aae56a2a with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.912 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing aggregate associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.936 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Refreshing trait associations for resource provider 36c11063-1199-4cbe-b01b-7185aae56a2a, traits: COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NODE,HW_CPU_X86_BMI2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_ACCELERATORS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_CLMUL,HW_CPU_X86_AESNI,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSSE3,COMPUTE_TRUSTED_CERTS,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_BMI,COMPUTE_STORAGE_BUS_FDC,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_NE2K_PCI,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SVM,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SSE42,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX,HW_CPU_X86_SSE,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_MMX,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSE2,HW_CPU_X86_ABM,COMPUTE_STORAGE_BUS_USB,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_DEVICE_TAGGING,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_AVX2,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE41,HW_CPU_X86_FMA3,HW_CPU_X86_SSE4A _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec 13 04:35:28 compute-0 nova_compute[243704]: 2025-12-13 04:35:28.953 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec 13 04:35:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:29 compute-0 ceph-mon[75071]: pgmap v2036: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:29 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4058308309' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:35:29 compute-0 nova_compute[243704]: 2025-12-13 04:35:29.482 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:29 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Dec 13 04:35:29 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1759609272' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:35:29 compute-0 nova_compute[243704]: 2025-12-13 04:35:29.549 243708 DEBUG oslo_concurrency.processutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec 13 04:35:29 compute-0 nova_compute[243704]: 2025-12-13 04:35:29.556 243708 DEBUG nova.compute.provider_tree [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed in ProviderTree for provider: 36c11063-1199-4cbe-b01b-7185aae56a2a update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec 13 04:35:29 compute-0 nova_compute[243704]: 2025-12-13 04:35:29.575 243708 DEBUG nova.scheduler.client.report [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Inventory has not changed for provider 36c11063-1199-4cbe-b01b-7185aae56a2a based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec 13 04:35:29 compute-0 nova_compute[243704]: 2025-12-13 04:35:29.578 243708 DEBUG nova.compute.resource_tracker [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec 13 04:35:29 compute-0 nova_compute[243704]: 2025-12-13 04:35:29.578 243708 DEBUG oslo_concurrency.lockutils [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:35:30 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2037: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:30 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1759609272' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Dec 13 04:35:30 compute-0 nova_compute[243704]: 2025-12-13 04:35:30.575 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:30 compute-0 nova_compute[243704]: 2025-12-13 04:35:30.575 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:30 compute-0 nova_compute[243704]: 2025-12-13 04:35:30.576 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec 13 04:35:30 compute-0 nova_compute[243704]: 2025-12-13 04:35:30.576 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec 13 04:35:30 compute-0 nova_compute[243704]: 2025-12-13 04:35:30.591 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec 13 04:35:30 compute-0 nova_compute[243704]: 2025-12-13 04:35:30.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:30 compute-0 podman[285398]: 2025-12-13 04:35:30.977800789 +0000 UTC m=+0.112281825 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec 13 04:35:31 compute-0 ceph-mon[75071]: pgmap v2037: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:32 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:32 compute-0 nova_compute[243704]: 2025-12-13 04:35:32.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:32 compute-0 nova_compute[243704]: 2025-12-13 04:35:32.983 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:32 compute-0 ceph-mon[75071]: pgmap v2038: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:33 compute-0 nova_compute[243704]: 2025-12-13 04:35:33.879 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:34 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:34 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:34 compute-0 nova_compute[243704]: 2025-12-13 04:35:34.484 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:35:35.116 154842 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec 13 04:35:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:35:35.117 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec 13 04:35:35 compute-0 ovn_metadata_agent[154810]: 2025-12-13 04:35:35.117 154842 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec 13 04:35:35 compute-0 ceph-mon[75071]: pgmap v2039: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:36 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2040: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:36 compute-0 nova_compute[243704]: 2025-12-13 04:35:36.876 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:36 compute-0 nova_compute[243704]: 2025-12-13 04:35:36.877 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:36 compute-0 nova_compute[243704]: 2025-12-13 04:35:36.878 243708 DEBUG nova.compute.manager [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec 13 04:35:37 compute-0 ceph-mon[75071]: pgmap v2040: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:37 compute-0 nova_compute[243704]: 2025-12-13 04:35:37.878 243708 DEBUG oslo_service.periodic_task [None req-3778fd1f-0803-4567-aa84-7d034308571d - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec 13 04:35:37 compute-0 nova_compute[243704]: 2025-12-13 04:35:37.985 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:38 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:38 compute-0 sshd-session[285424]: Accepted publickey for zuul from 192.168.122.10 port 35648 ssh2: ECDSA SHA256:gTlqJAr50xkxS6KaZyqK+bSgkAys+fEy4kuYx4sZurA
Dec 13 04:35:38 compute-0 systemd-logind[796]: New session 52 of user zuul.
Dec 13 04:35:38 compute-0 systemd[1]: Started Session 52 of User zuul.
Dec 13 04:35:39 compute-0 sshd-session[285424]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Dec 13 04:35:39 compute-0 sudo[285428]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Dec 13 04:35:39 compute-0 sudo[285428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Dec 13 04:35:39 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:39 compute-0 ceph-mon[75071]: pgmap v2041: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:39 compute-0 nova_compute[243704]: 2025-12-13 04:35:39.486 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:40 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Optimize plan auto_2025-12-13_04:35:40
Dec 13 04:35:40 compute-0 ceph-mgr[75360]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Dec 13 04:35:40 compute-0 ceph-mgr[75360]: [balancer INFO root] do_upmap
Dec 13 04:35:40 compute-0 ceph-mgr[75360]: [balancer INFO root] pools ['images', 'default.rgw.control', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log']
Dec 13 04:35:40 compute-0 ceph-mgr[75360]: [balancer INFO root] prepared 0/10 upmap changes
Dec 13 04:35:41 compute-0 ceph-mon[75071]: pgmap v2042: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:41 compute-0 podman[285581]: 2025-12-13 04:35:41.948688046 +0000 UTC m=+0.086956345 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19198 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:35:42 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19200 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:42 compute-0 nova_compute[243704]: 2025-12-13 04:35:42.987 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: vms, start_after=
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: volumes, start_after=
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: backups, start_after=
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:35:43 compute-0 ceph-mgr[75360]: [rbd_support INFO root] load_schedules: images, start_after=
Dec 13 04:35:43 compute-0 ceph-mon[75071]: pgmap v2043: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:43 compute-0 ceph-mon[75071]: from='client.19198 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:43 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Dec 13 04:35:43 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/5015574' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 04:35:44 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:44 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:44 compute-0 podman[285698]: 2025-12-13 04:35:44.288636795 +0000 UTC m=+0.082541155 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec 13 04:35:44 compute-0 ceph-mon[75071]: from='client.19200 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:44 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/5015574' entity='client.admin' cmd={"prefix": "status"} : dispatch
Dec 13 04:35:44 compute-0 nova_compute[243704]: 2025-12-13 04:35:44.489 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:45 compute-0 ceph-mon[75071]: pgmap v2044: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Dec 13 04:35:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1015089110' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:35:45 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Dec 13 04:35:45 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1015089110' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:35:46 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1015089110' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Dec 13 04:35:46 compute-0 ceph-mon[75071]: from='client.? 192.168.122.10:0/1015089110' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Dec 13 04:35:47 compute-0 sudo[285770]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:35:47 compute-0 sudo[285770]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:47 compute-0 sudo[285770]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:47 compute-0 ceph-mon[75071]: pgmap v2045: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:47 compute-0 sudo[285795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Dec 13 04:35:47 compute-0 sudo[285795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:47 compute-0 sudo[285795]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:35:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:47 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:35:47 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:47 compute-0 sudo[285841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:35:47 compute-0 sudo[285841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:47 compute-0 sudo[285841]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:47 compute-0 sudo[285866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Dec 13 04:35:47 compute-0 sudo[285866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:47 compute-0 nova_compute[243704]: 2025-12-13 04:35:47.989 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:48 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:48 compute-0 sudo[285866]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:35:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Dec 13 04:35:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Dec 13 04:35:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Dec 13 04:35:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Dec 13 04:35:48 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:35:48 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:48 compute-0 sudo[285926]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:35:48 compute-0 sudo[285926]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:48 compute-0 sudo[285926]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Dec 13 04:35:48 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:48 compute-0 sudo[285954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Dec 13 04:35:48 compute-0 sudo[285954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:49 compute-0 ovs-vsctl[286004]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Dec 13 04:35:49 compute-0 podman[286016]: 2025-12-13 04:35:49.134407563 +0000 UTC m=+0.029055871 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:49 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:49 compute-0 podman[286016]: 2025-12-13 04:35:49.394654449 +0000 UTC m=+0.289302727 container create a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.399650) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600549399707, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 894, "num_deletes": 259, "total_data_size": 1226144, "memory_usage": 1244784, "flush_reason": "Manual Compaction"}
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600549415102, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 1197718, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40083, "largest_seqno": 40976, "table_properties": {"data_size": 1193263, "index_size": 2107, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9747, "raw_average_key_size": 19, "raw_value_size": 1184202, "raw_average_value_size": 2326, "num_data_blocks": 94, "num_entries": 509, "num_filter_entries": 509, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765600474, "oldest_key_time": 1765600474, "file_creation_time": 1765600549, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 15523 microseconds, and 7919 cpu microseconds.
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.415166) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 1197718 bytes OK
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.415193) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.417494) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.417519) EVENT_LOG_v1 {"time_micros": 1765600549417511, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.417544) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 1221739, prev total WAL file size 1221739, number of live WAL files 2.
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.418570) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323630' seq:72057594037927935, type:22 .. '6C6F676D0031353135' seq:0, type:0; will stop at (end)
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(1169KB)], [83(10MB)]
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600549418625, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 11859251, "oldest_snapshot_seqno": -1}
Dec 13 04:35:49 compute-0 systemd[1]: Started libpod-conmon-a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec.scope.
Dec 13 04:35:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:35:49 compute-0 nova_compute[243704]: 2025-12-13 04:35:49.491 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 7240 keys, 11699051 bytes, temperature: kUnknown
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600549543007, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 11699051, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11645608, "index_size": 34259, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 183227, "raw_average_key_size": 25, "raw_value_size": 11510780, "raw_average_value_size": 1589, "num_data_blocks": 1364, "num_entries": 7240, "num_filter_entries": 7240, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1765597391, "oldest_key_time": 0, "file_creation_time": 1765600549, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "ab3b6fd1-54ac-4374-b2c1-3b89a2e471e2", "db_session_id": "20WVHNV90XXY2OOY7BGG", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.543736) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 11699051 bytes
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.548299) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 95.0 rd, 93.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 10.2 +0.0 blob) out(11.2 +0.0 blob), read-write-amplify(19.7) write-amplify(9.8) OK, records in: 7770, records dropped: 530 output_compression: NoCompression
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.548383) EVENT_LOG_v1 {"time_micros": 1765600549548353, "job": 48, "event": "compaction_finished", "compaction_time_micros": 124835, "compaction_time_cpu_micros": 59050, "output_level": 6, "num_output_files": 1, "total_output_size": 11699051, "num_input_records": 7770, "num_output_records": 7240, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600549549465, "job": 48, "event": "table_file_deletion", "file_number": 85}
Dec 13 04:35:49 compute-0 podman[286016]: 2025-12-13 04:35:49.550740862 +0000 UTC m=+0.445389240 container init a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_blackburn, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1765600549554479, "job": 48, "event": "table_file_deletion", "file_number": 83}
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.418446) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.554580) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.554587) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.554589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.554591) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:49 compute-0 ceph-mon[75071]: rocksdb: (Original Log Time 2025/12/13-04:35:49.554593) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Dec 13 04:35:49 compute-0 podman[286016]: 2025-12-13 04:35:49.563656154 +0000 UTC m=+0.458304472 container start a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_blackburn, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Dec 13 04:35:49 compute-0 podman[286016]: 2025-12-13 04:35:49.568329231 +0000 UTC m=+0.462977549 container attach a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_blackburn, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Dec 13 04:35:49 compute-0 great_blackburn[286066]: 167 167
Dec 13 04:35:49 compute-0 systemd[1]: libpod-a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec.scope: Deactivated successfully.
Dec 13 04:35:49 compute-0 podman[286016]: 2025-12-13 04:35:49.573667836 +0000 UTC m=+0.468316144 container died a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_blackburn, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:35:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-327d2e6122e31f3e2de6f9c33a17f4415248678af5e79f47087d52e09994b6dd-merged.mount: Deactivated successfully.
Dec 13 04:35:49 compute-0 podman[286016]: 2025-12-13 04:35:49.627106049 +0000 UTC m=+0.521754357 container remove a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:35:49 compute-0 systemd[1]: libpod-conmon-a71aa6929ede8348378e8e8e7028f8c4fc9881ebfd5802708600b21a4dd31eec.scope: Deactivated successfully.
Dec 13 04:35:49 compute-0 ceph-mon[75071]: pgmap v2046: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:35:49 compute-0 podman[286093]: 2025-12-13 04:35:49.868869952 +0000 UTC m=+0.068151544 container create 4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Dec 13 04:35:49 compute-0 systemd[1]: Started libpod-conmon-4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9.scope.
Dec 13 04:35:49 compute-0 podman[286093]: 2025-12-13 04:35:49.841270031 +0000 UTC m=+0.040551653 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:49 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0180601079b1f7c825488d70a1d6c505859a1d18de3b624e20f81d09a775cc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0180601079b1f7c825488d70a1d6c505859a1d18de3b624e20f81d09a775cc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0180601079b1f7c825488d70a1d6c505859a1d18de3b624e20f81d09a775cc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0180601079b1f7c825488d70a1d6c505859a1d18de3b624e20f81d09a775cc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c0180601079b1f7c825488d70a1d6c505859a1d18de3b624e20f81d09a775cc3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:49 compute-0 podman[286093]: 2025-12-13 04:35:49.974790072 +0000 UTC m=+0.174071674 container init 4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_kowalevski, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True)
Dec 13 04:35:49 compute-0 podman[286093]: 2025-12-13 04:35:49.997018076 +0000 UTC m=+0.196299678 container start 4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_kowalevski, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Dec 13 04:35:50 compute-0 podman[286093]: 2025-12-13 04:35:50.000901821 +0000 UTC m=+0.200183443 container attach 4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Dec 13 04:35:50 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:35:50 compute-0 virtqemud[243450]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec 13 04:35:50 compute-0 virtqemud[243450]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec 13 04:35:50 compute-0 virtqemud[243450]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Dec 13 04:35:50 compute-0 cranky_kowalevski[286113]: --> passed data devices: 0 physical, 3 LVM
Dec 13 04:35:50 compute-0 cranky_kowalevski[286113]: --> All data devices are unavailable
Dec 13 04:35:50 compute-0 systemd[1]: libpod-4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9.scope: Deactivated successfully.
Dec 13 04:35:50 compute-0 podman[286093]: 2025-12-13 04:35:50.583656765 +0000 UTC m=+0.782938397 container died 4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:35:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-c0180601079b1f7c825488d70a1d6c505859a1d18de3b624e20f81d09a775cc3-merged.mount: Deactivated successfully.
Dec 13 04:35:50 compute-0 podman[286093]: 2025-12-13 04:35:50.635579918 +0000 UTC m=+0.834861510 container remove 4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=cranky_kowalevski, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:35:50 compute-0 systemd[1]: libpod-conmon-4a81a01b9549336a2f16ad5fd2f6264c2da1c8f1e278ea16a1bf304d4e3e6bb9.scope: Deactivated successfully.
Dec 13 04:35:50 compute-0 sudo[285954]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:50 compute-0 sudo[286288]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:35:50 compute-0 sudo[286288]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:50 compute-0 sudo[286288]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:50 compute-0 sudo[286338]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- lvm list --format json
Dec 13 04:35:50 compute-0 sudo[286338]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:50 compute-0 ceph-mon[75071]: pgmap v2047: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:35:50 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: cache status {prefix=cache status} (starting...)
Dec 13 04:35:51 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: client ls {prefix=client ls} (starting...)
Dec 13 04:35:51 compute-0 podman[286476]: 2025-12-13 04:35:51.150291231 +0000 UTC m=+0.050874064 container create cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_babbage, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Dec 13 04:35:51 compute-0 systemd[1]: Started libpod-conmon-cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567.scope.
Dec 13 04:35:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:35:51 compute-0 podman[286476]: 2025-12-13 04:35:51.126167586 +0000 UTC m=+0.026750459 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:51 compute-0 podman[286476]: 2025-12-13 04:35:51.231448617 +0000 UTC m=+0.132031450 container init cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_babbage, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:35:51 compute-0 podman[286476]: 2025-12-13 04:35:51.250248929 +0000 UTC m=+0.150831812 container start cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_babbage, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:35:51 compute-0 podman[286476]: 2025-12-13 04:35:51.253876527 +0000 UTC m=+0.154459380 container attach cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_babbage, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Dec 13 04:35:51 compute-0 charming_babbage[286511]: 167 167
Dec 13 04:35:51 compute-0 systemd[1]: libpod-cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567.scope: Deactivated successfully.
Dec 13 04:35:51 compute-0 conmon[286511]: conmon cec6b3e30ccd924d496b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567.scope/container/memory.events
Dec 13 04:35:51 compute-0 podman[286476]: 2025-12-13 04:35:51.261471234 +0000 UTC m=+0.162054067 container died cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_babbage, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:35:51 compute-0 lvm[286535]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:35:51 compute-0 lvm[286535]: VG ceph_vg1 finished
Dec 13 04:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-0cc490d9c1b462b662d4fb5a381cf9461775a2eec74f3a9aa7713c609d604b8e-merged.mount: Deactivated successfully.
Dec 13 04:35:51 compute-0 podman[286476]: 2025-12-13 04:35:51.312994085 +0000 UTC m=+0.213576928 container remove cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_babbage, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:35:51 compute-0 systemd[1]: libpod-conmon-cec6b3e30ccd924d496b222f8f16585aded70e9bb08dd9307fbadc2b52676567.scope: Deactivated successfully.
Dec 13 04:35:51 compute-0 lvm[286571]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:35:51 compute-0 lvm[286571]: VG ceph_vg2 finished
Dec 13 04:35:51 compute-0 lvm[286583]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:35:51 compute-0 lvm[286583]: VG ceph_vg0 finished
Dec 13 04:35:51 compute-0 podman[286573]: 2025-12-13 04:35:51.496416422 +0000 UTC m=+0.053086885 container create 058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Dec 13 04:35:51 compute-0 systemd[1]: Started libpod-conmon-058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd.scope.
Dec 13 04:35:51 compute-0 podman[286573]: 2025-12-13 04:35:51.472858102 +0000 UTC m=+0.029528575 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:51 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3024dc78b58208654a2a214d80d29d77fd5cffa86a5d39d7a728fe69ab67a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3024dc78b58208654a2a214d80d29d77fd5cffa86a5d39d7a728fe69ab67a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3024dc78b58208654a2a214d80d29d77fd5cffa86a5d39d7a728fe69ab67a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/edf3024dc78b58208654a2a214d80d29d77fd5cffa86a5d39d7a728fe69ab67a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:51 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19208 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:51 compute-0 podman[286573]: 2025-12-13 04:35:51.60558822 +0000 UTC m=+0.162258683 container init 058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:35:51 compute-0 podman[286573]: 2025-12-13 04:35:51.613538406 +0000 UTC m=+0.170208869 container start 058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Dec 13 04:35:51 compute-0 podman[286573]: 2025-12-13 04:35:51.618903682 +0000 UTC m=+0.175574145 container attach 058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:35:51 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: damage ls {prefix=damage ls} (starting...)
Dec 13 04:35:51 compute-0 silly_solomon[286601]: {
Dec 13 04:35:51 compute-0 silly_solomon[286601]:     "0": [
Dec 13 04:35:51 compute-0 silly_solomon[286601]:         {
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "devices": [
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "/dev/loop3"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             ],
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_name": "ceph_lv0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_size": "21470642176",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=4d39c086-933d-4bdc-977c-ec02bb2f333b,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "name": "ceph_lv0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "path": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "tags": {
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.block_uuid": "4Kk8y2-vSfC-I18m-NXRe-s1t9-HwSD-ytpRc2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cluster_name": "ceph",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.crush_device_class": "",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.encrypted": "0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.objectstore": "bluestore",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osd_fsid": "4d39c086-933d-4bdc-977c-ec02bb2f333b",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osd_id": "0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.type": "block",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.vdo": "0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.with_tpm": "0"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             },
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "type": "block",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "vg_name": "ceph_vg0"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:         }
Dec 13 04:35:51 compute-0 silly_solomon[286601]:     ],
Dec 13 04:35:51 compute-0 silly_solomon[286601]:     "1": [
Dec 13 04:35:51 compute-0 silly_solomon[286601]:         {
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "devices": [
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "/dev/loop4"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             ],
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_name": "ceph_lv1",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_size": "21470642176",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=83e0191b-e5d2-4854-84b3-247b63096122,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "name": "ceph_lv1",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "path": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "tags": {
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.block_uuid": "me8cZ1-Z6Lc-Ox0D-6Hb7-Kr5A-X9EP-rKBG5i",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cluster_name": "ceph",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.crush_device_class": "",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.encrypted": "0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.objectstore": "bluestore",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osd_fsid": "83e0191b-e5d2-4854-84b3-247b63096122",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osd_id": "1",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.type": "block",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.vdo": "0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.with_tpm": "0"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             },
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "type": "block",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "vg_name": "ceph_vg1"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:         }
Dec 13 04:35:51 compute-0 silly_solomon[286601]:     ],
Dec 13 04:35:51 compute-0 silly_solomon[286601]:     "2": [
Dec 13 04:35:51 compute-0 silly_solomon[286601]:         {
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "devices": [
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "/dev/loop5"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             ],
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_name": "ceph_lv2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_size": "21470642176",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437a9f04-06b7-56e3-8a4b-f52a1199dd32,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f6f41095-5d06-4c49-86a2-78e3159dd7dc,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "lv_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "name": "ceph_lv2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "path": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "tags": {
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.block_uuid": "0oUM1f-XGyN-i1Fv-31TC-1uib-zhxH-rNx2gw",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cephx_lockbox_secret": "",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cluster_fsid": "437a9f04-06b7-56e3-8a4b-f52a1199dd32",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.cluster_name": "ceph",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.crush_device_class": "",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.encrypted": "0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.objectstore": "bluestore",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osd_fsid": "f6f41095-5d06-4c49-86a2-78e3159dd7dc",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osd_id": "2",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.osdspec_affinity": "default_drive_group",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.type": "block",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.vdo": "0",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:                 "ceph.with_tpm": "0"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             },
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "type": "block",
Dec 13 04:35:51 compute-0 silly_solomon[286601]:             "vg_name": "ceph_vg2"
Dec 13 04:35:51 compute-0 silly_solomon[286601]:         }
Dec 13 04:35:51 compute-0 silly_solomon[286601]:     ]
Dec 13 04:35:51 compute-0 silly_solomon[286601]: }
Dec 13 04:35:51 compute-0 systemd[1]: libpod-058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd.scope: Deactivated successfully.
Dec 13 04:35:51 compute-0 podman[286573]: 2025-12-13 04:35:51.950747874 +0000 UTC m=+0.507418337 container died 058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Dec 13 04:35:51 compute-0 ceph-mon[75071]: from='client.19208 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-edf3024dc78b58208654a2a214d80d29d77fd5cffa86a5d39d7a728fe69ab67a-merged.mount: Deactivated successfully.
Dec 13 04:35:51 compute-0 podman[286573]: 2025-12-13 04:35:51.996511349 +0000 UTC m=+0.553181812 container remove 058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=silly_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Dec 13 04:35:52 compute-0 systemd[1]: libpod-conmon-058a3491973c8f1a84fe55868834dc520e4688b3fea294109d4acfc7025ce6dd.scope: Deactivated successfully.
Dec 13 04:35:52 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: dump loads {prefix=dump loads} (starting...)
Dec 13 04:35:52 compute-0 sudo[286338]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19210 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:52 compute-0 sudo[286681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Dec 13 04:35:52 compute-0 sudo[286681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:52 compute-0 sudo[286681]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:52 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:35:52 compute-0 sudo[286714]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/437a9f04-06b7-56e3-8a4b-f52a1199dd32/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 437a9f04-06b7-56e3-8a4b-f52a1199dd32 -- raw list --format json
Dec 13 04:35:52 compute-0 sudo[286714]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:52 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Dec 13 04:35:52 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Dec 13 04:35:52 compute-0 podman[286810]: 2025-12-13 04:35:52.52105355 +0000 UTC m=+0.052872419 container create eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Dec 13 04:35:52 compute-0 systemd[1]: Started libpod-conmon-eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005.scope.
Dec 13 04:35:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:35:52 compute-0 podman[286810]: 2025-12-13 04:35:52.503331897 +0000 UTC m=+0.035150786 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:52 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Dec 13 04:35:52 compute-0 podman[286810]: 2025-12-13 04:35:52.604311823 +0000 UTC m=+0.136130722 container init eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19212 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:52 compute-0 podman[286810]: 2025-12-13 04:35:52.611582401 +0000 UTC m=+0.143401270 container start eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Dec 13 04:35:52 compute-0 podman[286810]: 2025-12-13 04:35:52.616450064 +0000 UTC m=+0.148268943 container attach eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec 13 04:35:52 compute-0 reverent_goldwasser[286825]: 167 167
Dec 13 04:35:52 compute-0 systemd[1]: libpod-eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005.scope: Deactivated successfully.
Dec 13 04:35:52 compute-0 podman[286810]: 2025-12-13 04:35:52.618859279 +0000 UTC m=+0.150678158 container died eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Dec 13 04:35:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3721484a8c190bbb9b4d86280b90c9d62772450b92c1a80fa7a107cac009b5b-merged.mount: Deactivated successfully.
Dec 13 04:35:52 compute-0 podman[286810]: 2025-12-13 04:35:52.668154939 +0000 UTC m=+0.199973808 container remove eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_goldwasser, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Dec 13 04:35:52 compute-0 systemd[1]: libpod-conmon-eff591c752406aca407fc7698d00542ce9d30e65a1a52d506be7054d1badf005.scope: Deactivated successfully.
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] _maybe_adjust
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 3.87706149250857e-06 of space, bias 1.0, pg target 0.001163118447752571 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0029119565445456007 of space, bias 1.0, pg target 0.8735869633636802 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 2.4623700694785716e-06 of space, bias 1.0, pg target 0.0007387110208435715 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006667026483260414 of space, bias 1.0, pg target 0.20001079449781242 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.3115583487985934e-06 of space, bias 4.0, pg target 0.001573870018558312 quantized to 16 (current 16)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Dec 13 04:35:52 compute-0 ceph-mgr[75360]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Dec 13 04:35:52 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Dec 13 04:35:52 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/814455315' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 04:35:52 compute-0 podman[286875]: 2025-12-13 04:35:52.854029933 +0000 UTC m=+0.049575019 container create 5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Dec 13 04:35:52 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Dec 13 04:35:52 compute-0 systemd[1]: Started libpod-conmon-5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9.scope.
Dec 13 04:35:52 compute-0 systemd[1]: Started libcrun container.
Dec 13 04:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8db5d6f51804274bcf88f1816988befc2b7d9e53f55df6f12f145dda3b9b2f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8db5d6f51804274bcf88f1816988befc2b7d9e53f55df6f12f145dda3b9b2f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8db5d6f51804274bcf88f1816988befc2b7d9e53f55df6f12f145dda3b9b2f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8db5d6f51804274bcf88f1816988befc2b7d9e53f55df6f12f145dda3b9b2f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Dec 13 04:35:52 compute-0 podman[286875]: 2025-12-13 04:35:52.836195858 +0000 UTC m=+0.031740964 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Dec 13 04:35:52 compute-0 podman[286875]: 2025-12-13 04:35:52.945742986 +0000 UTC m=+0.141288082 container init 5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Dec 13 04:35:52 compute-0 podman[286875]: 2025-12-13 04:35:52.954830834 +0000 UTC m=+0.150375930 container start 5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Dec 13 04:35:52 compute-0 podman[286875]: 2025-12-13 04:35:52.960115388 +0000 UTC m=+0.155660504 container attach 5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Dec 13 04:35:52 compute-0 ceph-mon[75071]: from='client.19210 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:52 compute-0 ceph-mon[75071]: pgmap v2048: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:35:52 compute-0 ceph-mon[75071]: from='client.19212 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:52 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/814455315' entity='client.admin' cmd={"prefix": "report"} : dispatch
Dec 13 04:35:52 compute-0 nova_compute[243704]: 2025-12-13 04:35:52.991 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:53 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: get subtrees {prefix=get subtrees} (starting...)
Dec 13 04:35:53 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19216 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:53 compute-0 ceph-mgr[75360]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:35:53 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: 2025-12-13T04:35:53.087+0000 7f4cb924f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:35:53 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: ops {prefix=ops} (starting...)
Dec 13 04:35:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Dec 13 04:35:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886752865' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:53 compute-0 lvm[287052]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Dec 13 04:35:53 compute-0 lvm[287052]: VG ceph_vg0 finished
Dec 13 04:35:53 compute-0 lvm[287054]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Dec 13 04:35:53 compute-0 lvm[287054]: VG ceph_vg1 finished
Dec 13 04:35:53 compute-0 lvm[287056]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Dec 13 04:35:53 compute-0 lvm[287056]: VG ceph_vg2 finished
Dec 13 04:35:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Dec 13 04:35:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/899308025' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 04:35:53 compute-0 determined_cori[286898]: {}
Dec 13 04:35:53 compute-0 systemd[1]: libpod-5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9.scope: Deactivated successfully.
Dec 13 04:35:53 compute-0 systemd[1]: libpod-5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9.scope: Consumed 1.233s CPU time.
Dec 13 04:35:53 compute-0 podman[286875]: 2025-12-13 04:35:53.742424667 +0000 UTC m=+0.937969763 container died 5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Dec 13 04:35:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-e8db5d6f51804274bcf88f1816988befc2b7d9e53f55df6f12f145dda3b9b2f4-merged.mount: Deactivated successfully.
Dec 13 04:35:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Dec 13 04:35:53 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3802520457' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 04:35:53 compute-0 podman[286875]: 2025-12-13 04:35:53.793034573 +0000 UTC m=+0.988579649 container remove 5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_cori, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Dec 13 04:35:53 compute-0 systemd[1]: libpod-conmon-5c9580d7287b83a04444b70a708fb351a2ee5d30a7b81cd8930a5a89f7abc3d9.scope: Deactivated successfully.
Dec 13 04:35:53 compute-0 sudo[286714]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Dec 13 04:35:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:53 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Dec 13 04:35:53 compute-0 ceph-mon[75071]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:53 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: session ls {prefix=session ls} (starting...)
Dec 13 04:35:53 compute-0 sudo[287100]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Dec 13 04:35:53 compute-0 sudo[287100]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Dec 13 04:35:53 compute-0 sudo[287100]: pam_unix(sudo:session): session closed for user root
Dec 13 04:35:53 compute-0 ceph-mon[75071]: from='client.19216 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1886752865' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Dec 13 04:35:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/899308025' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Dec 13 04:35:53 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3802520457' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Dec 13 04:35:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:53 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' 
Dec 13 04:35:54 compute-0 ceph-mds[95635]: mds.cephfs.compute-0.bszvvn asok_command: status {prefix=status} (starting...)
Dec 13 04:35:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 04:35:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1259383946' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 04:35:54 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:35:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Dec 13 04:35:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2608246701' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 04:35:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:54 compute-0 nova_compute[243704]: 2025-12-13 04:35:54.491 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:54 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 04:35:54 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4188527197' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 04:35:54 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19230 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1259383946' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 04:35:55 compute-0 ceph-mon[75071]: pgmap v2049: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:35:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2608246701' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Dec 13 04:35:55 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4188527197' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 04:35:55 compute-0 ceph-mon[75071]: from='client.19230 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 04:35:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1017491776' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 04:35:55 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19234 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:55 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 04:35:55 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3764086782' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 04:35:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Dec 13 04:35:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/826842780' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 04:35:56 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:35:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Dec 13 04:35:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1989311250' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 04:35:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1017491776' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 04:35:56 compute-0 ceph-mon[75071]: from='client.19234 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3764086782' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 04:35:56 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/826842780' entity='client.admin' cmd={"prefix": "features"} : dispatch
Dec 13 04:35:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Dec 13 04:35:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3502492166' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 04:35:56 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 04:35:56 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2062194662' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 04:35:57 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19248 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:57 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19246 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:57 compute-0 ceph-mgr[75360]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 04:35:57 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: 2025-12-13T04:35:57.212+0000 7f4cb924f640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Dec 13 04:35:57 compute-0 ceph-mon[75071]: pgmap v2050: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:35:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1989311250' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Dec 13 04:35:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3502492166' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Dec 13 04:35:57 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2062194662' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 04:35:57 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19250 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:57 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Dec 13 04:35:57 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555539242' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:02.122331+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1348022 data_alloc: 218103808 data_used: 6833246
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105414656 unmapped: 13336576 heap: 118751232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:03.122609+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 189 handle_osd_map epochs [190,190], i have 189, src has [1,190]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 ms_handle_reset con 0x55e2685f2800 session 0x55e268142380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105431040 unmapped: 13320192 heap: 118751232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 heartbeat osd_stat(store_statfs(0x4fa559000/0x0/0x4ffc00000, data 0x1815235/0x1931000, compress 0x0/0x0/0x0, omap 0x2196c, meta 0x3d4e694), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:04.122752+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105431040 unmapped: 13320192 heap: 118751232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:05.122912+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 ms_handle_reset con 0x55e2685f2c00 session 0x55e26814cfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 ms_handle_reset con 0x55e268fb8000 session 0x55e26817afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 ms_handle_reset con 0x55e268fb9400 session 0x55e265b8afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 ms_handle_reset con 0x55e2650d8400 session 0x55e26817b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 heartbeat osd_stat(store_statfs(0x4fa559000/0x0/0x4ffc00000, data 0x1815235/0x1931000, compress 0x0/0x0/0x0, omap 0x2196c, meta 0x3d4e694), peers [0,1] op hist [0,0,0,0,0,0,4])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 ms_handle_reset con 0x55e2685f2800 session 0x55e26814ca80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 190 handle_osd_map epochs [190,191], i have 190, src has [1,191]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 191 ms_handle_reset con 0x55e2685f2c00 session 0x55e268142540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 191 ms_handle_reset con 0x55e268fb8000 session 0x55e268142700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 191 ms_handle_reset con 0x55e268fb9800 session 0x55e26674b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 191 ms_handle_reset con 0x55e2650d8400 session 0x55e26818afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 106258432 unmapped: 16171008 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:06.123112+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 191 handle_osd_map epochs [191,192], i have 191, src has [1,192]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 192 ms_handle_reset con 0x55e2685f2800 session 0x55e265b8ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 16646144 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:07.123282+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1409731 data_alloc: 218103808 data_used: 6833246
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105783296 unmapped: 16646144 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:08.123604+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 16637952 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:09.123785+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 192 ms_handle_reset con 0x55e2685f2c00 session 0x55e267f87c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.299763680s of 12.464449883s, submitted: 48
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 192 ms_handle_reset con 0x55e268fb8000 session 0x55e267fdec40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105791488 unmapped: 16637952 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 192 handle_osd_map epochs [193,193], i have 192, src has [1,193]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:10.123945+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 193 heartbeat osd_stat(store_statfs(0x4f9ea3000/0x0/0x4ffc00000, data 0x1ec55cf/0x1fe7000, compress 0x0/0x0/0x0, omap 0x22188, meta 0x3d4de78), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105472000 unmapped: 16957440 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 193 ms_handle_reset con 0x55e268fb9c00 session 0x55e26674a8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:11.124093+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 193 handle_osd_map epochs [193,194], i have 193, src has [1,194]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105496576 unmapped: 16932864 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 194 ms_handle_reset con 0x55e2650d8400 session 0x55e26818ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 194 ms_handle_reset con 0x55e2685f2800 session 0x55e26674b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:12.124260+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1411021 data_alloc: 218103808 data_used: 6833246
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 194 handle_osd_map epochs [194,195], i have 194, src has [1,195]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 16908288 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 195 ms_handle_reset con 0x55e2685f2c00 session 0x55e2659acfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:13.124400+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 195 ms_handle_reset con 0x55e2661cf800 session 0x55e2683bf6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 195 handle_osd_map epochs [196,196], i have 195, src has [1,196]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108666880 unmapped: 13762560 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:14.124596+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 196 ms_handle_reset con 0x55e268015800 session 0x55e2683bf340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108797952 unmapped: 13631488 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:15.124854+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 196 handle_osd_map epochs [197,197], i have 196, src has [1,197]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108847104 unmapped: 13582336 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 197 handle_osd_map epochs [197,198], i have 197, src has [1,198]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 198 heartbeat osd_stat(store_statfs(0x4f9e96000/0x0/0x4ffc00000, data 0x1ecc4f5/0x1ff2000, compress 0x0/0x0/0x0, omap 0x22b56, meta 0x3d4d4aa), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:16.125021+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 13541376 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:17.125195+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1462741 data_alloc: 234881024 data_used: 13480760
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 13541376 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:18.125390+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108888064 unmapped: 13541376 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:19.125527+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 198 heartbeat osd_stat(store_statfs(0x4f9e95000/0x0/0x4ffc00000, data 0x1ecdf90/0x1ff5000, compress 0x0/0x0/0x0, omap 0x22ecc, meta 0x3d4d134), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 13508608 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:20.125709+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 13508608 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:21.125869+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 198 handle_osd_map epochs [199,199], i have 198, src has [1,199]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.860397339s of 12.254673004s, submitted: 58
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 199 ms_handle_reset con 0x55e2650d8400 session 0x55e2683bea80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108920832 unmapped: 13508608 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:22.126002+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1467511 data_alloc: 234881024 data_used: 13481958
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 108937216 unmapped: 13492224 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 199 ms_handle_reset con 0x55e2661cf800 session 0x55e26818ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:23.126182+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 109961216 unmapped: 12468224 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 199 handle_osd_map epochs [199,200], i have 199, src has [1,200]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 200 ms_handle_reset con 0x55e268015800 session 0x55e2680eca80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:24.126317+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 200 ms_handle_reset con 0x55e2685f2800 session 0x55e2683be700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 200 heartbeat osd_stat(store_statfs(0x4f998a000/0x0/0x4ffc00000, data 0x23d0738/0x24fa000, compress 0x0/0x0/0x0, omap 0x23450, meta 0x3d4cbb0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 111165440 unmapped: 11264000 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:25.126563+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 111960064 unmapped: 10469376 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:26.126786+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 111960064 unmapped: 10469376 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 200 handle_osd_map epochs [201,201], i have 200, src has [1,201]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f98ff000/0x0/0x4ffc00000, data 0x245b738/0x2585000, compress 0x0/0x0/0x0, omap 0x23450, meta 0x3d4cbb0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:27.126906+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1511269 data_alloc: 234881024 data_used: 13555174
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 10420224 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:28.127083+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 10420224 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:29.127209+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112009216 unmapped: 10420224 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:30.127373+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f98fa000/0x0/0x4ffc00000, data 0x245d1d3/0x2588000, compress 0x0/0x0/0x0, omap 0x2365d, meta 0x3d4c9a3), peers [0,1] op hist [0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112140288 unmapped: 10289152 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:31.127574+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 201 heartbeat osd_stat(store_statfs(0x4f98e5000/0x0/0x4ffc00000, data 0x247c1d3/0x25a7000, compress 0x0/0x0/0x0, omap 0x2365d, meta 0x3d4c9a3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 201 handle_osd_map epochs [202,202], i have 201, src has [1,202]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.821157455s of 10.040208817s, submitted: 114
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 10280960 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:32.127727+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510355 data_alloc: 234881024 data_used: 13559883
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 10280960 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:33.127891+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112148480 unmapped: 10280960 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:34.128056+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112164864 unmapped: 10264576 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:35.129021+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f98e0000/0x0/0x4ffc00000, data 0x247dc52/0x25aa000, compress 0x0/0x0/0x0, omap 0x2398d, meta 0x3d4c673), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112173056 unmapped: 10256384 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:36.129254+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f98e0000/0x0/0x4ffc00000, data 0x247dc52/0x25aa000, compress 0x0/0x0/0x0, omap 0x2398d, meta 0x3d4c673), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112222208 unmapped: 10207232 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:37.129504+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1510319 data_alloc: 234881024 data_used: 13559899
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112230400 unmapped: 10199040 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:38.129702+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 10043392 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:39.129842+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 202 ms_handle_reset con 0x55e2685f2c00 session 0x55e2680ec000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112386048 unmapped: 10043392 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:40.129960+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 202 heartbeat osd_stat(store_statfs(0x4f96c1000/0x0/0x4ffc00000, data 0x269de52/0x27cb000, compress 0x0/0x0/0x0, omap 0x2398d, meta 0x3d4c673), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112254976 unmapped: 10174464 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:41.130101+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 202 handle_osd_map epochs [203,203], i have 202, src has [1,203]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 203 ms_handle_reset con 0x55e2650d8400 session 0x55e2680ece00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 10158080 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:42.130228+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1526023 data_alloc: 234881024 data_used: 13551707
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 112271360 unmapped: 10158080 heap: 122429440 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 203 ms_handle_reset con 0x55e2661cf800 session 0x55e26817a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:43.130348+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 203 handle_osd_map epochs [203,204], i have 203, src has [1,204]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.692866325s of 11.770788193s, submitted: 69
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 204 ms_handle_reset con 0x55e268015800 session 0x55e268101340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685f2800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 204 ms_handle_reset con 0x55e268014000 session 0x55e26817ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 204 ms_handle_reset con 0x55e2685f2800 session 0x55e26817b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138166272 unmapped: 5906432 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 204 ms_handle_reset con 0x55e2650d8400 session 0x55e265a04540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:44.130462+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23756800 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 204 handle_osd_map epochs [204,205], i have 204, src has [1,205]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 205 ms_handle_reset con 0x55e2661cf800 session 0x55e26814c540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 205 heartbeat osd_stat(store_statfs(0x4f85e0000/0x0/0x4ffc00000, data 0x394c58a/0x38ac000, compress 0x0/0x0/0x0, omap 0x242ce, meta 0x3d4bd32), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:45.130621+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 119144448 unmapped: 24928256 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:46.130750+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 205 handle_osd_map epochs [206,206], i have 205, src has [1,206]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120193024 unmapped: 23879680 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:47.130854+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1656795 data_alloc: 234881024 data_used: 16550491
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120242176 unmapped: 23830528 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:48.130969+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e268015800 session 0x55e26817b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e268014000 session 0x55e267f87dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f85d6000/0x0/0x4ffc00000, data 0x377fd32/0x38b2000, compress 0x0/0x0/0x0, omap 0x247ca, meta 0x3d4b836), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 23789568 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:49.131097+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120283136 unmapped: 23789568 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:50.131208+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f85d6000/0x0/0x4ffc00000, data 0x377fd32/0x38b2000, compress 0x0/0x0/0x0, omap 0x249a2, meta 0x3d4b65e), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23756800 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:51.131347+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 heartbeat osd_stat(store_statfs(0x4f85d6000/0x0/0x4ffc00000, data 0x377fd32/0x38b2000, compress 0x0/0x0/0x0, omap 0x249a2, meta 0x3d4b65e), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23756800 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:52.131490+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1656795 data_alloc: 234881024 data_used: 16550491
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120315904 unmapped: 23756800 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:53.131572+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e2685e1800 session 0x55e26818afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e2650d8400 session 0x55e267fdfa40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e2661cf800 session 0x55e267fdfc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 23748608 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:54.131684+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.435568810s of 10.701330185s, submitted: 28
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e268015800 session 0x55e268142fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e268014000 session 0x55e267f2bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2687be400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e2687be400 session 0x55e26817a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e2650d8400 session 0x55e26817a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 ms_handle_reset con 0x55e2661cf800 session 0x55e26674a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27607040 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:55.131841+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116465664 unmapped: 27607040 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:56.132135+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116473856 unmapped: 27598848 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268014000 session 0x55e267d4ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:57.132276+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f85d9000/0x0/0x4ffc00000, data 0x377fd42/0x38b3000, compress 0x0/0x0/0x0, omap 0x24bf6, meta 0x3d4b40a), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268015800 session 0x55e265b388c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81c00 session 0x55e26818bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1689288 data_alloc: 234881024 data_used: 16550491
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2650d8400 session 0x55e268100a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2661cf800 session 0x55e265b8b340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268014000 session 0x55e2683bf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 27443200 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:58.132929+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 27443200 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7fc4000/0x0/0x4ffc00000, data 0x3d917c1/0x3ec6000, compress 0x0/0x0/0x0, omap 0x250f4, meta 0x3d4af0c), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:59.133148+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 27443200 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:00.133449+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268015800 session 0x55e265b38a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 27131904 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:01.133630+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116940800 unmapped: 27131904 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:02.133741+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1710141 data_alloc: 234881024 data_used: 19336811
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 17956864 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:03.133872+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81000 session 0x55e267d4b340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7fa1000/0x0/0x4ffc00000, data 0x3db57e4/0x3eeb000, compress 0x0/0x0/0x0, omap 0x25283, meta 0x3d4ad7d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 17801216 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:04.133999+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.746115685s of 10.035443306s, submitted: 73
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 17801216 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:05.134184+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 17801216 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:06.134303+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 17801216 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:07.134420+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1746218 data_alloc: 234881024 data_used: 25153643
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 17801216 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:08.134564+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7fa1000/0x0/0x4ffc00000, data 0x3db57e4/0x3eeb000, compress 0x0/0x0/0x0, omap 0x25283, meta 0x3d4ad7d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126271488 unmapped: 17801216 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:09.134660+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126304256 unmapped: 17768448 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:10.134787+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126304256 unmapped: 17768448 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:11.134907+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133316608 unmapped: 10756096 heap: 144072704 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81000 session 0x55e265b8afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:12.135029+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268014000 session 0x55e265b8ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268015800 session 0x55e265b8aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c80c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c80c00 session 0x55e265b8a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1874041 data_alloc: 251658240 data_used: 31013995
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2668c0c00 session 0x55e2667dddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c80c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c80c00 session 0x55e26818b880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81000 session 0x55e2683b6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268014000 session 0x55e26573dc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268015800 session 0x55e2663961c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134643712 unmapped: 17309696 heap: 151953408 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:13.135147+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7102000/0x0/0x4ffc00000, data 0x4c52856/0x4d8a000, compress 0x0/0x0/0x0, omap 0x25283, meta 0x3d4ad7d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137560064 unmapped: 14393344 heap: 151953408 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:14.135276+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 13950976 heap: 151953408 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:15.135411+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 13950976 heap: 151953408 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:16.135534+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138002432 unmapped: 13950976 heap: 151953408 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:17.135681+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.938129425s of 13.081315041s, submitted: 41
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1905357 data_alloc: 251658240 data_used: 34863723
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2668c1000 session 0x55e268143c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138149888 unmapped: 13803520 heap: 151953408 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:18.135819+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c80c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7101000/0x0/0x4ffc00000, data 0x4c52879/0x4d8b000, compress 0x0/0x0/0x0, omap 0x25283, meta 0x3d4ad7d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138256384 unmapped: 13697024 heap: 151953408 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:19.135949+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 152788992 unmapped: 212992 heap: 153001984 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:20.136135+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268014000 session 0x55e2667dd880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 152788992 unmapped: 212992 heap: 153001984 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:21.136270+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 154370048 unmapped: 729088 heap: 155099136 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:22.136422+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1993557 data_alloc: 268435456 data_used: 48115835
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268015800 session 0x55e2667dd6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 153329664 unmapped: 1769472 heap: 155099136 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2668c1400 session 0x55e265b8b880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:23.136575+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f6f9a000/0x0/0x4ffc00000, data 0x4db8879/0x4ef1000, compress 0x0/0x0/0x0, omap 0x25283, meta 0x3d4ad7d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 153550848 unmapped: 1548288 heap: 155099136 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:24.136697+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c80c00 session 0x55e267d4a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81000 session 0x55e26817b340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2668c1400 session 0x55e26818b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144760832 unmapped: 11386880 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:25.136889+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c80c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 11591680 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:26.137075+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144556032 unmapped: 11591680 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:27.137196+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1845767 data_alloc: 251658240 data_used: 35064427
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 145539072 unmapped: 10608640 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:28.137313+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c80c00 session 0x55e2683be8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 145539072 unmapped: 10608640 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:29.137431+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7833000/0x0/0x4ffc00000, data 0x40f17e4/0x4227000, compress 0x0/0x0/0x0, omap 0x252c5, meta 0x3d4ad3b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 145539072 unmapped: 10608640 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:30.137589+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.849299431s of 13.000082016s, submitted: 47
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268014000 session 0x55e2667dc540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81800 session 0x55e267f2ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81400 session 0x55e267d4afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 145604608 unmapped: 10543104 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:31.137731+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2668c1400 session 0x55e26674ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c80c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c80c00 session 0x55e267fdea80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144211968 unmapped: 11935744 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:32.137879+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81800 session 0x55e2659ad500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1829134 data_alloc: 251658240 data_used: 36926387
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144220160 unmapped: 11927552 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:33.138097+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268014000 session 0x55e267f2a540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144269312 unmapped: 11878400 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:34.138195+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e268015800 session 0x55e267f2aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 heartbeat osd_stat(store_statfs(0x4f7e5a000/0x0/0x4ffc00000, data 0x3efd7c1/0x4032000, compress 0x0/0x0/0x0, omap 0x257a7, meta 0x3d4a859), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e2668c1400 session 0x55e2683bfa40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c80c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144293888 unmapped: 11853824 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:35.138342+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c80c00 session 0x55e267f2ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 ms_handle_reset con 0x55e267c81800 session 0x55e26817ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144310272 unmapped: 11837440 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:36.138457+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 208 ms_handle_reset con 0x55e268014000 session 0x55e26674a8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 208 heartbeat osd_stat(store_statfs(0x4f7e56000/0x0/0x4ffc00000, data 0x3eff3a1/0x4034000, compress 0x0/0x0/0x0, omap 0x25c62, meta 0x3d4a39e), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 208 ms_handle_reset con 0x55e268015800 session 0x55e267f2bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 208 ms_handle_reset con 0x55e2668c1400 session 0x55e2667dc1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c80c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 22093824 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:37.138610+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e267c80c00 session 0x55e2662f9180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1678288 data_alloc: 234881024 data_used: 24393651
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e267c81800 session 0x55e2662f9c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:38.138746+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 22093824 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e268014000 session 0x55e26674bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e2650d8400 session 0x55e2659ac700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e268fb8000 session 0x55e2659aca80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e2661cf800 session 0x55e268142a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e269673000 session 0x55e26573c540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:39.138864+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134053888 unmapped: 22093824 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e2650d8400 session 0x55e26674a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e2668c1400 session 0x55e2681428c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 ms_handle_reset con 0x55e2650d8400 session 0x55e267f2b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:40.139030+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 39247872 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:41.139201+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 39247872 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 209 handle_osd_map epochs [209,210], i have 209, src has [1,210]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.833610535s of 11.064741135s, submitted: 151
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:42.139337+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116899840 unmapped: 39247872 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2661cf800 session 0x55e26817bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e268014000 session 0x55e2683b6a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa31c000/0x0/0x4ffc00000, data 0x1a389f2/0x1b6e000, compress 0x0/0x0/0x0, omap 0x26a6c, meta 0x3d49594), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1469315 data_alloc: 218103808 data_used: 7119169
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e268015800 session 0x55e26814c380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2668c1800 session 0x55e2680ec000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:43.139569+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 41181184 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:44.139763+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114966528 unmapped: 41181184 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:45.139981+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa51f000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x26ce7, meta 0x3d49319), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:46.140231+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:47.140903+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452011 data_alloc: 218103808 data_used: 5017393
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:48.141108+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:49.141292+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:50.141464+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa51f000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x26ce7, meta 0x3d49319), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:51.141607+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:52.141785+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452011 data_alloc: 218103808 data_used: 5017393
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:53.141938+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:54.142213+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:55.142658+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:56.143358+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa51f000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x26ce7, meta 0x3d49319), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:57.143821+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452011 data_alloc: 218103808 data_used: 5017393
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:58.144341+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa51f000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x26ce7, meta 0x3d49319), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:59.144553+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:00.144946+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:01.145213+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:02.145378+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1452011 data_alloc: 218103808 data_used: 5017393
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:03.145638+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 114982912 unmapped: 41164800 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa51f000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x26ce7, meta 0x3d49319), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 22.394794464s of 22.531383514s, submitted: 67
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2650d8400 session 0x55e2683bfc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:04.145961+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 41132032 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:05.146504+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 41132032 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:06.146699+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115015680 unmapped: 41132032 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2661cf800 session 0x55e2683be700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:07.146956+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 41107456 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2668c1800 session 0x55e2683b7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453176 data_alloc: 218103808 data_used: 5021391
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:08.147298+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 41107456 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa521000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x26e35, meta 0x3d491cb), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:09.147577+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e268014000 session 0x55e267d4b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 41107456 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:10.147804+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 41107456 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:11.148006+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x271ba, meta 0x3d48e46), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268015800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e268015800 session 0x55e26818a540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115040256 unmapped: 41107456 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2650d8400 session 0x55e267f2afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:12.148144+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454137 data_alloc: 218103808 data_used: 5021391
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:13.148377+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:14.148510+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x272db, meta 0x3d48d25), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:15.148709+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x272db, meta 0x3d48d25), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:16.149023+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:17.149322+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1454137 data_alloc: 218103808 data_used: 5021391
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:18.149630+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:19.149786+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:20.149912+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x272db, meta 0x3d48d25), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:21.150054+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115048448 unmapped: 41099264 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2661cf800 session 0x55e2683bf340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2668c1800 session 0x55e265b39180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:22.150207+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 39460864 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1453753 data_alloc: 218103808 data_used: 6856399
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:23.150384+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 39460864 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.804685593s of 19.684307098s, submitted: 36
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e268014000 session 0x55e267fdfa40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:24.150602+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 39460864 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:25.150811+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116686848 unmapped: 39460864 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4fa522000/0x0/0x4ffc00000, data 0x1837780/0x196a000, compress 0x0/0x0/0x0, omap 0x27387, meta 0x3d48c79), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2668c1c00 session 0x55e268100e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:26.151279+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116719616 unmapped: 39428096 heap: 156147712 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2650d8400 session 0x55e26814dc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2668c1800 session 0x55e265b8ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:27.151690+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 46366720 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1503928 data_alloc: 218103808 data_used: 6856399
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e2661cf800 session 0x55e2683bf6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:28.151940+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117129216 unmapped: 46366720 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 heartbeat osd_stat(store_statfs(0x4f9d93000/0x0/0x4ffc00000, data 0x1fc6780/0x20f9000, compress 0x0/0x0/0x0, omap 0x27527, meta 0x3d48ad9), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:29.152120+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117170176 unmapped: 46325760 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 ms_handle_reset con 0x55e268014000 session 0x55e268101c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 211 heartbeat osd_stat(store_statfs(0x4f9d93000/0x0/0x4ffc00000, data 0x1fc6780/0x20f9000, compress 0x0/0x0/0x0, omap 0x275a7, meta 0x3d48a59), peers [0,1] op hist [0,0,3,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 211 ms_handle_reset con 0x55e266798000 session 0x55e2667dda40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:30.152366+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 47808512 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 211 ms_handle_reset con 0x55e266798400 session 0x55e26818a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:31.152883+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 47808512 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115687424 unmapped: 47808512 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:32.465797+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 212 ms_handle_reset con 0x55e2650d8400 session 0x55e2662f9dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1521758 data_alloc: 218103808 data_used: 6856399
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 212 heartbeat osd_stat(store_statfs(0x4f9bff000/0x0/0x4ffc00000, data 0x2153eb8/0x2289000, compress 0x0/0x0/0x0, omap 0x27b5d, meta 0x3d484a3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 47726592 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:33.466022+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 212 heartbeat osd_stat(store_statfs(0x4f9c00000/0x0/0x4ffc00000, data 0x2153ec8/0x228a000, compress 0x0/0x0/0x0, omap 0x27d31, meta 0x3d482cf), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115769344 unmapped: 47726592 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:34.466212+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.501827240s of 11.635064125s, submitted: 36
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 212 ms_handle_reset con 0x55e268014000 session 0x55e268101a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 212 ms_handle_reset con 0x55e2668c1800 session 0x55e2683b6c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115826688 unmapped: 47669248 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:35.466422+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 212 ms_handle_reset con 0x55e266798c00 session 0x55e26818b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116236288 unmapped: 47259648 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:36.466805+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266798800 session 0x55e2683bf880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266798c00 session 0x55e265b38fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2661cf800 session 0x55e267f2a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f93c0000/0x0/0x4ffc00000, data 0x2992a64/0x2aca000, compress 0x0/0x0/0x0, omap 0x28120, meta 0x3d47ee0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:37.467004+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1581025 data_alloc: 218103808 data_used: 6856399
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:38.467103+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2650d8400 session 0x55e2683b61c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:39.467236+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266798400 session 0x55e268101180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:40.467351+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2650d8400 session 0x55e2662f9180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2661cf800 session 0x55e268143c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f93bf000/0x0/0x4ffc00000, data 0x2992a74/0x2acb000, compress 0x0/0x0/0x0, omap 0x28120, meta 0x3d47ee0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:41.467568+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:42.467783+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1582787 data_alloc: 218103808 data_used: 6856399
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:43.468016+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f93bf000/0x0/0x4ffc00000, data 0x2992a74/0x2acb000, compress 0x0/0x0/0x0, omap 0x28120, meta 0x3d47ee0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:44.468350+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:45.468687+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116260864 unmapped: 47235072 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:46.468925+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.843358994s of 11.918918610s, submitted: 22
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266798c00 session 0x55e26817a8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116285440 unmapped: 47210496 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:47.469144+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1583387 data_alloc: 218103808 data_used: 6856399
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266798800 session 0x55e26674a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 47202304 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:48.469283+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2668c1800 session 0x55e26674b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f93c1000/0x0/0x4ffc00000, data 0x2992a74/0x2acb000, compress 0x0/0x0/0x0, omap 0x28120, meta 0x3d47ee0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:49.469442+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 47202304 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:50.469644+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116293632 unmapped: 47202304 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2650d8400 session 0x55e267d4a8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2661cf800 session 0x55e2667dcfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:51.469780+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 47489024 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e268014000 session 0x55e265b8afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799000 session 0x55e265b38a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799400 session 0x55e26817a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799400 session 0x55e2683b7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2650d8400 session 0x55e2662f8a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2661cf800 session 0x55e265b8b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799000 session 0x55e267d4a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e268014000 session 0x55e267fdf880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2650d8400 session 0x55e2683bee00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:52.469905+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 115941376 unmapped: 47554560 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1662935 data_alloc: 234881024 data_used: 15087839
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:53.470072+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 45555712 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f8fbd000/0x0/0x4ffc00000, data 0x2d94aa7/0x2ecf000, compress 0x0/0x0/0x0, omap 0x28332, meta 0x3d47cce), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:54.470217+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117940224 unmapped: 45555712 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2661cf800 session 0x55e267f87500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799000 session 0x55e2680ec700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:55.470428+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 46194688 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799400 session 0x55e267f2a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:56.470578+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 46194688 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799800 session 0x55e2683b7880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.059633255s of 10.131558418s, submitted: 24
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:57.470696+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117301248 unmapped: 46194688 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1671053 data_alloc: 234881024 data_used: 15919855
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2661cf800 session 0x55e267fdefc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:58.470794+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120463360 unmapped: 43032576 heap: 163495936 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799000 session 0x55e267fdec40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799400 session 0x55e2680eddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799c00 session 0x55e2697a7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:59.470923+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 121520128 unmapped: 45654016 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 heartbeat osd_stat(store_statfs(0x4f8fbc000/0x0/0x4ffc00000, data 0x2d94b09/0x2ed0000, compress 0x0/0x0/0x0, omap 0x28332, meta 0x3d47cce), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799800 session 0x55e267d4a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e2650d8400 session 0x55e268100a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:00.471080+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 46850048 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 ms_handle_reset con 0x55e266799800 session 0x55e26b760e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 213 handle_osd_map epochs [213,214], i have 213, src has [1,214]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266798400 session 0x55e2697a6540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:01.471227+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 46850048 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:02.471381+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 46850048 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1717302 data_alloc: 234881024 data_used: 15919839
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f8982000/0x0/0x4ffc00000, data 0x33cc643/0x3508000, compress 0x0/0x0/0x0, omap 0x285ab, meta 0x3d47a55), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:03.471521+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120324096 unmapped: 46850048 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:04.471605+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126795776 unmapped: 40378368 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f7ce8000/0x0/0x4ffc00000, data 0x4889643/0x41a0000, compress 0x0/0x0/0x0, omap 0x291b3, meta 0x3d46e4d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:05.471733+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 40296448 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:06.471852+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 40296448 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:07.471998+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 40296448 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1882780 data_alloc: 234881024 data_used: 16953055
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:08.472115+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e2661cf800 session 0x55e26b760700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126885888 unmapped: 40288256 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266799000 session 0x55e26b760c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266799000 session 0x55e2667dcc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e2650d8400 session 0x55e267f2bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e2661cf800 session 0x55e2681b3dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.180097580s of 11.565491676s, submitted: 142
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f7ce1000/0x0/0x4ffc00000, data 0x488b643/0x41a2000, compress 0x0/0x0/0x0, omap 0x291b3, meta 0x3d46e4d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266798400 session 0x55e2681b3340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266799800 session 0x55e267f68fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e2650d8400 session 0x55e2697a6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e2661cf800 session 0x55e2674d7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266798400 session 0x55e266397500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:09.472237+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 40304640 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f7660000/0x0/0x4ffc00000, data 0x4f14653/0x482c000, compress 0x0/0x0/0x0, omap 0x291b3, meta 0x3d46e4d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:10.472349+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 40304640 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:11.472428+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126869504 unmapped: 40304640 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266798800 session 0x55e2681008c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266798c00 session 0x55e267d4a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:12.472559+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126877696 unmapped: 40296448 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e2650d8400 session 0x55e267f68c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1925058 data_alloc: 234881024 data_used: 16957167
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:13.472680+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 40271872 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:14.472829+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 40271872 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e2661cf800 session 0x55e267fdfc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 ms_handle_reset con 0x55e266798400 session 0x55e26674bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:15.472996+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 40263680 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 heartbeat osd_stat(store_statfs(0x4f7661000/0x0/0x4ffc00000, data 0x4f14630/0x482b000, compress 0x0/0x0/0x0, omap 0x292f1, meta 0x3d46d0f), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 214 handle_osd_map epochs [214,215], i have 214, src has [1,215]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 215 ms_handle_reset con 0x55e267c81400 session 0x55e26674b340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 215 ms_handle_reset con 0x55e2668c0800 session 0x55e2683b6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:16.473167+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118972416 unmapped: 48201728 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 215 handle_osd_map epochs [215,216], i have 215, src has [1,216]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 heartbeat osd_stat(store_statfs(0x4f8b34000/0x0/0x4ffc00000, data 0x321a210/0x3356000, compress 0x0/0x0/0x0, omap 0x29f50, meta 0x3d460b0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e2668c0400 session 0x55e26817ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e2650d8400 session 0x55e2667dd6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:17.473313+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e266798800 session 0x55e2667dc540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e266799000 session 0x55e2683b7a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125747200 unmapped: 41426944 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e2661cf800 session 0x55e267f69180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1645181 data_alloc: 234881024 data_used: 16809660
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e266799400 session 0x55e265a7e700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e266799c00 session 0x55e2683b7dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:18.473435+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 42090496 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 ms_handle_reset con 0x55e2650d8400 session 0x55e267f69340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:19.473576+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125083648 unmapped: 42090496 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 heartbeat osd_stat(store_statfs(0x4f9ad6000/0x0/0x4ffc00000, data 0x2279f99/0x23b6000, compress 0x0/0x0/0x0, omap 0x2a3f1, meta 0x3d45c0f), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.6 total, 600.0 interval
                                           Cumulative writes: 10K writes, 39K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 10K writes, 2887 syncs, 3.55 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4565 writes, 14K keys, 4565 commit groups, 1.0 writes per commit group, ingest: 10.80 MB, 0.02 MB/s
                                           Interval WAL: 4565 writes, 1980 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:20.473708+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 42082304 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 heartbeat osd_stat(store_statfs(0x4f9ad6000/0x0/0x4ffc00000, data 0x2279f99/0x23b6000, compress 0x0/0x0/0x0, omap 0x2a3f1, meta 0x3d45c0f), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.963928223s of 12.203349113s, submitted: 128
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:21.473869+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 42082304 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:22.474000+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 42082304 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1647947 data_alloc: 234881024 data_used: 16809660
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 217 ms_handle_reset con 0x55e2661cf800 session 0x55e26b760380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:23.474108+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125091840 unmapped: 42082304 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 ms_handle_reset con 0x55e266798800 session 0x55e26674a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 ms_handle_reset con 0x55e266799000 session 0x55e26674b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:24.474262+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 ms_handle_reset con 0x55e2650d8400 session 0x55e267f2b880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 125124608 unmapped: 42049536 heap: 167174144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 ms_handle_reset con 0x55e266799400 session 0x55e2681b3a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 heartbeat osd_stat(store_statfs(0x4fa507000/0x0/0x4ffc00000, data 0x1845630/0x1983000, compress 0x0/0x0/0x0, omap 0x2aa5e, meta 0x3d455a2), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:25.474440+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 119693312 unmapped: 55877632 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:26.474561+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 56934400 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 heartbeat osd_stat(store_statfs(0x4f7d07000/0x0/0x4ffc00000, data 0x4045630/0x4183000, compress 0x0/0x0/0x0, omap 0x2aa5e, meta 0x3d455a2), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:27.474705+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118284288 unmapped: 57286656 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 ms_handle_reset con 0x55e266799c00 session 0x55e2681b3500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1778891 data_alloc: 218103808 data_used: 7384764
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 ms_handle_reset con 0x55e2668c0400 session 0x55e265a048c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:28.474865+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 57106432 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: mgrc ms_handle_reset ms_handle_reset con 0x55e2680d8800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3514601685
Dec 13 04:35:57 compute-0 ceph-osd[87731]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3514601685,v1:192.168.122.100:6801/3514601685]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: get_auth_request con 0x55e267b9a000 auth_method 0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:29.475019+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118685696 unmapped: 56885248 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:30.475173+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117874688 unmapped: 57696256 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:31.475331+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.090247154s of 10.543658257s, submitted: 50
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117891072 unmapped: 57679872 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:32.475490+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 117956608 unmapped: 57614336 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f3504000/0x0/0x4ffc00000, data 0x88470af/0x8986000, compress 0x0/0x0/0x0, omap 0x2adb0, meta 0x3d45250), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 ms_handle_reset con 0x55e2650d8400 session 0x55e26817ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2141121 data_alloc: 218103808 data_used: 6861089
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:33.475656+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126402560 unmapped: 49168384 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f29f4000/0x0/0x4ffc00000, data 0x93590af/0x9498000, compress 0x0/0x0/0x0, omap 0x2adb0, meta 0x3d45250), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:34.475858+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126550016 unmapped: 49020928 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 ms_handle_reset con 0x55e266799000 session 0x55e2697a7dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:35.476064+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118235136 unmapped: 57335808 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 heartbeat osd_stat(store_statfs(0x4f01f4000/0x0/0x4ffc00000, data 0xbb590af/0xbc98000, compress 0x0/0x0/0x0, omap 0x2adb0, meta 0x3d45250), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:36.476239+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 57245696 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:37.476396+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 ms_handle_reset con 0x55e266799400 session 0x55e267f69dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118325248 unmapped: 57245696 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2515341 data_alloc: 218103808 data_used: 6861089
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 heartbeat osd_stat(store_statfs(0x4ef1f4000/0x0/0x4ffc00000, data 0xcb590af/0xcc98000, compress 0x0/0x0/0x0, omap 0x2af9c, meta 0x3d45064), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:38.476578+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118521856 unmapped: 57049088 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 220 ms_handle_reset con 0x55e266798400 session 0x55e267fdec40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:39.476708+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 56958976 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:40.476857+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 56918016 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 220 ms_handle_reset con 0x55e266799c00 session 0x55e26573c8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:41.476987+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118530048 unmapped: 57040896 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.593457222s of 10.558075905s, submitted: 35
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:42.477115+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118579200 unmapped: 56991744 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2857151 data_alloc: 218103808 data_used: 6861089
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 222 ms_handle_reset con 0x55e2650d8400 session 0x55e2680ed180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:43.477237+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118652928 unmapped: 56918016 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 222 heartbeat osd_stat(store_statfs(0x4ea1e9000/0x0/0x4ffc00000, data 0x11b5e3d7/0x11ca1000, compress 0x0/0x0/0x0, omap 0x2b760, meta 0x3d448a0), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:44.477354+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118767616 unmapped: 56803328 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:45.477518+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118824960 unmapped: 56745984 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 223 ms_handle_reset con 0x55e2661cf800 session 0x55e267d4afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:46.477751+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 56729600 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 223 ms_handle_reset con 0x55e266798400 session 0x55e267d4a540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:47.477899+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 223 ms_handle_reset con 0x55e266799000 session 0x55e265a05a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118841344 unmapped: 56729600 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 223 handle_osd_map epochs [223,224], i have 223, src has [1,224]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3032547 data_alloc: 218103808 data_used: 6861674
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 224 ms_handle_reset con 0x55e267c81400 session 0x55e2680eddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 224 ms_handle_reset con 0x55e2650d8400 session 0x55e2697a7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:48.478026+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 224 ms_handle_reset con 0x55e266798400 session 0x55e268142000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 224 ms_handle_reset con 0x55e266799000 session 0x55e268101dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 118890496 unmapped: 56680448 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 225 ms_handle_reset con 0x55e2668c1000 session 0x55e2667dc700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 225 ms_handle_reset con 0x55e266799400 session 0x55e2681b36c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 225 ms_handle_reset con 0x55e2661cf800 session 0x55e2697a7180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:49.478197+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 225 ms_handle_reset con 0x55e266798400 session 0x55e267fde380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 225 heartbeat osd_stat(store_statfs(0x4fa1dc000/0x0/0x4ffc00000, data 0x1b63911/0x1cac000, compress 0x0/0x0/0x0, omap 0x2c1bd, meta 0x3d43e43), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 225 ms_handle_reset con 0x55e266799000 session 0x55e2680ec700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 56565760 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:50.478357+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 119005184 unmapped: 56565760 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 225 handle_osd_map epochs [225,226], i have 225, src has [1,226]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 226 ms_handle_reset con 0x55e267c81400 session 0x55e268142c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:51.478523+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 226 ms_handle_reset con 0x55e2668c0c00 session 0x55e2681b3880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 119013376 unmapped: 56557568 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:52.478658+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 56549376 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 226 heartbeat osd_stat(store_statfs(0x4fa1db000/0x0/0x4ffc00000, data 0x1b6556f/0x1caf000, compress 0x0/0x0/0x0, omap 0x2c636, meta 0x3d439ca), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1651610 data_alloc: 234881024 data_used: 10085823
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:53.478792+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 119021568 unmapped: 56549376 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.570174217s of 11.862899780s, submitted: 100
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 226 ms_handle_reset con 0x55e2661cf800 session 0x55e2680ec1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:54.478996+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 227 ms_handle_reset con 0x55e266799000 session 0x55e26817aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 120111104 unmapped: 55459840 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:55.479270+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 227 ms_handle_reset con 0x55e266798400 session 0x55e2663976c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 228 ms_handle_reset con 0x55e2668c0c00 session 0x55e2683b7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 54386688 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 228 handle_osd_map epochs [228,229], i have 228, src has [1,229]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:56.479401+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 121184256 unmapped: 54386688 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:57.479950+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 229 ms_handle_reset con 0x55e2693ca000 session 0x55e2683b6fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 121323520 unmapped: 54247424 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1661191 data_alloc: 234881024 data_used: 10086875
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:58.480133+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 229 handle_osd_map epochs [229,230], i have 229, src has [1,230]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 230 ms_handle_reset con 0x55e266798400 session 0x55e2681b2700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 121348096 unmapped: 54222848 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 230 heartbeat osd_stat(store_statfs(0x4fa1d4000/0x0/0x4ffc00000, data 0x1b6a7ce/0x1cb8000, compress 0x0/0x0/0x0, omap 0x2d42a, meta 0x3d42bd6), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:59.480851+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 230 handle_osd_map epochs [230,231], i have 230, src has [1,231]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 231 ms_handle_reset con 0x55e2661cf800 session 0x55e26573d880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 231 ms_handle_reset con 0x55e266799000 session 0x55e2680ec8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 231 ms_handle_reset con 0x55e267c81400 session 0x55e26b7616c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 54198272 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:00.481008+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 231 heartbeat osd_stat(store_statfs(0x4fa1c8000/0x0/0x4ffc00000, data 0x1b6e45a/0x1cc0000, compress 0x0/0x0/0x0, omap 0x2d9ab, meta 0x3d42655), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 121372672 unmapped: 54198272 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 231 ms_handle_reset con 0x55e2693ca000 session 0x55e267f69c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:01.481166+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 50675712 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:02.481274+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 232 ms_handle_reset con 0x55e2661cf800 session 0x55e267f2a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124895232 unmapped: 50675712 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1750742 data_alloc: 234881024 data_used: 10776173
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 232 handle_osd_map epochs [232,233], i have 232, src has [1,233]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 233 ms_handle_reset con 0x55e266798400 session 0x55e268100000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:03.481423+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 233 ms_handle_reset con 0x55e266799000 session 0x55e26818b880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 233 ms_handle_reset con 0x55e2668c0c00 session 0x55e26818aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 233 heartbeat osd_stat(store_statfs(0x4f9537000/0x0/0x4ffc00000, data 0x27fe084/0x2953000, compress 0x0/0x0/0x0, omap 0x2e1bf, meta 0x3d41e41), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 51150848 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:04.481560+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.505780220s of 10.817337036s, submitted: 144
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 233 ms_handle_reset con 0x55e267c81400 session 0x55e267fdf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124436480 unmapped: 51134464 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 233 ms_handle_reset con 0x55e2661cf800 session 0x55e267f2ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:05.481769+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 234 ms_handle_reset con 0x55e266798400 session 0x55e2681b2fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 234 heartbeat osd_stat(store_statfs(0x4f9530000/0x0/0x4ffc00000, data 0x27ffc92/0x2958000, compress 0x0/0x0/0x0, omap 0x2e3b8, meta 0x3d41c48), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124452864 unmapped: 51118080 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 235 ms_handle_reset con 0x55e266799000 session 0x55e26674ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:06.481924+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 235 ms_handle_reset con 0x55e2668c0c00 session 0x55e26573c1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 235 ms_handle_reset con 0x55e267c81400 session 0x55e26b761dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 51150848 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:07.482066+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 235 heartbeat osd_stat(store_statfs(0x4f952e000/0x0/0x4ffc00000, data 0x2801d20/0x295c000, compress 0x0/0x0/0x0, omap 0x2e829, meta 0x3d417d7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124420096 unmapped: 51150848 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1762346 data_alloc: 234881024 data_used: 10776173
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 235 handle_osd_map epochs [235,236], i have 235, src has [1,236]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 236 ms_handle_reset con 0x55e267c81400 session 0x55e2680ec540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:08.482271+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 236 ms_handle_reset con 0x55e2661cf800 session 0x55e2697a6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 51363840 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 236 handle_osd_map epochs [236,237], i have 236, src has [1,237]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:09.482462+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 237 ms_handle_reset con 0x55e266798400 session 0x55e26b760000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 237 ms_handle_reset con 0x55e266799000 session 0x55e2680ecc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 51363840 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 237 handle_osd_map epochs [237,238], i have 237, src has [1,238]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:10.482875+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 ms_handle_reset con 0x55e2668c0c00 session 0x55e2680ed500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 ms_handle_reset con 0x55e2661cf800 session 0x55e264087c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 ms_handle_reset con 0x55e266798400 session 0x55e2683be000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 ms_handle_reset con 0x55e266799000 session 0x55e2697a68c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 ms_handle_reset con 0x55e267c81400 session 0x55e267f861c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124198912 unmapped: 51372032 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 ms_handle_reset con 0x55e2693ca400 session 0x55e26573d500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 heartbeat osd_stat(store_statfs(0x4f9525000/0x0/0x4ffc00000, data 0x280670c/0x2963000, compress 0x0/0x0/0x0, omap 0x2f232, meta 0x3d40dce), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 239 ms_handle_reset con 0x55e2661cf800 session 0x55e265b8ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 239 ms_handle_reset con 0x55e266798400 session 0x55e26b760c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 239 ms_handle_reset con 0x55e266799000 session 0x55e267f87dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 239 ms_handle_reset con 0x55e267c81400 session 0x55e2659addc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cac00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:11.482988+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 239 ms_handle_reset con 0x55e2693cac00 session 0x55e2681b2540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124715008 unmapped: 50855936 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:12.483147+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 240 ms_handle_reset con 0x55e2693ca800 session 0x55e26674ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 51683328 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1824745 data_alloc: 234881024 data_used: 10778869
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:13.483352+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 123904000 unmapped: 51666944 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 240 handle_osd_map epochs [240,241], i have 240, src has [1,241]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:14.483492+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 51658752 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:15.483651+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.432790756s of 10.785444260s, submitted: 133
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 123953152 unmapped: 51617792 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 242 handle_osd_map epochs [242,243], i have 242, src has [1,243]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:16.483839+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 243 ms_handle_reset con 0x55e266798400 session 0x55e267d4a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124362752 unmapped: 51208192 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 243 heartbeat osd_stat(store_statfs(0x4f8c78000/0x0/0x4ffc00000, data 0x30adc93/0x3210000, compress 0x0/0x0/0x0, omap 0x30a29, meta 0x3d3f5d7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 244 ms_handle_reset con 0x55e2661cf800 session 0x55e267f2a540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c81400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:17.483960+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 244 ms_handle_reset con 0x55e2693cb000 session 0x55e26674a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 244 heartbeat osd_stat(store_statfs(0x4f8c49000/0x0/0x4ffc00000, data 0x30d9883/0x323d000, compress 0x0/0x0/0x0, omap 0x30e4f, meta 0x3d3f1b1), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 124444672 unmapped: 51126272 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1843730 data_alloc: 234881024 data_used: 11399277
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:18.484143+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129736704 unmapped: 45834240 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:19.484355+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 244 ms_handle_reset con 0x55e2693cb400 session 0x55e2683b6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129736704 unmapped: 45834240 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:20.484502+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 44785664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:21.484685+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 44785664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:22.484821+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 44785664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1897016 data_alloc: 234881024 data_used: 16276690
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f8c49000/0x0/0x4ffc00000, data 0x30dcfd7/0x3241000, compress 0x0/0x0/0x0, omap 0x3157f, meta 0x3d3ea81), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:23.484981+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 44752896 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:24.485134+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 44752896 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:25.485357+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 44752896 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 246 heartbeat osd_stat(store_statfs(0x4f8c49000/0x0/0x4ffc00000, data 0x30dcfd7/0x3241000, compress 0x0/0x0/0x0, omap 0x3157f, meta 0x3d3ea81), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 246 handle_osd_map epochs [247,247], i have 247, src has [1,247]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.179191589s of 10.745048523s, submitted: 221
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:26.485509+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 247 ms_handle_reset con 0x55e2661cf800 session 0x55e2674d6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130818048 unmapped: 44752896 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:27.485675+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 247 handle_osd_map epochs [247,248], i have 247, src has [1,248]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 248 ms_handle_reset con 0x55e266798400 session 0x55e26573cc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130859008 unmapped: 44711936 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1907583 data_alloc: 234881024 data_used: 16276690
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:28.485792+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f8c40000/0x0/0x4ffc00000, data 0x30e07d6/0x324a000, compress 0x0/0x0/0x0, omap 0x31b03, meta 0x3d3e4fd), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 248 ms_handle_reset con 0x55e2693ca800 session 0x55e2667dca80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131424256 unmapped: 44146688 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 248 heartbeat osd_stat(store_statfs(0x4f8c40000/0x0/0x4ffc00000, data 0x30e07d6/0x324a000, compress 0x0/0x0/0x0, omap 0x31b03, meta 0x3d3e4fd), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:29.485930+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 248 handle_osd_map epochs [248,249], i have 248, src has [1,249]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cbc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 249 ms_handle_reset con 0x55e2693cbc00 session 0x55e2680ed340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133455872 unmapped: 42115072 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:30.486297+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 249 handle_osd_map epochs [249,250], i have 249, src has [1,250]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 249 handle_osd_map epochs [250,250], i have 250, src has [1,250]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 250 ms_handle_reset con 0x55e2693cb800 session 0x55e26814c000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 250 ms_handle_reset con 0x55e2693cb000 session 0x55e267f86fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 43466752 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f87d6000/0x0/0x4ffc00000, data 0x3547f0e/0x36b4000, compress 0x0/0x0/0x0, omap 0x32415, meta 0x3d3dbeb), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:31.486413+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 250 ms_handle_reset con 0x55e2661cf800 session 0x55e267f2a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 250 heartbeat osd_stat(store_statfs(0x4f87d6000/0x0/0x4ffc00000, data 0x3547f0e/0x36b4000, compress 0x0/0x0/0x0, omap 0x32415, meta 0x3d3dbeb), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 250 ms_handle_reset con 0x55e266798400 session 0x55e26818a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132104192 unmapped: 43466752 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:32.486555+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132120576 unmapped: 43450368 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1957184 data_alloc: 234881024 data_used: 17423668
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:33.486702+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 43401216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e2693ca800 session 0x55e267d4b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:34.486870+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 43401216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:35.487092+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cbc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e2693cbc00 session 0x55e2680ed500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 43401216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:36.487258+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e2661cf800 session 0x55e26b760000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132169728 unmapped: 43401216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:37.487470+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 heartbeat osd_stat(store_statfs(0x4f87ce000/0x0/0x4ffc00000, data 0x354bb7f/0x36bc000, compress 0x0/0x0/0x0, omap 0x32d25, meta 0x3d3d2db), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 43368448 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e266798400 session 0x55e2674d61c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.643898964s of 11.924663544s, submitted: 148
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e2693ca800 session 0x55e268142380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1960587 data_alloc: 234881024 data_used: 17423668
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:38.487630+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132202496 unmapped: 43368448 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:39.487756+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e2693cb000 session 0x55e2662f9a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cbc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e2693cbc00 session 0x55e2659acfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 43327488 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:40.487898+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e2661cf800 session 0x55e2659ac700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 ms_handle_reset con 0x55e266798400 session 0x55e26b760c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 42975232 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 251 handle_osd_map epochs [251,252], i have 251, src has [1,252]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 252 heartbeat osd_stat(store_statfs(0x4f87d2000/0x0/0x4ffc00000, data 0x354bb0d/0x36ba000, compress 0x0/0x0/0x0, omap 0x32f02, meta 0x3d3d0fe), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:41.488064+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 42975232 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:42.488187+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132636672 unmapped: 42934272 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 253 ms_handle_reset con 0x55e2685d5400 session 0x55e265a05dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1969772 data_alloc: 234881024 data_used: 17472835
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 253 ms_handle_reset con 0x55e2668c0400 session 0x55e26814cc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:43.488326+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 253 heartbeat osd_stat(store_statfs(0x4f879f000/0x0/0x4ffc00000, data 0x357a13d/0x36eb000, compress 0x0/0x0/0x0, omap 0x333f7, meta 0x3d3cc09), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 42917888 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:44.488456+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 42917888 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:45.488621+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 42917888 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:46.488787+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132653056 unmapped: 42917888 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:47.488935+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 132808704 unmapped: 42762240 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1970050 data_alloc: 234881024 data_used: 17472737
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:48.489090+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.194381714s of 10.291630745s, submitted: 62
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 254 ms_handle_reset con 0x55e2668c0800 session 0x55e2674d6fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133857280 unmapped: 41713664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:49.489233+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f879a000/0x0/0x4ffc00000, data 0x357bd28/0x36f0000, compress 0x0/0x0/0x0, omap 0x337b5, meta 0x3d3c84b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133857280 unmapped: 41713664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 254 ms_handle_reset con 0x55e2661cf800 session 0x55e267f876c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:50.489361+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 254 ms_handle_reset con 0x55e266798400 session 0x55e26674ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 254 heartbeat osd_stat(store_statfs(0x4f879a000/0x0/0x4ffc00000, data 0x357bd28/0x36f0000, compress 0x0/0x0/0x0, omap 0x337b5, meta 0x3d3c84b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 254 ms_handle_reset con 0x55e2668c1c00 session 0x55e2663968c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133857280 unmapped: 41713664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 254 handle_osd_map epochs [254,255], i have 254, src has [1,255]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 255 ms_handle_reset con 0x55e2668c1800 session 0x55e26573c540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:51.489486+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 41697280 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:52.489666+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 255 heartbeat osd_stat(store_statfs(0x4f8797000/0x0/0x4ffc00000, data 0x357d7a7/0x36f3000, compress 0x0/0x0/0x0, omap 0x33b18, meta 0x3d3c4e8), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133890048 unmapped: 41680896 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 256 ms_handle_reset con 0x55e2668c0800 session 0x55e2697a7a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1979909 data_alloc: 234881024 data_used: 17473322
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:53.489824+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 41631744 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 257 ms_handle_reset con 0x55e2661cf800 session 0x55e267f2a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:54.489908+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134152192 unmapped: 41418752 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:55.490241+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 257 ms_handle_reset con 0x55e2668c1c00 session 0x55e2683befc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134234112 unmapped: 41336832 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 257 ms_handle_reset con 0x55e2685d5400 session 0x55e2667dd880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 257 ms_handle_reset con 0x55e2668c0400 session 0x55e2659acc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26a3b2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 257 ms_handle_reset con 0x55e26a3b2400 session 0x55e2683bf6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 257 ms_handle_reset con 0x55e267dce800 session 0x55e2683bfdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:56.490365+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134840320 unmapped: 40730624 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 257 handle_osd_map epochs [257,258], i have 257, src has [1,258]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:57.490505+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 258 ms_handle_reset con 0x55e2668c0400 session 0x55e267fdf340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 258 ms_handle_reset con 0x55e2661cf800 session 0x55e26573c700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 258 heartbeat osd_stat(store_statfs(0x4f8742000/0x0/0x4ffc00000, data 0x35ceb07/0x3748000, compress 0x0/0x0/0x0, omap 0x34800, meta 0x3d3b800), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 40599552 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 259 ms_handle_reset con 0x55e2668c1c00 session 0x55e26818b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 259 ms_handle_reset con 0x55e2685d5400 session 0x55e267f2ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 259 ms_handle_reset con 0x55e2693ca000 session 0x55e267f86a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2017510 data_alloc: 234881024 data_used: 18419663
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:58.490661+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 259 ms_handle_reset con 0x55e2685d5400 session 0x55e265b388c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.136558533s of 10.289398193s, submitted: 77
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135110656 unmapped: 40460288 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:59.490796+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 260 ms_handle_reset con 0x55e2661cf800 session 0x55e2662f8700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 260 ms_handle_reset con 0x55e2668c0400 session 0x55e26814cc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 40198144 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 260 handle_osd_map epochs [261,261], i have 260, src has [1,261]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 261 ms_handle_reset con 0x55e2668c1c00 session 0x55e267f2a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:00.490965+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 261 ms_handle_reset con 0x55e266798400 session 0x55e2697a7880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135471104 unmapped: 40099840 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:01.491120+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 261 ms_handle_reset con 0x55e266799000 session 0x55e267d4a8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 261 ms_handle_reset con 0x55e267c81400 session 0x55e2697a7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 261 handle_osd_map epochs [262,262], i have 261, src has [1,262]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127885312 unmapped: 47685632 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 ms_handle_reset con 0x55e2668c0400 session 0x55e266397a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 ms_handle_reset con 0x55e2661cf800 session 0x55e2681b2fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 ms_handle_reset con 0x55e2661cf800 session 0x55e26818aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 ms_handle_reset con 0x55e2668c1c00 session 0x55e2683be700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:02.491235+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 ms_handle_reset con 0x55e266798400 session 0x55e26573cfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f8efb000/0x0/0x4ffc00000, data 0x28a5d70/0x2a26000, compress 0x0/0x0/0x0, omap 0x36077, meta 0x3d39f89), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 47693824 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1893187 data_alloc: 218103808 data_used: 8369261
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:03.491369+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 47693824 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:04.491509+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f8efb000/0x0/0x4ffc00000, data 0x28a5d8f/0x2a26000, compress 0x0/0x0/0x0, omap 0x36077, meta 0x3d39f89), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 heartbeat osd_stat(store_statfs(0x4f8efb000/0x0/0x4ffc00000, data 0x28a5d8f/0x2a26000, compress 0x0/0x0/0x0, omap 0x36077, meta 0x3d39f89), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 47693824 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:05.491865+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 263 ms_handle_reset con 0x55e266799000 session 0x55e267d4b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 47693824 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:06.492147+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 264 ms_handle_reset con 0x55e2668c0400 session 0x55e26818b880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127877120 unmapped: 47693824 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 264 heartbeat osd_stat(store_statfs(0x4f945e000/0x0/0x4ffc00000, data 0x28a93c6/0x2a2c000, compress 0x0/0x0/0x0, omap 0x3668a, meta 0x3d39976), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:07.493418+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 264 handle_osd_map epochs [264,265], i have 264, src has [1,265]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 265 ms_handle_reset con 0x55e2661cf800 session 0x55e2667ddc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 265 ms_handle_reset con 0x55e266798400 session 0x55e26818ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 265 ms_handle_reset con 0x55e266799000 session 0x55e267d4b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 265 ms_handle_reset con 0x55e2668c0400 session 0x55e2680ec540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 47710208 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1899909 data_alloc: 218103808 data_used: 8369798
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:08.493569+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 47710208 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c1c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 265 ms_handle_reset con 0x55e2668c1c00 session 0x55e2683b6e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.383840561s of 10.614814758s, submitted: 136
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:09.493698+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 266 ms_handle_reset con 0x55e2661cf800 session 0x55e2683b7880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 266 ms_handle_reset con 0x55e266798400 session 0x55e268143880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127885312 unmapped: 47685632 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 266 ms_handle_reset con 0x55e266799000 session 0x55e267f2ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:10.545163+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 266 ms_handle_reset con 0x55e2668c0400 session 0x55e2659acc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 266 ms_handle_reset con 0x55e267fb3000 session 0x55e267f86a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 266 handle_osd_map epochs [266,267], i have 266, src has [1,267]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e2661cf800 session 0x55e266397180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 44056576 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:11.545316+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e266798400 session 0x55e267a701c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 44056576 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:12.545555+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f9456000/0x0/0x4ffc00000, data 0x28ae798/0x2a32000, compress 0x0/0x0/0x0, omap 0x3723d, meta 0x3d38dc3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 44056576 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904674 data_alloc: 234881024 data_used: 11777130
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:13.546007+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131514368 unmapped: 44056576 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e26cf90c00 session 0x55e265a048c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e26cf90800 session 0x55e267a71180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e269e65400 session 0x55e268142e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:14.546077+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e269e65000 session 0x55e267fdec40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f945b000/0x0/0x4ffc00000, data 0x28ae736/0x2a31000, compress 0x0/0x0/0x0, omap 0x3723d, meta 0x3d38dc3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e2661cf800 session 0x55e26817bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131604480 unmapped: 43966464 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:15.546242+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 heartbeat osd_stat(store_statfs(0x4f9011000/0x0/0x4ffc00000, data 0x2cf8736/0x2e7b000, compress 0x0/0x0/0x0, omap 0x37414, meta 0x3d38bec), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131604480 unmapped: 43966464 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:16.546727+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e2693ca800 session 0x55e267f87dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 ms_handle_reset con 0x55e2693cb000 session 0x55e267fdf6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 268 ms_handle_reset con 0x55e266798400 session 0x55e268142fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 268 ms_handle_reset con 0x55e266798400 session 0x55e26814d880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 44785664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:17.546917+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130785280 unmapped: 44785664 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1954946 data_alloc: 234881024 data_used: 11648071
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 269 ms_handle_reset con 0x55e2661cf800 session 0x55e268142700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:18.547127+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 270 ms_handle_reset con 0x55e2693ca800 session 0x55e268143c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 270 ms_handle_reset con 0x55e26cf90800 session 0x55e267f2b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130138112 unmapped: 45432832 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 270 ms_handle_reset con 0x55e26cf90c00 session 0x55e26814da40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 270 ms_handle_reset con 0x55e269e65400 session 0x55e2683bf340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 270 ms_handle_reset con 0x55e2661cf800 session 0x55e267d4aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:19.547288+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130154496 unmapped: 45416448 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 270 ms_handle_reset con 0x55e266798400 session 0x55e265a05dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:20.547444+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 270 handle_osd_map epochs [270,271], i have 270, src has [1,271]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.467450142s of 11.787580490s, submitted: 181
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130154496 unmapped: 45416448 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f8bf2000/0x0/0x4ffc00000, data 0x3110fa6/0x3298000, compress 0x0/0x0/0x0, omap 0x38198, meta 0x3d37e68), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:21.547613+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 ms_handle_reset con 0x55e2650d8400 session 0x55e26b760700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 ms_handle_reset con 0x55e2668c1000 session 0x55e267f86000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 ms_handle_reset con 0x55e266798400 session 0x55e2681016c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 ms_handle_reset con 0x55e2661cf800 session 0x55e2697a7a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 ms_handle_reset con 0x55e2650d8400 session 0x55e267a95dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130211840 unmapped: 45359104 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 ms_handle_reset con 0x55e2693ca800 session 0x55e2697a7880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:22.547789+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 heartbeat osd_stat(store_statfs(0x4f873c000/0x0/0x4ffc00000, data 0x35c7b73/0x374f000, compress 0x0/0x0/0x0, omap 0x38305, meta 0x3d37cfb), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 271 handle_osd_map epochs [271,272], i have 271, src has [1,272]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e26cf90800 session 0x55e2683befc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130211840 unmapped: 45359104 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e2650d8400 session 0x55e268100000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e2661cf800 session 0x55e267fde1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2000862 data_alloc: 234881024 data_used: 11535598
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:23.548131+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e266798400 session 0x55e268100380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e269e65400 session 0x55e265a7fc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693ca800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e2693ca800 session 0x55e26b761c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 48644096 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:24.548510+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 48644096 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e2661cf800 session 0x55e267f2aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 ms_handle_reset con 0x55e2650d8400 session 0x55e26814c8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:25.548746+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 273 ms_handle_reset con 0x55e266798400 session 0x55e2680ed6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 273 ms_handle_reset con 0x55e269e65400 session 0x55e2667ddc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 126951424 unmapped: 48619520 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:26.548902+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 274 ms_handle_reset con 0x55e268fb9800 session 0x55e26814d500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 274 ms_handle_reset con 0x55e268fb9000 session 0x55e267f87880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 274 ms_handle_reset con 0x55e2650d8400 session 0x55e26b760e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 274 ms_handle_reset con 0x55e2661cf800 session 0x55e265a05880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 274 handle_osd_map epochs [274,275], i have 274, src has [1,275]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128040960 unmapped: 47529984 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 275 ms_handle_reset con 0x55e266798400 session 0x55e26b761a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:27.549080+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 275 handle_osd_map epochs [275,276], i have 275, src has [1,276]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128040960 unmapped: 47529984 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 ms_handle_reset con 0x55e26cdc2c00 session 0x55e2667dddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 ms_handle_reset con 0x55e269e65400 session 0x55e266396a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:28.549201+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1837376 data_alloc: 218103808 data_used: 6885649
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 ms_handle_reset con 0x55e2661cf800 session 0x55e2662f9180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 ms_handle_reset con 0x55e2650d8400 session 0x55e2683bfc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 heartbeat osd_stat(store_statfs(0x4fa457000/0x0/0x4ffc00000, data 0x18aa290/0x1a33000, compress 0x0/0x0/0x0, omap 0x39839, meta 0x3d367c7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 47513600 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:29.549322+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 ms_handle_reset con 0x55e266798400 session 0x55e267d4a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 ms_handle_reset con 0x55e268fb9000 session 0x55e26814c1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 47513600 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:30.549589+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 47513600 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:31.549808+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 277 ms_handle_reset con 0x55e2650d8400 session 0x55e26674ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.650381088s of 10.994046211s, submitted: 187
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128057344 unmapped: 47513600 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:32.550000+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 278 ms_handle_reset con 0x55e2661cf800 session 0x55e2662f8a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127868928 unmapped: 47702016 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:33.550140+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1822248 data_alloc: 218103808 data_used: 6886246
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 278 ms_handle_reset con 0x55e268fb9000 session 0x55e2681b2380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127795200 unmapped: 47775744 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 278 handle_osd_map epochs [278,279], i have 278, src has [1,279]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 278 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:34.550310+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bd87800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 279 ms_handle_reset con 0x55e269e65400 session 0x55e2681b2e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 279 heartbeat osd_stat(store_statfs(0x4fa44d000/0x0/0x4ffc00000, data 0x18af5bb/0x1a3d000, compress 0x0/0x0/0x0, omap 0x3a4b6, meta 0x3d35b4a), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 47759360 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:35.550517+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 279 ms_handle_reset con 0x55e26bd87800 session 0x55e2697a6fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 280 ms_handle_reset con 0x55e266798400 session 0x55e2697a6e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 47759360 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:36.550704+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bd87800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 ms_handle_reset con 0x55e26bd87800 session 0x55e26814ce00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 ms_handle_reset con 0x55e2650d8400 session 0x55e26814c380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 47759360 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:37.550825+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 47759360 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 heartbeat osd_stat(store_statfs(0x4fa447000/0x0/0x4ffc00000, data 0x18b2bf2/0x1a43000, compress 0x0/0x0/0x0, omap 0x3ae04, meta 0x3d351fc), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:38.550980+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1831515 data_alloc: 218103808 data_used: 6886831
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661cf800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 ms_handle_reset con 0x55e268fb9000 session 0x55e264087340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 ms_handle_reset con 0x55e2661cf800 session 0x55e2659acfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 ms_handle_reset con 0x55e2650d8400 session 0x55e2667dd180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 ms_handle_reset con 0x55e266798400 session 0x55e26674bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 47759360 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 ms_handle_reset con 0x55e268fb9000 session 0x55e2683b6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:39.551095+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bd87800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 281 handle_osd_map epochs [281,282], i have 281, src has [1,282]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 282 ms_handle_reset con 0x55e26bd87800 session 0x55e2674d6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 282 ms_handle_reset con 0x55e26bc7bc00 session 0x55e267d4a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 282 ms_handle_reset con 0x55e269e65400 session 0x55e26573c1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 282 heartbeat osd_stat(store_statfs(0x4fa369000/0x0/0x4ffc00000, data 0x198f78e/0x1b21000, compress 0x0/0x0/0x0, omap 0x3b1fb, meta 0x3d34e05), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128131072 unmapped: 47439872 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:40.551249+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 128131072 unmapped: 47439872 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:41.551416+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 282 handle_osd_map epochs [282,283], i have 282, src has [1,283]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 46374912 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 283 heartbeat osd_stat(store_statfs(0x4fa364000/0x0/0x4ffc00000, data 0x199137e/0x1b24000, compress 0x0/0x0/0x0, omap 0x3b35d, meta 0x3d34ca3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:42.551657+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.955427170s of 10.439243317s, submitted: 88
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 284 ms_handle_reset con 0x55e2650d8400 session 0x55e2662f9a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 284 ms_handle_reset con 0x55e266798400 session 0x55e2683bfdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 46374912 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:43.551814+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1850687 data_alloc: 218103808 data_used: 6888029
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 285 ms_handle_reset con 0x55e268fb9000 session 0x55e2680eda40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bd87800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 285 ms_handle_reset con 0x55e26bd87800 session 0x55e2680ed6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129261568 unmapped: 46309376 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:44.551963+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 285 ms_handle_reset con 0x55e2650d8400 session 0x55e2681001c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 285 ms_handle_reset con 0x55e266798400 session 0x55e267fdfc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 285 heartbeat osd_stat(store_statfs(0x4fa35e000/0x0/0x4ffc00000, data 0x1994be8/0x1b2a000, compress 0x0/0x0/0x0, omap 0x3bcdc, meta 0x3d34324), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 46276608 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:45.552204+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 286 ms_handle_reset con 0x55e269e65400 session 0x55e26818ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 287 heartbeat osd_stat(store_statfs(0x4fa356000/0x0/0x4ffc00000, data 0x199683d/0x1b30000, compress 0x0/0x0/0x0, omap 0x3c365, meta 0x3d33c9b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 287 ms_handle_reset con 0x55e26cf90400 session 0x55e26674a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 46276608 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:46.552354+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 287 ms_handle_reset con 0x55e268fb9000 session 0x55e2681b2a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 46276608 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:47.552518+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129294336 unmapped: 46276608 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:48.552689+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1867225 data_alloc: 218103808 data_used: 6888756
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 287 ms_handle_reset con 0x55e2650d8400 session 0x55e267f87a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf91c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 288 ms_handle_reset con 0x55e26cf91c00 session 0x55e267a95dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 288 ms_handle_reset con 0x55e26cf90400 session 0x55e2659addc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 288 heartbeat osd_stat(store_statfs(0x4fa358000/0x0/0x4ffc00000, data 0x19984c7/0x1b34000, compress 0x0/0x0/0x0, omap 0x3c7d9, meta 0x3d33827), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129310720 unmapped: 46260224 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:49.552814+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 288 ms_handle_reset con 0x55e268fb8000 session 0x55e267fdfc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 289 heartbeat osd_stat(store_statfs(0x4fa353000/0x0/0x4ffc00000, data 0x199a07f/0x1b37000, compress 0x0/0x0/0x0, omap 0x3cb4d, meta 0x3d334b3), peers [0,1] op hist [1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 289 ms_handle_reset con 0x55e26cf90000 session 0x55e268100000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 289 ms_handle_reset con 0x55e266798400 session 0x55e2681b2c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 289 ms_handle_reset con 0x55e2650d8400 session 0x55e268142a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129318912 unmapped: 46252032 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:50.552956+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 289 ms_handle_reset con 0x55e268fb8000 session 0x55e26818b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 289 heartbeat osd_stat(store_statfs(0x4fa34d000/0x0/0x4ffc00000, data 0x199bc99/0x1b3b000, compress 0x0/0x0/0x0, omap 0x3cd16, meta 0x3d332ea), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf90400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf91c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 46235648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:51.553085+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e268fb9000 session 0x55e2680eda40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e26cf90400 session 0x55e268100540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e2650d8400 session 0x55e2683bfdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e266798400 session 0x55e267f681c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e268fb8000 session 0x55e2674d68c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e268fb9000 session 0x55e26b761dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e269e65000 session 0x55e268142700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2650d8400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 46235648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:52.553218+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:53.553348+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129335296 unmapped: 46235648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1892860 data_alloc: 218103808 data_used: 7635454
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.764011383s of 10.985102654s, submitted: 165
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 ms_handle_reset con 0x55e268fb8000 session 0x55e267f86fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 291 ms_handle_reset con 0x55e268fb9000 session 0x55e2697a7dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 291 heartbeat osd_stat(store_statfs(0x4fa24b000/0x0/0x4ffc00000, data 0x1a9f87e/0x1c41000, compress 0x0/0x0/0x0, omap 0x3d543, meta 0x3d32abd), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:54.553449+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 46219264 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 291 ms_handle_reset con 0x55e269e65400 session 0x55e2680ec540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:55.553597+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129351680 unmapped: 46219264 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 291 ms_handle_reset con 0x55e269e64000 session 0x55e2697a7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 291 ms_handle_reset con 0x55e266798400 session 0x55e26674a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 292 ms_handle_reset con 0x55e268fb8000 session 0x55e267a71dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:56.553718+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129368064 unmapped: 46202880 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 293 ms_handle_reset con 0x55e268fb9000 session 0x55e267f2a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 293 ms_handle_reset con 0x55e269e64000 session 0x55e265a05880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:57.553865+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129400832 unmapped: 46170112 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 294 ms_handle_reset con 0x55e26bc7b000 session 0x55e2659acfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7ac00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 294 ms_handle_reset con 0x55e26bc7ac00 session 0x55e2681b2380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 294 ms_handle_reset con 0x55e26bc7b400 session 0x55e26573c540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:58.554004+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 294 heartbeat osd_stat(store_statfs(0x4fa23d000/0x0/0x4ffc00000, data 0x1aa4a7d/0x1c4b000, compress 0x0/0x0/0x0, omap 0x3ddcd, meta 0x3d32233), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 46022656 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1913202 data_alloc: 218103808 data_used: 8693664
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:59.554156+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 46022656 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:00.554273+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 46022656 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 294 ms_handle_reset con 0x55e268fb8000 session 0x55e2683b6e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:01.554434+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129548288 unmapped: 46022656 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 295 ms_handle_reset con 0x55e269e64000 session 0x55e26674bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:02.554582+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 129589248 unmapped: 45981696 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 296 heartbeat osd_stat(store_statfs(0x4fa23b000/0x0/0x4ffc00000, data 0x1aa8403/0x1c4f000, compress 0x0/0x0/0x0, omap 0x3e8b1, meta 0x3d3174f), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 296 ms_handle_reset con 0x55e268fb9000 session 0x55e2683bf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:03.554705+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 130170880 unmapped: 45400064 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1932069 data_alloc: 218103808 data_used: 8689568
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.274160385s of 10.202681541s, submitted: 85
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:04.554853+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131719168 unmapped: 43851776 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:05.555028+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 131850240 unmapped: 43720704 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 297 heartbeat osd_stat(store_statfs(0x4f9d7c000/0x0/0x4ffc00000, data 0x1f5eb8f/0x2108000, compress 0x0/0x0/0x0, omap 0x3ee00, meta 0x3d31200), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:06.555184+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133390336 unmapped: 42180608 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:07.555359+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133398528 unmapped: 42172416 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 297 ms_handle_reset con 0x55e26bc7b000 session 0x55e265b388c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:08.555469+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133406720 unmapped: 42164224 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1961021 data_alloc: 234881024 data_used: 9652128
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:09.555643+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 38617088 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:10.555834+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 136953856 unmapped: 38617088 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 297 ms_handle_reset con 0x55e268fb8000 session 0x55e2667dd180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:11.556017+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135208960 unmapped: 40361984 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 298 ms_handle_reset con 0x55e26bc7b400 session 0x55e267f2a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 298 heartbeat osd_stat(store_statfs(0x4f8fdf000/0x0/0x4ffc00000, data 0x2cff7c5/0x2eab000, compress 0x0/0x0/0x0, omap 0x3f1bf, meta 0x3d30e41), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 298 ms_handle_reset con 0x55e269e64000 session 0x55e2683be700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 298 ms_handle_reset con 0x55e26cf91c00 session 0x55e2683bec40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 298 ms_handle_reset con 0x55e2650d8400 session 0x55e267fdf6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:12.560492+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 299 ms_handle_reset con 0x55e268fb9000 session 0x55e267d4a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134455296 unmapped: 41115648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 299 ms_handle_reset con 0x55e268fb8000 session 0x55e26674ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:13.561170+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134455296 unmapped: 41115648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2050678 data_alloc: 234881024 data_used: 9910274
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.547210217s of 10.038742065s, submitted: 167
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 ms_handle_reset con 0x55e269e64000 session 0x55e26814d500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:14.561394+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 ms_handle_reset con 0x55e26bc7b400 session 0x55e2697a6e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134324224 unmapped: 41246720 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cf91c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 ms_handle_reset con 0x55e26cf91c00 session 0x55e267f2ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:15.562005+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134324224 unmapped: 41246720 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:16.562187+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134324224 unmapped: 41246720 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 heartbeat osd_stat(store_statfs(0x4f8fb1000/0x0/0x4ffc00000, data 0x2d2beef/0x2ed9000, compress 0x0/0x0/0x0, omap 0x3f6a7, meta 0x3d30959), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 ms_handle_reset con 0x55e268fb9000 session 0x55e26814c700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 ms_handle_reset con 0x55e268fb8000 session 0x55e2683b7880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 ms_handle_reset con 0x55e26bc7b400 session 0x55e26817bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:17.562403+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 301 ms_handle_reset con 0x55e26bc7a800 session 0x55e2680ed6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134463488 unmapped: 41107456 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 301 ms_handle_reset con 0x55e26bc7a400 session 0x55e265b8a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 301 ms_handle_reset con 0x55e269e64000 session 0x55e267a95a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:18.562671+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 40960000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2051586 data_alloc: 234881024 data_used: 9910887
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:19.562837+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 40960000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 301 ms_handle_reset con 0x55e268fb8000 session 0x55e268142380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 302 ms_handle_reset con 0x55e268fb9000 session 0x55e26b7601c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:20.562989+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 40960000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 302 heartbeat osd_stat(store_statfs(0x4f8fa8000/0x0/0x4ffc00000, data 0x2d32697/0x2ee2000, compress 0x0/0x0/0x0, omap 0x3fdb9, meta 0x3d30247), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:21.563158+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 40960000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:22.563332+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 40960000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 302 handle_osd_map epochs [303,304], i have 302, src has [1,304]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 304 ms_handle_reset con 0x55e26bc7a400 session 0x55e2680edc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:23.563487+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134619136 unmapped: 40951808 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058168 data_alloc: 234881024 data_used: 9910887
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:24.563635+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134619136 unmapped: 40951808 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.785130501s of 10.868734360s, submitted: 58
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 304 ms_handle_reset con 0x55e26bc7a800 session 0x55e2680eda40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:25.563781+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 304 heartbeat osd_stat(store_statfs(0x4f8fa3000/0x0/0x4ffc00000, data 0x2d35d61/0x2ee9000, compress 0x0/0x0/0x0, omap 0x400ea, meta 0x3d2ff16), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134627328 unmapped: 40943616 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:26.563918+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134635520 unmapped: 40935424 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:27.564102+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 40919040 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 305 heartbeat osd_stat(store_statfs(0x4f8fa0000/0x0/0x4ffc00000, data 0x2d3796d/0x2eec000, compress 0x0/0x0/0x0, omap 0x40818, meta 0x3d2f7e8), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:28.564311+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134651904 unmapped: 40919040 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2064464 data_alloc: 234881024 data_used: 10023729
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 305 ms_handle_reset con 0x55e26bc7b400 session 0x55e26817a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:29.564483+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134660096 unmapped: 40910848 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 306 ms_handle_reset con 0x55e269e64000 session 0x55e268100000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 306 heartbeat osd_stat(store_statfs(0x4f8f9d000/0x0/0x4ffc00000, data 0x2d39517/0x2eed000, compress 0x0/0x0/0x0, omap 0x40bff, meta 0x3d2f401), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:30.564625+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134676480 unmapped: 40894464 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:31.564769+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 308 ms_handle_reset con 0x55e26bc7a400 session 0x55e2659ac540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134684672 unmapped: 40886272 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 308 ms_handle_reset con 0x55e26bc7a000 session 0x55e267f87dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:32.565197+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134684672 unmapped: 40886272 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f8f97000/0x0/0x4ffc00000, data 0x2d3ccfb/0x2ef1000, compress 0x0/0x0/0x0, omap 0x411d2, meta 0x3d2ee2e), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:33.565331+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134684672 unmapped: 40886272 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2070354 data_alloc: 234881024 data_used: 10023517
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:34.565584+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134692864 unmapped: 40878080 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:35.565818+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134692864 unmapped: 40878080 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f8f97000/0x0/0x4ffc00000, data 0x2d3ccfb/0x2ef1000, compress 0x0/0x0/0x0, omap 0x411d2, meta 0x3d2ee2e), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.848226547s of 11.219658852s, submitted: 105
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 308 ms_handle_reset con 0x55e266799400 session 0x55e2667dd880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 308 heartbeat osd_stat(store_statfs(0x4f8f9b000/0x0/0x4ffc00000, data 0x2d3ccfb/0x2ef1000, compress 0x0/0x0/0x0, omap 0x41264, meta 0x3d2ed9c), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:36.565952+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 40599552 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 309 ms_handle_reset con 0x55e269e64000 session 0x55e265a7fc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:37.566081+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135004160 unmapped: 40566784 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 310 ms_handle_reset con 0x55e26bc7a000 session 0x55e2697a7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 310 ms_handle_reset con 0x55e2668c0400 session 0x55e2683b6a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 310 ms_handle_reset con 0x55e26bc7a400 session 0x55e268143880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:38.566237+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135053312 unmapped: 40517632 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2086251 data_alloc: 234881024 data_used: 10186860
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:39.566417+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135053312 unmapped: 40517632 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 310 heartbeat osd_stat(store_statfs(0x4f7df1000/0x0/0x4ffc00000, data 0x2d40515/0x2ef9000, compress 0x0/0x0/0x0, omap 0x418cd, meta 0x4ece733), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 310 ms_handle_reset con 0x55e266799800 session 0x55e2697a6fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 310 ms_handle_reset con 0x55e266798c00 session 0x55e2683b61c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:40.566608+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 40722432 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 311 ms_handle_reset con 0x55e266799800 session 0x55e26814ce00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 311 ms_handle_reset con 0x55e269e64000 session 0x55e2674d7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:41.566837+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 40722432 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 311 ms_handle_reset con 0x55e2668c0400 session 0x55e268143a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 312 ms_handle_reset con 0x55e26bc7a000 session 0x55e265b388c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 312 ms_handle_reset con 0x55e26bc7b400 session 0x55e26814d880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 312 ms_handle_reset con 0x55e266798c00 session 0x55e2674d6fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 312 ms_handle_reset con 0x55e266799000 session 0x55e267fde000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:42.567098+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134856704 unmapped: 40714240 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 313 ms_handle_reset con 0x55e266799800 session 0x55e267f86700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 313 heartbeat osd_stat(store_statfs(0x4f7de8000/0x0/0x4ffc00000, data 0x2d44142/0x2f00000, compress 0x0/0x0/0x0, omap 0x41ef0, meta 0x4ece110), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:43.567391+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134864896 unmapped: 40706048 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2094096 data_alloc: 234881024 data_used: 10191655
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:44.567578+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 313 ms_handle_reset con 0x55e269e64000 session 0x55e266397c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134864896 unmapped: 40706048 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 314 ms_handle_reset con 0x55e266799000 session 0x55e267f87500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 314 ms_handle_reset con 0x55e266798c00 session 0x55e2659ac700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:45.567792+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 314 heartbeat osd_stat(store_statfs(0x4f7de1000/0x0/0x4ffc00000, data 0x2d478fa/0x2f07000, compress 0x0/0x0/0x0, omap 0x42791, meta 0x4ecd86f), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 40681472 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 314 ms_handle_reset con 0x55e266799800 session 0x55e2681b3dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7a400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 315 ms_handle_reset con 0x55e2668c0400 session 0x55e2683b6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 315 ms_handle_reset con 0x55e26bc7a400 session 0x55e267a701c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:46.567956+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.238942146s of 10.736113548s, submitted: 86
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 316 ms_handle_reset con 0x55e26bc7b400 session 0x55e26573c1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 41205760 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 316 ms_handle_reset con 0x55e266798c00 session 0x55e2681b28c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 316 heartbeat osd_stat(store_statfs(0x4f7de0000/0x0/0x4ffc00000, data 0x2d49607/0x2f0a000, compress 0x0/0x0/0x0, omap 0x42b2d, meta 0x4ecd4d3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:47.568120+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134365184 unmapped: 41205760 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 317 ms_handle_reset con 0x55e266799000 session 0x55e26b760000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:48.568228+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134389760 unmapped: 41181184 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2111160 data_alloc: 234881024 data_used: 10191655
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 318 ms_handle_reset con 0x55e266799800 session 0x55e268143dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 318 ms_handle_reset con 0x55e2668c0400 session 0x55e267f69340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:49.568410+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 40116224 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 318 ms_handle_reset con 0x55e2668c0800 session 0x55e267a95dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:50.568566+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135454720 unmapped: 40116224 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 318 ms_handle_reset con 0x55e2668c0400 session 0x55e267f868c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 318 ms_handle_reset con 0x55e269e65400 session 0x55e26b761500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 318 ms_handle_reset con 0x55e26bc7b800 session 0x55e26674ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:51.568695+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133799936 unmapped: 41771008 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 319 ms_handle_reset con 0x55e266799800 session 0x55e265b8b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 319 ms_handle_reset con 0x55e266799000 session 0x55e2662f8700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:52.568839+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 319 handle_osd_map epochs [319,320], i have 319, src has [1,320]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 320 ms_handle_reset con 0x55e26bc7b400 session 0x55e268101880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f8c8c000/0x0/0x4ffc00000, data 0x1e97dc3/0x2060000, compress 0x0/0x0/0x0, omap 0x43938, meta 0x4ecc6c8), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 41754624 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 320 ms_handle_reset con 0x55e266798c00 session 0x55e2667dddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 320 heartbeat osd_stat(store_statfs(0x4f8c87000/0x0/0x4ffc00000, data 0x1e9997b/0x2063000, compress 0x0/0x0/0x0, omap 0x43b13, meta 0x4ecc4ed), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:53.568978+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 41754624 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2031064 data_alloc: 218103808 data_used: 8865192
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:54.569196+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 41754624 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 320 handle_osd_map epochs [320,321], i have 320, src has [1,321]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 321 ms_handle_reset con 0x55e2668c0400 session 0x55e268100540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 321 ms_handle_reset con 0x55e2668c0800 session 0x55e2683b6a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:55.569536+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 41795584 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:56.569735+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.853372574s of 10.001564026s, submitted: 139
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133775360 unmapped: 41795584 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 322 ms_handle_reset con 0x55e266798c00 session 0x55e26b760c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:57.569942+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133799936 unmapped: 41771008 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 323 ms_handle_reset con 0x55e266799000 session 0x55e266396a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 323 ms_handle_reset con 0x55e26bc7b400 session 0x55e267f69880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:58.570086+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 323 heartbeat osd_stat(store_statfs(0x4f8c83000/0x0/0x4ffc00000, data 0x1e9e949/0x2067000, compress 0x0/0x0/0x0, omap 0x44b37, meta 0x4ecb4c9), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133750784 unmapped: 41820160 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2035507 data_alloc: 218103808 data_used: 8852904
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:59.570237+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 41811968 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 325 ms_handle_reset con 0x55e2668c0400 session 0x55e26674bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 325 ms_handle_reset con 0x55e26bc7b800 session 0x55e265a7fc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 325 ms_handle_reset con 0x55e269e65400 session 0x55e26573cfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 325 ms_handle_reset con 0x55e26bc7b800 session 0x55e26814c700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:00.570430+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 133816320 unmapped: 41754624 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:01.570595+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 325 handle_osd_map epochs [325,326], i have 325, src has [1,326]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 326 ms_handle_reset con 0x55e266798c00 session 0x55e2680ec000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 40615936 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 326 ms_handle_reset con 0x55e266799000 session 0x55e267a71180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:02.570772+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 326 heartbeat osd_stat(store_statfs(0x4f8c7e000/0x0/0x4ffc00000, data 0x1ea3a54/0x206c000, compress 0x0/0x0/0x0, omap 0x458d3, meta 0x4eca72d), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 40607744 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 326 handle_osd_map epochs [326,327], i have 326, src has [1,327]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:03.570912+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 40583168 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2048335 data_alloc: 234881024 data_used: 9684263
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 327 ms_handle_reset con 0x55e26bc7b400 session 0x55e2697a7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 327 ms_handle_reset con 0x55e2668c0400 session 0x55e2683befc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:04.571092+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 40837120 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:05.571258+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 40837120 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:06.571449+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134733824 unmapped: 40837120 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 327 heartbeat osd_stat(store_statfs(0x4f8c78000/0x0/0x4ffc00000, data 0x1ea568a/0x2070000, compress 0x0/0x0/0x0, omap 0x45c81, meta 0x4eca37f), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.422688484s of 10.630716324s, submitted: 165
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:07.571610+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 40820736 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:08.571805+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 40820736 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2050501 data_alloc: 234881024 data_used: 9684263
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:09.572003+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 40820736 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 328 ms_handle_reset con 0x55e266798c00 session 0x55e2662f9a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:10.572153+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 328 ms_handle_reset con 0x55e266799000 session 0x55e26814c8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 40820736 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 328 handle_osd_map epochs [328,329], i have 328, src has [1,329]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 329 ms_handle_reset con 0x55e269e65400 session 0x55e267d4b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 329 heartbeat osd_stat(store_statfs(0x4f8c73000/0x0/0x4ffc00000, data 0x1ea8ced/0x2077000, compress 0x0/0x0/0x0, omap 0x464a8, meta 0x4ec9b58), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:11.572298+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134750208 unmapped: 40820736 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 329 ms_handle_reset con 0x55e26bc7b800 session 0x55e2667dd880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:12.572460+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc7b800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 330 ms_handle_reset con 0x55e266798c00 session 0x55e2683b7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134782976 unmapped: 40787968 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 331 ms_handle_reset con 0x55e266799000 session 0x55e268142fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:13.572597+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 40779776 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2064419 data_alloc: 234881024 data_used: 9685465
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 332 ms_handle_reset con 0x55e2668c0400 session 0x55e2697a6a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 332 ms_handle_reset con 0x55e266798800 session 0x55e26817ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 332 ms_handle_reset con 0x55e26bc7b800 session 0x55e267f2b340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 332 ms_handle_reset con 0x55e269e65400 session 0x55e26674a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:14.572757+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 40763392 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:15.572931+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 40747008 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 333 ms_handle_reset con 0x55e266798800 session 0x55e268142a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:16.573084+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 333 ms_handle_reset con 0x55e266798c00 session 0x55e267f69180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 40747008 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 333 heartbeat osd_stat(store_statfs(0x4f8c65000/0x0/0x4ffc00000, data 0x1eb02b3/0x2085000, compress 0x0/0x0/0x0, omap 0x470ad, meta 0x4ec8f53), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.430857658s of 10.057489395s, submitted: 96
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 333 ms_handle_reset con 0x55e266799000 session 0x55e26814c700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2668c0400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:17.574292+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 134995968 unmapped: 40574976 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:18.574424+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 334 ms_handle_reset con 0x55e2668c0400 session 0x55e268100540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 40558592 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2077853 data_alloc: 234881024 data_used: 9685547
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:19.574532+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 334 ms_handle_reset con 0x55e266798800 session 0x55e26814d6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 136052736 unmapped: 39518208 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 334 heartbeat osd_stat(store_statfs(0x4f8c3d000/0x0/0x4ffc00000, data 0x1ed59dd/0x20ab000, compress 0x0/0x0/0x0, omap 0x47697, meta 0x4ec8969), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:20.574696+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 136052736 unmapped: 39518208 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 334 heartbeat osd_stat(store_statfs(0x4f8c42000/0x0/0x4ffc00000, data 0x1ed59cd/0x20aa000, compress 0x0/0x0/0x0, omap 0x47788, meta 0x4ec8878), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:21.574826+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 136052736 unmapped: 39518208 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:22.575110+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 38461440 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:23.575399+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 38461440 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2079407 data_alloc: 234881024 data_used: 9724955
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:24.575582+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 38461440 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:25.575726+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 38461440 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 335 heartbeat osd_stat(store_statfs(0x4f8c3d000/0x0/0x4ffc00000, data 0x1ed74a0/0x20ad000, compress 0x0/0x0/0x0, omap 0x47b39, meta 0x4ec84c7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:26.575834+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 38461440 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 335 handle_osd_map epochs [335,336], i have 335, src has [1,336]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:27.575945+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137109504 unmapped: 38461440 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 336 ms_handle_reset con 0x55e266798c00 session 0x55e265b8ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.092587471s of 10.831639290s, submitted: 66
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 336 ms_handle_reset con 0x55e266799000 session 0x55e2659ada40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:28.576100+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137117696 unmapped: 38453248 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2084747 data_alloc: 234881024 data_used: 9724955
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:29.577611+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 336 ms_handle_reset con 0x55e269e65400 session 0x55e2663961c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269672c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 336 ms_handle_reset con 0x55e269672c00 session 0x55e2659ac700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137134080 unmapped: 38436864 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:30.577729+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 337 ms_handle_reset con 0x55e266798800 session 0x55e267a95a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 337 heartbeat osd_stat(store_statfs(0x4f8c36000/0x0/0x4ffc00000, data 0x1edaabb/0x20b3000, compress 0x0/0x0/0x0, omap 0x48354, meta 0x4ec7cac), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138706944 unmapped: 36864000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:31.577867+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 338 heartbeat osd_stat(store_statfs(0x4f89ba000/0x0/0x4ffc00000, data 0x21566ab/0x2330000, compress 0x0/0x0/0x0, omap 0x4871f, meta 0x4ec78e1), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138706944 unmapped: 36864000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:32.577993+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138706944 unmapped: 36864000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:33.578112+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 338 ms_handle_reset con 0x55e266798c00 session 0x55e26814d340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138706944 unmapped: 36864000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2119867 data_alloc: 234881024 data_used: 11108379
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:34.578224+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 338 heartbeat osd_stat(store_statfs(0x4f89ba000/0x0/0x4ffc00000, data 0x21566ab/0x2330000, compress 0x0/0x0/0x0, omap 0x4871f, meta 0x4ec78e1), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138706944 unmapped: 36864000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:35.578445+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138706944 unmapped: 36864000 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 338 handle_osd_map epochs [338,339], i have 338, src has [1,339]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:36.578637+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 339 ms_handle_reset con 0x55e266799000 session 0x55e266397c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138821632 unmapped: 36749312 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269672c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 340 ms_handle_reset con 0x55e269672c00 session 0x55e2683b6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e65400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:37.578778+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 340 ms_handle_reset con 0x55e269e65400 session 0x55e26817b880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139288576 unmapped: 36282368 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 340 ms_handle_reset con 0x55e266798800 session 0x55e2674d7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:38.578899+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139288576 unmapped: 36282368 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2130467 data_alloc: 234881024 data_used: 11502180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 340 heartbeat osd_stat(store_statfs(0x4f89b3000/0x0/0x4ffc00000, data 0x2159d44/0x2337000, compress 0x0/0x0/0x0, omap 0x48cc6, meta 0x4ec733a), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:39.579056+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139288576 unmapped: 36282368 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:40.579304+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.051776886s of 12.381855011s, submitted: 51
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139288576 unmapped: 36282368 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 340 ms_handle_reset con 0x55e266799000 session 0x55e267f68c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:41.579593+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 340 ms_handle_reset con 0x55e266798c00 session 0x55e2659ad500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 36274176 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:42.579749+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139296768 unmapped: 36274176 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:43.579900+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269672c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e269672c00 session 0x55e26573dc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e269673c00 session 0x55e26817afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138731520 unmapped: 36839424 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2141925 data_alloc: 234881024 data_used: 11494086
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e269673c00 session 0x55e26818b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e266798c00 session 0x55e266397dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:44.580117+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269672c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e266799000 session 0x55e2667ddc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e269672c00 session 0x55e2683bfdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 heartbeat osd_stat(store_statfs(0x4f89af000/0x0/0x4ffc00000, data 0x21a190b/0x233d000, compress 0x0/0x0/0x0, omap 0x494ff, meta 0x4ec6b01), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138854400 unmapped: 36716544 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 heartbeat osd_stat(store_statfs(0x4f89af000/0x0/0x4ffc00000, data 0x21a190b/0x233d000, compress 0x0/0x0/0x0, omap 0x494ff, meta 0x4ec6b01), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:45.580260+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138895360 unmapped: 36675584 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e26bc78000 session 0x55e2659acfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:46.580407+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e267fb3000 session 0x55e26814ddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e2693cb000 session 0x55e26573c1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 ms_handle_reset con 0x55e266798c00 session 0x55e267a71340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269672c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 341 handle_osd_map epochs [341,342], i have 341, src has [1,342]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138919936 unmapped: 36651008 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 342 ms_handle_reset con 0x55e269673c00 session 0x55e268142380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 342 ms_handle_reset con 0x55e266799000 session 0x55e267fde380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 342 ms_handle_reset con 0x55e269672c00 session 0x55e2697a7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:47.580527+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 342 heartbeat osd_stat(store_statfs(0x4f89ce000/0x0/0x4ffc00000, data 0x217f4e5/0x231a000, compress 0x0/0x0/0x0, omap 0x49ba4, meta 0x4ec645c), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 342 ms_handle_reset con 0x55e266798c00 session 0x55e267a71dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 342 handle_osd_map epochs [342,343], i have 342, src has [1,343]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 343 ms_handle_reset con 0x55e267fb3000 session 0x55e2683b7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 36634624 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 343 ms_handle_reset con 0x55e2693cb000 session 0x55e26817ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:48.580665+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138936320 unmapped: 36634624 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 343 ms_handle_reset con 0x55e269673c00 session 0x55e267f69180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2146609 data_alloc: 234881024 data_used: 11401828
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:49.580857+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 343 handle_osd_map epochs [343,344], i have 343, src has [1,344]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138993664 unmapped: 36577280 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 ms_handle_reset con 0x55e266798c00 session 0x55e2680ec540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 ms_handle_reset con 0x55e267fb3000 session 0x55e2659ada40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 ms_handle_reset con 0x55e2693cb000 session 0x55e2659ac700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269672c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 ms_handle_reset con 0x55e269672c00 session 0x55e266397c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:50.733219+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 ms_handle_reset con 0x55e269673c00 session 0x55e26674a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139001856 unmapped: 36569088 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 heartbeat osd_stat(store_statfs(0x4f8c41000/0x0/0x4ffc00000, data 0x1f08d0f/0x20a9000, compress 0x0/0x0/0x0, omap 0x49e3e, meta 0x4ec61c2), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:51.733366+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 ms_handle_reset con 0x55e266798c00 session 0x55e268142700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.217051506s of 11.371119499s, submitted: 155
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139018240 unmapped: 36552704 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:52.733587+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 345 ms_handle_reset con 0x55e2693cb000 session 0x55e267fde000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 36544512 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269672c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:53.733761+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 139026432 unmapped: 36544512 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2138288 data_alloc: 234881024 data_used: 11328191
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:54.733911+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 345 handle_osd_map epochs [345,346], i have 345, src has [1,346]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 345 handle_osd_map epochs [346,346], i have 346, src has [1,346]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 346 ms_handle_reset con 0x55e267fb3000 session 0x55e267d4b6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 35495936 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:55.734112+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 346 heartbeat osd_stat(store_statfs(0x4f8c3e000/0x0/0x4ffc00000, data 0x1ec631c/0x20ac000, compress 0x0/0x0/0x0, omap 0x4a823, meta 0x4ec57dd), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140075008 unmapped: 35495936 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 346 handle_osd_map epochs [346,347], i have 346, src has [1,347]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:56.734304+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 347 heartbeat osd_stat(store_statfs(0x4f8c3e000/0x0/0x4ffc00000, data 0x1ec631c/0x20ac000, compress 0x0/0x0/0x0, omap 0x4a823, meta 0x4ec57dd), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140099584 unmapped: 35471360 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 347 ms_handle_reset con 0x55e269673c00 session 0x55e2663968c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 347 ms_handle_reset con 0x55e269672c00 session 0x55e267d4a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:57.734463+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 347 ms_handle_reset con 0x55e268fb8000 session 0x55e2683b6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 347 ms_handle_reset con 0x55e268fb9000 session 0x55e26818ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140107776 unmapped: 35463168 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 ms_handle_reset con 0x55e26bc78400 session 0x55e2674d7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:58.734592+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 ms_handle_reset con 0x55e266798c00 session 0x55e267d4bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 ms_handle_reset con 0x55e267fb3000 session 0x55e2659ad500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140140544 unmapped: 35430400 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2142803 data_alloc: 234881024 data_used: 11324095
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 ms_handle_reset con 0x55e266798c00 session 0x55e266397180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 ms_handle_reset con 0x55e267fb3000 session 0x55e26674ae00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 ms_handle_reset con 0x55e268fb8000 session 0x55e2667dc540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:59.735167+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 35414016 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:00.735382+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 heartbeat osd_stat(store_statfs(0x4f8c3c000/0x0/0x4ffc00000, data 0x1ec9a15/0x20ad000, compress 0x0/0x0/0x0, omap 0x4b209, meta 0x4ec4df7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 35414016 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:01.735536+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.701142311s of 10.068732262s, submitted: 118
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140156928 unmapped: 35414016 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 349 ms_handle_reset con 0x55e268fb9000 session 0x55e2680edc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:02.735780+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 35405824 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:03.735942+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2145181 data_alloc: 234881024 data_used: 11328042
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 140165120 unmapped: 35405824 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 349 heartbeat osd_stat(store_statfs(0x4f8c39000/0x0/0x4ffc00000, data 0x1ecb4c4/0x20b1000, compress 0x0/0x0/0x0, omap 0x4b6c9, meta 0x4ec4937), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:04.736110+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2693cb000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 349 ms_handle_reset con 0x55e2693cb000 session 0x55e265a04540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:05.736331+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 349 ms_handle_reset con 0x55e26bc78400 session 0x55e2674d6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:06.736496+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 350 heartbeat osd_stat(store_statfs(0x4f91dd000/0x0/0x4ffc00000, data 0x19294c4/0x1b0f000, compress 0x0/0x0/0x0, omap 0x4b6c9, meta 0x4ec4937), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 350 ms_handle_reset con 0x55e266798c00 session 0x55e265b38000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:07.736828+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 350 heartbeat osd_stat(store_statfs(0x4f91dd000/0x0/0x4ffc00000, data 0x19294c4/0x1b0f000, compress 0x0/0x0/0x0, omap 0x4b6c9, meta 0x4ec4937), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:08.737094+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2088141 data_alloc: 218103808 data_used: 7039546
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:09.737310+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 350 ms_handle_reset con 0x55e267fb3000 session 0x55e267fdec40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:10.737475+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 351 ms_handle_reset con 0x55e268fb8000 session 0x55e2659ac540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:11.737643+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 351 heartbeat osd_stat(store_statfs(0x4f91d5000/0x0/0x4ffc00000, data 0x192cc50/0x1b15000, compress 0x0/0x0/0x0, omap 0x4bc15, meta 0x4ec43eb), peers [0,1] op hist [1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 351 ms_handle_reset con 0x55e268fb9000 session 0x55e267f86a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.112931252s of 10.244342804s, submitted: 29
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 351 ms_handle_reset con 0x55e266798c00 session 0x55e2674d61c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:12.737778+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:13.737885+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 351 ms_handle_reset con 0x55e267fb3000 session 0x55e2674d68c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2093625 data_alloc: 218103808 data_used: 7039530
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:14.738025+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 352 handle_osd_map epochs [352,352], i have 352, src has [1,352]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 352 ms_handle_reset con 0x55e268fb8000 session 0x55e2681b2380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:15.738201+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 352 ms_handle_reset con 0x55e268fb9000 session 0x55e267f69340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 352 handle_osd_map epochs [352,353], i have 352, src has [1,353]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137527296 unmapped: 38043648 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 353 ms_handle_reset con 0x55e26bc78400 session 0x55e26b761c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:16.738348+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 353 ms_handle_reset con 0x55e266798c00 session 0x55e2662f8700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 353 ms_handle_reset con 0x55e267fb3000 session 0x55e2667dc1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 353 handle_osd_map epochs [353,354], i have 353, src has [1,354]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 ms_handle_reset con 0x55e268fb8000 session 0x55e2697a7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 ms_handle_reset con 0x55e268fb9000 session 0x55e265a7fc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 ms_handle_reset con 0x55e26bc78400 session 0x55e2683b7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 38019072 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f91cc000/0x0/0x4ffc00000, data 0x193045a/0x1b1c000, compress 0x0/0x0/0x0, omap 0x4c165, meta 0x4ec3e9b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 ms_handle_reset con 0x55e266798c00 session 0x55e268143c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:17.738549+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 38019072 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:18.738678+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2116161 data_alloc: 218103808 data_used: 7040728
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 38019072 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:19.738831+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 38019072 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:20.738997+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 ms_handle_reset con 0x55e267fb3000 session 0x55e2674d6540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137551872 unmapped: 38019072 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:21.739161+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 heartbeat osd_stat(store_statfs(0x4f90c3000/0x0/0x4ffc00000, data 0x1a3ae93/0x1c27000, compress 0x0/0x0/0x0, omap 0x4c475, meta 0x4ec3b8b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269673c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 355 ms_handle_reset con 0x55e269673c00 session 0x55e267f2aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137469952 unmapped: 38100992 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:22.739291+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137142272 unmapped: 38428672 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.899332047s of 11.061859131s, submitted: 71
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:23.739473+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 355 ms_handle_reset con 0x55e268fb8000 session 0x55e267f2b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 355 ms_handle_reset con 0x55e268fb9000 session 0x55e267d4bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2109506 data_alloc: 218103808 data_used: 7040728
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137158656 unmapped: 38412288 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e267fb3000 session 0x55e2659ad500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e26bc78800 session 0x55e2667dd880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e266798c00 session 0x55e2667dddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:24.739655+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137166848 unmapped: 38404096 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e268fb8000 session 0x55e2683bfa40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e267dce000 session 0x55e26818b880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e266798c00 session 0x55e2683b68c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e267dce000 session 0x55e26817bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:25.739923+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb3000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e268fb8000 session 0x55e26818ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 ms_handle_reset con 0x55e26bc78800 session 0x55e2667ddc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137175040 unmapped: 38395904 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 356 handle_osd_map epochs [356,357], i have 356, src has [1,357]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 357 ms_handle_reset con 0x55e267fb3000 session 0x55e2680ec000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:26.740110+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137191424 unmapped: 38379520 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 357 heartbeat osd_stat(store_statfs(0x4f91bf000/0x0/0x4ffc00000, data 0x1937520/0x1b2b000, compress 0x0/0x0/0x0, omap 0x4d21a, meta 0x4ec2de6), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:27.740396+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 357 heartbeat osd_stat(store_statfs(0x4f91bf000/0x0/0x4ffc00000, data 0x1937520/0x1b2b000, compress 0x0/0x0/0x0, omap 0x4d21a, meta 0x4ec2de6), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137191424 unmapped: 38379520 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 357 heartbeat osd_stat(store_statfs(0x4f91bf000/0x0/0x4ffc00000, data 0x1937520/0x1b2b000, compress 0x0/0x0/0x0, omap 0x4d21a, meta 0x4ec2de6), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 357 ms_handle_reset con 0x55e266798c00 session 0x55e2697a76c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 358 ms_handle_reset con 0x55e267dce000 session 0x55e2662f9a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:28.740638+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2125763 data_alloc: 218103808 data_used: 7041411
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137191424 unmapped: 38379520 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 358 ms_handle_reset con 0x55e268fb8000 session 0x55e267a71180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:29.740834+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26bc78800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 358 ms_handle_reset con 0x55e266884000 session 0x55e265a05880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137191424 unmapped: 38379520 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2686e7800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 359 ms_handle_reset con 0x55e26bc78800 session 0x55e2659ace00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 359 heartbeat osd_stat(store_statfs(0x4f91bc000/0x0/0x4ffc00000, data 0x19390d8/0x1b2e000, compress 0x0/0x0/0x0, omap 0x4d615, meta 0x4ec29eb), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 359 ms_handle_reset con 0x55e2686e7800 session 0x55e2683befc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:30.775997+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 359 ms_handle_reset con 0x55e266884000 session 0x55e266397c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137207808 unmapped: 38363136 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:31.776233+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137207808 unmapped: 38363136 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:32.776356+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137207808 unmapped: 38363136 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 359 handle_osd_map epochs [359,360], i have 359, src has [1,360]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.936744690s of 10.178790092s, submitted: 80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 360 ms_handle_reset con 0x55e266798c00 session 0x55e265a048c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:33.776515+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2131066 data_alloc: 218103808 data_used: 7041411
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137207808 unmapped: 38363136 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:34.776750+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 360 heartbeat osd_stat(store_statfs(0x4f91b7000/0x0/0x4ffc00000, data 0x193c620/0x1b33000, compress 0x0/0x0/0x0, omap 0x4dc97, meta 0x4ec2369), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137207808 unmapped: 38363136 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 360 handle_osd_map epochs [360,361], i have 360, src has [1,361]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e265be3c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 361 ms_handle_reset con 0x55e267dce000 session 0x55e26573c1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 361 ms_handle_reset con 0x55e265be3c00 session 0x55e268142a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:35.776998+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 137207808 unmapped: 38363136 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 362 ms_handle_reset con 0x55e266798c00 session 0x55e268142380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:36.777173+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 362 ms_handle_reset con 0x55e268fb8000 session 0x55e267d4b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 362 ms_handle_reset con 0x55e266884000 session 0x55e26818a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138223616 unmapped: 37347328 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 363 ms_handle_reset con 0x55e267dce000 session 0x55e26814c000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2686e7800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 363 ms_handle_reset con 0x55e2686e7800 session 0x55e2667dd500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:37.777371+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 37388288 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:38.777545+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2137398 data_alloc: 218103808 data_used: 7042295
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 37388288 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:39.777699+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 363 ms_handle_reset con 0x55e266798c00 session 0x55e2674d7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138182656 unmapped: 37388288 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 363 ms_handle_reset con 0x55e266884000 session 0x55e267a71880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:40.777869+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 363 handle_osd_map epochs [363,364], i have 363, src has [1,364]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 364 ms_handle_reset con 0x55e267dce000 session 0x55e26814d340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 364 heartbeat osd_stat(store_statfs(0x4f91ae000/0x0/0x4ffc00000, data 0x1943655/0x1b3c000, compress 0x0/0x0/0x0, omap 0x4e8cd, meta 0x4ec1733), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138190848 unmapped: 37380096 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:41.778022+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 364 ms_handle_reset con 0x55e268fb8000 session 0x55e266396380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26802a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138190848 unmapped: 37380096 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:42.778205+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 365 ms_handle_reset con 0x55e26802a000 session 0x55e2662f8a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138199040 unmapped: 37371904 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:43.778362+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2141222 data_alloc: 218103808 data_used: 6845687
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138199040 unmapped: 37371904 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:44.778512+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.822528839s of 11.847720146s, submitted: 110
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 365 ms_handle_reset con 0x55e266798c00 session 0x55e2683b6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138199040 unmapped: 37371904 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:45.778763+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 365 heartbeat osd_stat(store_statfs(0x4f91ae000/0x0/0x4ffc00000, data 0x1945253/0x1b3e000, compress 0x0/0x0/0x0, omap 0x4eb0c, meta 0x4ec14f4), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 365 ms_handle_reset con 0x55e266884000 session 0x55e26674bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138199040 unmapped: 37371904 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 365 ms_handle_reset con 0x55e267dce000 session 0x55e26814ddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:46.778899+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26802a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 365 ms_handle_reset con 0x55e26802a000 session 0x55e268142700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138207232 unmapped: 37363712 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:47.779168+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb8000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e268fb8000 session 0x55e267f87500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138223616 unmapped: 37347328 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:48.779330+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e266798c00 session 0x55e2683bfdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2154412 data_alloc: 218103808 data_used: 6845687
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138223616 unmapped: 37347328 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:49.779481+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e266884000 session 0x55e268143a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138240000 unmapped: 37330944 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:50.779635+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e267dce000 session 0x55e2659acc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26802a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e267fb2c00 session 0x55e2683b7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e26802a000 session 0x55e26573d340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138289152 unmapped: 37281792 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:51.779882+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f91a8000/0x0/0x4ffc00000, data 0x1946d54/0x1b44000, compress 0x0/0x0/0x0, omap 0x4f34c, meta 0x4ec0cb4), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138289152 unmapped: 37281792 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e266798c00 session 0x55e26817bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e266884000 session 0x55e267a71dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e267dce000 session 0x55e267f86380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e267fb2c00 session 0x55e267f69180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:52.780092+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138289152 unmapped: 37281792 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:53.780321+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e4800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e2685e4800 session 0x55e2667dd6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2151292 data_alloc: 218103808 data_used: 6845785
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138313728 unmapped: 37257216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e266884000 session 0x55e2659ada40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e266798c00 session 0x55e2659acc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:54.780470+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 heartbeat osd_stat(store_statfs(0x4f91aa000/0x0/0x4ffc00000, data 0x1946d34/0x1b42000, compress 0x0/0x0/0x0, omap 0x4f34c, meta 0x4ec0cb4), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138313728 unmapped: 37257216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.285158157s of 10.091743469s, submitted: 67
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e267dce000 session 0x55e2667dddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e267fb2c00 session 0x55e2663968c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e4800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 ms_handle_reset con 0x55e2685e4800 session 0x55e26814ddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:55.780635+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138313728 unmapped: 37257216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:56.780798+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138313728 unmapped: 37257216 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 366 handle_osd_map epochs [366,367], i have 366, src has [1,367]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:57.780932+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138330112 unmapped: 37240832 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:58.781103+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2154279 data_alloc: 218103808 data_used: 6845687
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138330112 unmapped: 37240832 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:59.781414+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 ms_handle_reset con 0x55e266798c00 session 0x55e266396380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f91a6000/0x0/0x4ffc00000, data 0x194886e/0x1b44000, compress 0x0/0x0/0x0, omap 0x4f8bd, meta 0x4ec0743), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138330112 unmapped: 37240832 heap: 175570944 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:00.781577+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dce000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 163577856 unmapped: 32989184 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:01.781857+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 151011328 unmapped: 45555712 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:02.782014+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138428416 unmapped: 58138624 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:03.782183+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2476030 data_alloc: 218103808 data_used: 6845687
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 146898944 unmapped: 49668096 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f59a6000/0x0/0x4ffc00000, data 0x5148932/0x5346000, compress 0x0/0x0/0x0, omap 0x4fb19, meta 0x4ec04e7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1,2])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:04.782369+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 146915328 unmapped: 49651712 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:05.782550+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.822368622s of 10.322426796s, submitted: 108
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 146915328 unmapped: 49651712 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:06.782711+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142721024 unmapped: 53846016 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f3da6000/0x0/0x4ffc00000, data 0x6d48932/0x6f46000, compress 0x0/0x0/0x0, omap 0x4fb19, meta 0x4ec04e7), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:07.782859+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 138526720 unmapped: 58040320 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:08.782994+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 heartbeat osd_stat(store_statfs(0x4f3da6000/0x0/0x4ffc00000, data 0x6d48932/0x6f46000, compress 0x0/0x0/0x0, omap 0x4fb19, meta 0x4ec04e7), peers [0,1] op hist [0,0,0,0,0,0,1,2])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2682262 data_alloc: 218103808 data_used: 6845687
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 146956288 unmapped: 49610752 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:09.783142+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142770176 unmapped: 53796864 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:10.783313+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 148111360 unmapped: 48455680 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:11.783575+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144039936 unmapped: 52527104 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:12.783724+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144121856 unmapped: 52445184 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 ms_handle_reset con 0x55e267fb2c00 session 0x55e266397180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:13.783890+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424804 data_alloc: 218103808 data_used: 6845687
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 148455424 unmapped: 48111616 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:14.784074+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 heartbeat osd_stat(store_statfs(0x4ea9a5000/0x0/0x4ffc00000, data 0x10148942/0x10347000, compress 0x0/0x0/0x0, omap 0x4fc9d, meta 0x4ec0363), peers [0,1] op hist [0,0,0,2,0,0,0,1,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 52166656 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:15.784284+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 1.313388705s of 10.003324509s, submitted: 208
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26835ec00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 ms_handle_reset con 0x55e268014c00 session 0x55e2680ed6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 145727488 unmapped: 50839552 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:16.784462+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 heartbeat osd_stat(store_statfs(0x4e79a5000/0x0/0x4ffc00000, data 0x13148942/0x13347000, compress 0x0/0x0/0x0, omap 0x5002e, meta 0x4ebffd2), peers [0,1] op hist [0,0,0,0,0,1,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 ms_handle_reset con 0x55e266884000 session 0x55e267f2b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 ms_handle_reset con 0x55e267dce000 session 0x55e2683b6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 367 handle_osd_map epochs [367,368], i have 367, src has [1,368]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 368 handle_osd_map epochs [368,368], i have 368, src has [1,368]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 368 ms_handle_reset con 0x55e266884000 session 0x55e26817a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142827520 unmapped: 53739520 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 368 ms_handle_reset con 0x55e266798c00 session 0x55e2667dd500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:17.784618+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142860288 unmapped: 53706752 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:18.807203+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 368 heartbeat osd_stat(store_statfs(0x4e5da1000/0x0/0x4ffc00000, data 0x14d4a47c/0x14f49000, compress 0x0/0x0/0x0, omap 0x50442, meta 0x4ebfbbe), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817500 data_alloc: 218103808 data_used: 6849799
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142860288 unmapped: 53706752 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:19.807369+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 368 handle_osd_map epochs [368,369], i have 368, src has [1,369]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 369 ms_handle_reset con 0x55e267fb2c00 session 0x55e26818b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 369 ms_handle_reset con 0x55e26835ec00 session 0x55e2680ed500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 369 ms_handle_reset con 0x55e268014c00 session 0x55e267fde700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142409728 unmapped: 54157312 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:20.807535+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 369 heartbeat osd_stat(store_statfs(0x4f5d9b000/0x0/0x4ffc00000, data 0x194c018/0x1b4c000, compress 0x0/0x0/0x0, omap 0x5075c, meta 0x4ebf8a4), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 369 ms_handle_reset con 0x55e266798c00 session 0x55e2674d7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142409728 unmapped: 54157312 heap: 196567040 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:21.807713+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 369 handle_osd_map epochs [369,370], i have 369, src has [1,370]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26835ec00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 370 ms_handle_reset con 0x55e26835ec00 session 0x55e2697a7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 370 ms_handle_reset con 0x55e267fb2c00 session 0x55e2683b6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171827200 unmapped: 28942336 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:22.807872+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142475264 unmapped: 58294272 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:23.808139+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2503776 data_alloc: 218103808 data_used: 6849799
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142491648 unmapped: 58277888 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:24.808301+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 370 ms_handle_reset con 0x55e268fb9800 session 0x55e2680ec000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 370 ms_handle_reset con 0x55e2685d4400 session 0x55e26674a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 155500544 unmapped: 45268992 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:25.809317+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.003605366s of 10.058958054s, submitted: 217
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 142942208 unmapped: 57827328 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 370 heartbeat osd_stat(store_statfs(0x4f059d000/0x0/0x4ffc00000, data 0xa54dc08/0xa74f000, compress 0x0/0x0/0x0, omap 0x50b79, meta 0x4ebf487), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:26.809616+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 370 handle_osd_map epochs [370,371], i have 370, src has [1,371]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 151412736 unmapped: 49356800 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:27.810137+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 147283968 unmapped: 53485568 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:28.810578+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3441638 data_alloc: 218103808 data_used: 6849799
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 143155200 unmapped: 57614336 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 371 ms_handle_reset con 0x55e2685d4400 session 0x55e267fde000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:29.810760+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 371 heartbeat osd_stat(store_statfs(0x4eb598000/0x0/0x4ffc00000, data 0xf54f687/0xf752000, compress 0x0/0x0/0x0, omap 0x50d3e, meta 0x4ebf2c2), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 143212544 unmapped: 57556992 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:30.810915+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 371 ms_handle_reset con 0x55e266798c00 session 0x55e2674d61c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 152444928 unmapped: 48324608 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 371 ms_handle_reset con 0x55e267fb2c00 session 0x55e267fdf6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:31.811144+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 371 ms_handle_reset con 0x55e2680d9000 session 0x55e267d4a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 371 ms_handle_reset con 0x55e266884000 session 0x55e2674d6700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144097280 unmapped: 56672256 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:32.811327+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 372 ms_handle_reset con 0x55e266798c00 session 0x55e267f69a40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144113664 unmapped: 56655872 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:33.811768+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 372 ms_handle_reset con 0x55e266884000 session 0x55e2674d6000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 372 heartbeat osd_stat(store_statfs(0x4e7729000/0x0/0x4ffc00000, data 0x133bd277/0x135c1000, compress 0x0/0x0/0x0, omap 0x50f29, meta 0x4ebf0d7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3789752 data_alloc: 218103808 data_used: 6849701
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144113664 unmapped: 56655872 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:34.811964+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 372 heartbeat osd_stat(store_statfs(0x4e7729000/0x0/0x4ffc00000, data 0x133bd277/0x135c1000, compress 0x0/0x0/0x0, omap 0x50eac, meta 0x4ebf154), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144113664 unmapped: 56655872 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:35.812307+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.066028595s of 10.188112259s, submitted: 125
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 372 ms_handle_reset con 0x55e2685d4400 session 0x55e26674bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144138240 unmapped: 56631296 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:36.812498+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 372 ms_handle_reset con 0x55e267fb2c00 session 0x55e26814d880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 372 handle_osd_map epochs [372,373], i have 372, src has [1,373]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26835ec00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 373 ms_handle_reset con 0x55e26835ec00 session 0x55e265b388c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 373 handle_osd_map epochs [373,374], i have 373, src has [1,374]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 ms_handle_reset con 0x55e268fb9800 session 0x55e26674a540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 ms_handle_reset con 0x55e2680d9000 session 0x55e2697a7180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144162816 unmapped: 56606720 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:37.812783+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144162816 unmapped: 56606720 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:38.813113+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3799376 data_alloc: 218103808 data_used: 6849701
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144162816 unmapped: 56606720 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:39.813288+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 heartbeat osd_stat(store_statfs(0x4e771f000/0x0/0x4ffc00000, data 0x133c0ac7/0x135c9000, compress 0x0/0x0/0x0, omap 0x51ad2, meta 0x4ebe52e), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144162816 unmapped: 56606720 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:40.813462+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 ms_handle_reset con 0x55e266798c00 session 0x55e2659ac540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 ms_handle_reset con 0x55e266884000 session 0x55e265a04540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 ms_handle_reset con 0x55e267fb2c00 session 0x55e268100000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 ms_handle_reset con 0x55e266798c00 session 0x55e26b761c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 ms_handle_reset con 0x55e266884000 session 0x55e2681b2e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 375 ms_handle_reset con 0x55e267fb2c00 session 0x55e2683bf500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144728064 unmapped: 56041472 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:41.813714+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 375 ms_handle_reset con 0x55e268fb9800 session 0x55e26573cc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144728064 unmapped: 56041472 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:42.813852+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2661d2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 376 ms_handle_reset con 0x55e2661d2c00 session 0x55e2662f8700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 376 ms_handle_reset con 0x55e26cdc2000 session 0x55e26b7601c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144760832 unmapped: 56008704 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:43.813980+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2685d4400 session 0x55e267d4b340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3849849 data_alloc: 218103808 data_used: 6849898
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2680d9000 session 0x55e2681b3dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144785408 unmapped: 55984128 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:44.814124+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 heartbeat osd_stat(store_statfs(0x4e7111000/0x0/0x4ffc00000, data 0x139c8e60/0x13bd7000, compress 0x0/0x0/0x0, omap 0x5250a, meta 0x4ebdaf6), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e266798c00 session 0x55e2674d6540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 heartbeat osd_stat(store_statfs(0x4e7111000/0x0/0x4ffc00000, data 0x139c8e60/0x13bd7000, compress 0x0/0x0/0x0, omap 0x5250a, meta 0x4ebdaf6), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 144007168 unmapped: 56762368 heap: 200769536 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:45.814314+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 heartbeat osd_stat(store_statfs(0x4e70eb000/0x0/0x4ffc00000, data 0x139f2e60/0x13c01000, compress 0x0/0x0/0x0, omap 0x5250a, meta 0x4ebdaf6), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.908807755s of 10.092564583s, submitted: 48
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d3c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2685d3c00 session 0x55e266397340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e269e64c00 session 0x55e268143dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 182181888 unmapped: 31186944 heap: 213368832 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:46.814495+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162463744 unmapped: 55107584 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:47.814688+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 heartbeat osd_stat(store_statfs(0x4e34eb000/0x0/0x4ffc00000, data 0x175f2e60/0x17801000, compress 0x0/0x0/0x0, omap 0x5250a, meta 0x4ebdaf6), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 146161664 unmapped: 71409664 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:48.814873+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4562067 data_alloc: 234881024 data_used: 9894968
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 146243584 unmapped: 71327744 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:49.815067+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 154853376 unmapped: 62717952 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:50.815276+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2680d9000 session 0x55e2683b61c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2685d4400 session 0x55e26814cc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e26cdc2000 session 0x55e26817bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fc9c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e268fc9c00 session 0x55e265a7fc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:51.815463+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 150953984 unmapped: 66617344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:52.815648+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161292288 unmapped: 56279040 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2680d9000 session 0x55e2680ecc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2685d4400 session 0x55e2697a6c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e269e64c00 session 0x55e2681b21c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:53.817548+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e26cdc2000 session 0x55e26674aa80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2687be000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2687be000 session 0x55e268142540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 151691264 unmapped: 65880064 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 heartbeat osd_stat(store_statfs(0x4d832c000/0x0/0x4ffc00000, data 0x227b0e70/0x229c0000, compress 0x0/0x0/0x0, omap 0x5250a, meta 0x4ebdaf6), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5207147 data_alloc: 234881024 data_used: 9894968
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:54.817984+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 151699456 unmapped: 65871872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2680d9000 session 0x55e2662f8a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:55.818218+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 151879680 unmapped: 65691648 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 heartbeat osd_stat(store_statfs(0x4d6b2c000/0x0/0x4ffc00000, data 0x23fb0e70/0x241c0000, compress 0x0/0x0/0x0, omap 0x5250a, meta 0x4ebdaf6), peers [0,1] op hist [0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 2.907626867s of 10.009019852s, submitted: 84
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e2685d4400 session 0x55e268143c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:56.818409+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 147832832 unmapped: 69738496 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e269e64c00 session 0x55e267f86a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:57.818768+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 152076288 unmapped: 65495040 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:58.818909+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 152076288 unmapped: 65495040 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5485911 data_alloc: 234881024 data_used: 9909304
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:59.819146+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 147996672 unmapped: 69574656 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 heartbeat osd_stat(store_statfs(0x4d472c000/0x0/0x4ffc00000, data 0x263b0e70/0x265c0000, compress 0x0/0x0/0x0, omap 0x5250a, meta 0x4ebdaf6), peers [0,1] op hist [0,0,0,0,0,1,7,3,4])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e266798c00 session 0x55e2697a7dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e268fb9800 session 0x55e26573c540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 ms_handle_reset con 0x55e26cdc2000 session 0x55e26814c000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:00.819278+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266798c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 65552384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 378 ms_handle_reset con 0x55e268fb9800 session 0x55e2674d7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:01.819403+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 153296896 unmapped: 64274432 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:02.819610+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157425664 unmapped: 60145664 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 378 heartbeat osd_stat(store_statfs(0x4d2dd9000/0x0/0x4ffc00000, data 0x26b599fe/0x26d69000, compress 0x0/0x0/0x0, omap 0x52666, meta 0x605d99a), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:03.819731+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 156057600 unmapped: 61513728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 379 ms_handle_reset con 0x55e2685d4400 session 0x55e267f2afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5601302 data_alloc: 234881024 data_used: 16897412
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:04.819856+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 156057600 unmapped: 61513728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:05.820021+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 156057600 unmapped: 61513728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 379 ms_handle_reset con 0x55e269e64c00 session 0x55e267a71180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 379 ms_handle_reset con 0x55e2685d2400 session 0x55e268142700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 379 heartbeat osd_stat(store_statfs(0x4d2de0000/0x0/0x4ffc00000, data 0x26b5b52a/0x26d6a000, compress 0x0/0x0/0x0, omap 0x52854, meta 0x605d7ac), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:06.820163+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157368320 unmapped: 60203008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.145124435s of 11.117520332s, submitted: 118
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 379 ms_handle_reset con 0x55e2685d2400 session 0x55e2681b2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 379 handle_osd_map epochs [379,380], i have 379, src has [1,380]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:07.820314+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 60162048 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:08.820447+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157409280 unmapped: 60162048 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 ms_handle_reset con 0x55e2685d4400 session 0x55e267a70380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 ms_handle_reset con 0x55e268fb9800 session 0x55e266396a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5627148 data_alloc: 234881024 data_used: 17400708
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:09.820573+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 ms_handle_reset con 0x55e269e64c00 session 0x55e267a71340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 ms_handle_reset con 0x55e26cdc2000 session 0x55e268143880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 158769152 unmapped: 58802176 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 heartbeat osd_stat(store_statfs(0x4d2a5b000/0x0/0x4ffc00000, data 0x26edeffe/0x270f1000, compress 0x0/0x0/0x0, omap 0x52e38, meta 0x605d1c8), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:10.820705+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 158769152 unmapped: 58802176 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 ms_handle_reset con 0x55e2685d2400 session 0x55e264087c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 heartbeat osd_stat(store_statfs(0x4d2a5b000/0x0/0x4ffc00000, data 0x26edf037/0x270f1000, compress 0x0/0x0/0x0, omap 0x52e38, meta 0x605d1c8), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:11.820860+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157573120 unmapped: 59998208 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 381 ms_handle_reset con 0x55e2685d4400 session 0x55e2662f9dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 381 heartbeat osd_stat(store_statfs(0x4d2a5b000/0x0/0x4ffc00000, data 0x26edf037/0x270f1000, compress 0x0/0x0/0x0, omap 0x52eca, meta 0x605d136), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 381 handle_osd_map epochs [381,382], i have 381, src has [1,382]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e268fb9800 session 0x55e26b7601c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e269e64c00 session 0x55e2667dcc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e26cdc2000 session 0x55e2683bec40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:12.821072+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157646848 unmapped: 59924480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:13.821211+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157646848 unmapped: 59924480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:14.821367+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5672652 data_alloc: 234881024 data_used: 17417677
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157646848 unmapped: 59924480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:15.821678+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 160473088 unmapped: 57098240 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:16.821784+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162504704 unmapped: 55066624 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.868557930s of 10.010899544s, submitted: 146
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 heartbeat osd_stat(store_statfs(0x4d1d9d000/0x0/0x4ffc00000, data 0x27b90e66/0x27da7000, compress 0x0/0x0/0x0, omap 0x534c5, meta 0x605cb3b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:17.821872+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 163012608 unmapped: 54558720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:18.822113+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 163012608 unmapped: 54558720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:19.822297+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5729690 data_alloc: 234881024 data_used: 18590157
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 163012608 unmapped: 54558720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:20.822434+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 163012608 unmapped: 54558720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 heartbeat osd_stat(store_statfs(0x4d1d19000/0x0/0x4ffc00000, data 0x27c14e66/0x27e2b000, compress 0x0/0x0/0x0, omap 0x534c5, meta 0x605cb3b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:21.822582+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 163012608 unmapped: 54558720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:22.822740+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161538048 unmapped: 56033280 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:23.822893+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161538048 unmapped: 56033280 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:24.823033+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e26cdc2000 session 0x55e26814d500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e2685d4400 session 0x55e2662f9c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5759407 data_alloc: 234881024 data_used: 18590669
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e268fb9800 session 0x55e2681b2380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e269e64c00 session 0x55e267a701c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 ms_handle_reset con 0x55e268fcb800 session 0x55e265a05340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161685504 unmapped: 55885824 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:25.823206+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161685504 unmapped: 55885824 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:26.823405+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 heartbeat osd_stat(store_statfs(0x4d16db000/0x0/0x4ffc00000, data 0x2825ae66/0x28471000, compress 0x0/0x0/0x0, omap 0x5366d, meta 0x605c993), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161685504 unmapped: 55885824 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 382 handle_osd_map epochs [382,383], i have 382, src has [1,383]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.801062584s of 10.001128197s, submitted: 47
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 383 ms_handle_reset con 0x55e268fcb800 session 0x55e26b761340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:27.823633+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161693696 unmapped: 55877632 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:28.823841+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161701888 unmapped: 55869440 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 383 heartbeat osd_stat(store_statfs(0x4d16d5000/0x0/0x4ffc00000, data 0x2825ca64/0x28475000, compress 0x0/0x0/0x0, omap 0x53aad, meta 0x605c553), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:29.824017+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5765369 data_alloc: 234881024 data_used: 18606541
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 161701888 unmapped: 55869440 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 383 heartbeat osd_stat(store_statfs(0x4d16d5000/0x0/0x4ffc00000, data 0x2825ca64/0x28475000, compress 0x0/0x0/0x0, omap 0x53aad, meta 0x605c553), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:30.824212+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 384 ms_handle_reset con 0x55e268fb9800 session 0x55e2683bf500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 384 ms_handle_reset con 0x55e2685d4400 session 0x55e2681016c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162103296 unmapped: 55468032 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:31.824348+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2667c7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 384 handle_osd_map epochs [384,385], i have 384, src has [1,385]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 385 ms_handle_reset con 0x55e269e64c00 session 0x55e267f87180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162398208 unmapped: 55173120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:32.824502+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 163831808 unmapped: 53739520 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 385 ms_handle_reset con 0x55e267da8800 session 0x55e2697a6e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:33.824629+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 52535296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:34.824762+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5799721 data_alloc: 234881024 data_used: 20796479
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165036032 unmapped: 52535296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:35.824911+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 385 heartbeat osd_stat(store_statfs(0x4d16a3000/0x0/0x4ffc00000, data 0x2828d1fe/0x284a9000, compress 0x0/0x0/0x0, omap 0x54043, meta 0x605bfbd), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165060608 unmapped: 52510720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 385 heartbeat osd_stat(store_statfs(0x4d16a3000/0x0/0x4ffc00000, data 0x2828d1fe/0x284a9000, compress 0x0/0x0/0x0, omap 0x54043, meta 0x605bfbd), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:36.825093+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165060608 unmapped: 52510720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.037415504s of 10.408497810s, submitted: 20
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:37.825269+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165068800 unmapped: 52502528 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 386 ms_handle_reset con 0x55e267da8800 session 0x55e267f2ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:38.825466+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165093376 unmapped: 52477952 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:39.825612+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5803343 data_alloc: 234881024 data_used: 20796479
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165134336 unmapped: 52436992 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 386 ms_handle_reset con 0x55e2685d4400 session 0x55e2674d7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 387 ms_handle_reset con 0x55e268fb9800 session 0x55e26818b500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:40.825759+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165175296 unmapped: 52396032 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:41.826297+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 387 ms_handle_reset con 0x55e268fcb800 session 0x55e26573c540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e265be3800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165322752 unmapped: 52248576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 387 heartbeat osd_stat(store_statfs(0x4d1696000/0x0/0x4ffc00000, data 0x2829698c/0x284b4000, compress 0x0/0x0/0x0, omap 0x542eb, meta 0x605bd15), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:42.826448+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165322752 unmapped: 52248576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:43.826596+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165339136 unmapped: 52232192 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:44.826734+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 388 ms_handle_reset con 0x55e265be3800 session 0x55e2667ddc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5825359 data_alloc: 234881024 data_used: 20855773
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 170098688 unmapped: 47472640 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:45.826907+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165543936 unmapped: 52027392 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:46.827162+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165543936 unmapped: 52027392 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:47.827326+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 4.752138615s of 10.232415199s, submitted: 103
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 167534592 unmapped: 50036736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 388 ms_handle_reset con 0x55e2685d2400 session 0x55e267d4bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 388 ms_handle_reset con 0x55e269e64c00 session 0x55e2662f8a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 388 heartbeat osd_stat(store_statfs(0x4d0ec7000/0x0/0x4ffc00000, data 0x28de9552/0x28c85000, compress 0x0/0x0/0x0, omap 0x54ba9, meta 0x605b457), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:48.827473+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165814272 unmapped: 51757056 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 heartbeat osd_stat(store_statfs(0x4d0e53000/0x0/0x4ffc00000, data 0x28e59fed/0x28cf7000, compress 0x0/0x0/0x0, omap 0x54d35, meta 0x605b2cb), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,2])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 ms_handle_reset con 0x55e266884000 session 0x55e265b8a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:49.827623+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 ms_handle_reset con 0x55e267fb2c00 session 0x55e26814c380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5891017 data_alloc: 234881024 data_used: 21011421
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 ms_handle_reset con 0x55e266798c00 session 0x55e267d4a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 ms_handle_reset con 0x55e2680d9000 session 0x55e265b8ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162045952 unmapped: 55525376 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 ms_handle_reset con 0x55e266884000 session 0x55e2681b3180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 ms_handle_reset con 0x55e267fb2c00 session 0x55e2659ac540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:50.845137+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 158097408 unmapped: 59473920 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:51.845282+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159154176 unmapped: 58417152 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:52.845400+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 389 handle_osd_map epochs [389,390], i have 389, src has [1,390]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 58408960 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:53.845560+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:54.845717+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5653115 data_alloc: 234881024 data_used: 10031565
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 390 heartbeat osd_stat(store_statfs(0x4d29a8000/0x0/0x4ffc00000, data 0x2711ba5c/0x26fb9000, compress 0x0/0x0/0x0, omap 0x551ed, meta 0x605ae13), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:55.845917+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 390 ms_handle_reset con 0x55e2685d2400 session 0x55e267f69180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:56.846101+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:57.847216+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e269e64c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 390 ms_handle_reset con 0x55e269e64c00 session 0x55e2659ada40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:58.847341+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.952049732s of 11.817655563s, submitted: 81
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:59.847535+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5650007 data_alloc: 234881024 data_used: 10023275
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 391 ms_handle_reset con 0x55e266884000 session 0x55e265a7fc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 391 ms_handle_reset con 0x55e267fb2c00 session 0x55e2697a68c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:00.847731+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 391 heartbeat osd_stat(store_statfs(0x4d2c12000/0x0/0x4ffc00000, data 0x2709a596/0x26f38000, compress 0x0/0x0/0x0, omap 0x55343, meta 0x605acbd), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159170560 unmapped: 58400768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:01.847879+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 393 ms_handle_reset con 0x55e2680d9000 session 0x55e267d4b340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 58384384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:02.848069+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 58384384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:03.848238+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159186944 unmapped: 58384384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:04.848365+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5657759 data_alloc: 234881024 data_used: 10023275
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 58376192 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:05.848538+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 58376192 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 393 heartbeat osd_stat(store_statfs(0x4d2c09000/0x0/0x4ffc00000, data 0x2709dda0/0x26f3f000, compress 0x0/0x0/0x0, omap 0x558e7, meta 0x605a719), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:06.848633+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159195136 unmapped: 58376192 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 393 ms_handle_reset con 0x55e2685d2400 session 0x55e2683b6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 393 ms_handle_reset con 0x55e267da8800 session 0x55e267a71880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:07.848798+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 159162368 unmapped: 58408960 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:08.848992+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 393 handle_osd_map epochs [393,394], i have 393, src has [1,394]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e266884000 session 0x55e268100380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e267fb2c00 session 0x55e268100a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e2680d9000 session 0x55e2667dddc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e2685d2400 session 0x55e2697a76c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e2685d4400 session 0x55e267fde8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 60293120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:09.849129+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5724715 data_alloc: 234881024 data_used: 10039659
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 60293120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:10.849262+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.266725540s of 11.499062538s, submitted: 43
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e26cdc2000 session 0x55e265a04540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e2667c7c00 session 0x55e2681b2e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e26cdc2400 session 0x55e267fde700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 157278208 unmapped: 60293120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e266884000 session 0x55e2697a7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:11.849386+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 155639808 unmapped: 61931520 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 heartbeat osd_stat(store_statfs(0x4d2666000/0x0/0x4ffc00000, data 0x2764381f/0x274e6000, compress 0x0/0x0/0x0, omap 0x55db5, meta 0x605a24b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:12.849524+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 155639808 unmapped: 61931520 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:13.849681+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 heartbeat osd_stat(store_statfs(0x4d2666000/0x0/0x4ffc00000, data 0x2764381f/0x274e6000, compress 0x0/0x0/0x0, omap 0x55db5, meta 0x605a24b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 155639808 unmapped: 61931520 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:14.849825+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5664903 data_alloc: 218103808 data_used: 7706987
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 155639808 unmapped: 61931520 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 heartbeat osd_stat(store_statfs(0x4d2664000/0x0/0x4ffc00000, data 0x27643892/0x274e8000, compress 0x0/0x0/0x0, omap 0x55db5, meta 0x605a24b), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:15.850000+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d9000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e2680d9000 session 0x55e267fdf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 155656192 unmapped: 61915136 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2667c7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:16.850119+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 155656192 unmapped: 61915136 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:17.850239+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 heartbeat osd_stat(store_statfs(0x4d2663000/0x0/0x4ffc00000, data 0x276438b5/0x274e9000, compress 0x0/0x0/0x0, omap 0x55d7d, meta 0x605a283), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 160645120 unmapped: 56926208 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:18.850544+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e267fb2c00 session 0x55e26814c8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 160645120 unmapped: 56926208 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:19.850698+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 ms_handle_reset con 0x55e26cdc2400 session 0x55e2683befc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5741800 data_alloc: 234881024 data_used: 20138347
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 160653312 unmapped: 56918016 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:20.850847+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.794401169s of 10.001276016s, submitted: 38
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162758656 unmapped: 54812672 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 395 ms_handle_reset con 0x55e2685d2400 session 0x55e2683bfa40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:21.851049+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 395 heartbeat osd_stat(store_statfs(0x4d2663000/0x0/0x4ffc00000, data 0x276438b5/0x274e9000, compress 0x0/0x0/0x0, omap 0x55d7d, meta 0x605a283), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162545664 unmapped: 55025664 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:22.851199+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162545664 unmapped: 55025664 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 395 handle_osd_map epochs [395,396], i have 395, src has [1,396]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:23.851374+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162545664 unmapped: 55025664 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:24.851732+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5835818 data_alloc: 234881024 data_used: 20138347
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 54951936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:25.851915+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 54951936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:26.852068+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 396 heartbeat osd_stat(store_statfs(0x4d1fd9000/0x0/0x4ffc00000, data 0x28254fed/0x27b71000, compress 0x0/0x0/0x0, omap 0x5689e, meta 0x6059762), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 54951936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:27.852252+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 54951936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:28.852442+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 396 heartbeat osd_stat(store_statfs(0x4d1fd9000/0x0/0x4ffc00000, data 0x28254fed/0x27b71000, compress 0x0/0x0/0x0, omap 0x5689e, meta 0x6059762), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 54951936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:29.852628+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5836202 data_alloc: 234881024 data_used: 20150635
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 162619392 unmapped: 54951936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 396 ms_handle_reset con 0x55e268fb9800 session 0x55e267f87500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:30.852770+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 165986304 unmapped: 51585024 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:31.852892+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.398292542s of 10.576494217s, submitted: 54
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 396 ms_handle_reset con 0x55e26cdc2000 session 0x55e26573c700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 169246720 unmapped: 48324608 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:32.853120+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 ms_handle_reset con 0x55e267fb2c00 session 0x55e2667dd880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 170557440 unmapped: 47013888 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 heartbeat osd_stat(store_statfs(0x4d1765000/0x0/0x4ffc00000, data 0x28acafed/0x283e7000, compress 0x0/0x0/0x0, omap 0x5689e, meta 0x6059762), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:34.201005+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 170704896 unmapped: 46866432 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 ms_handle_reset con 0x55e2685d2400 session 0x55e268142540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:35.201198+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5906408 data_alloc: 234881024 data_used: 21907835
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 ms_handle_reset con 0x55e268fb9800 session 0x55e267f68380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 169885696 unmapped: 47685632 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:36.201385+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178405376 unmapped: 39165952 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:37.201515+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178479104 unmapped: 39092224 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:38.201655+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 heartbeat osd_stat(store_statfs(0x4ceb54000/0x0/0x4ffc00000, data 0x2b6dbb7a/0x2aff8000, compress 0x0/0x0/0x0, omap 0x56cfd, meta 0x6059303), peers [0,1] op hist [0,0,0,0,0,1,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171139072 unmapped: 46432256 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 ms_handle_reset con 0x55e26cdc2400 session 0x55e2674d6fc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dab000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 heartbeat osd_stat(store_statfs(0x4ceb54000/0x0/0x4ffc00000, data 0x2b6dbb7a/0x2aff8000, compress 0x0/0x0/0x0, omap 0x56cfd, meta 0x6059303), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 ms_handle_reset con 0x55e267dab000 session 0x55e2659acc40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:39.201795+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171139072 unmapped: 46432256 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 398 handle_osd_map epochs [398,399], i have 398, src has [1,399]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:40.201928+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6107914 data_alloc: 234881024 data_used: 21907736
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 heartbeat osd_stat(store_statfs(0x4ceb4b000/0x0/0x4ffc00000, data 0x2b1531a3/0x2affd000, compress 0x0/0x0/0x0, omap 0x57230, meta 0x6058dd0), peers [0,1] op hist [0,0,0,0,1,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171204608 unmapped: 46366720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:41.207976+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 ms_handle_reset con 0x55e268fcb800 session 0x55e2680ed500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 ms_handle_reset con 0x55e267fb2c00 session 0x55e267fde700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171204608 unmapped: 46366720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:42.208129+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171204608 unmapped: 46366720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.421155930s of 10.885502815s, submitted: 144
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 ms_handle_reset con 0x55e2685d2400 session 0x55e2681b2e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:43.208263+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171204608 unmapped: 46366720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:44.208422+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171204608 unmapped: 46366720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:45.208532+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 ms_handle_reset con 0x55e268fb9800 session 0x55e2697a76c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6028602 data_alloc: 234881024 data_used: 21907736
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171204608 unmapped: 46366720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:46.208670+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 heartbeat osd_stat(store_statfs(0x4cf759000/0x0/0x4ffc00000, data 0x2a544b72/0x2a3ef000, compress 0x0/0x0/0x0, omap 0x572c6, meta 0x6058d3a), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 399 handle_osd_map epochs [400,400], i have 400, src has [1,400]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171212800 unmapped: 46358528 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcc400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:47.208852+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171212800 unmapped: 46358528 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:48.209031+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 400 ms_handle_reset con 0x55e267dcc400 session 0x55e2683b6380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 400 ms_handle_reset con 0x55e26cdc2400 session 0x55e2674d68c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171212800 unmapped: 46358528 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:49.209202+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171212800 unmapped: 46358528 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:50.209383+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6151352 data_alloc: 234881024 data_used: 21915928
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188096512 unmapped: 29474816 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:51.209573+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171327488 unmapped: 46243840 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:52.209776+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 401 heartbeat osd_stat(store_statfs(0x4ce354000/0x0/0x4ffc00000, data 0x2b9481b9/0x2b7f6000, compress 0x0/0x0/0x0, omap 0x57980, meta 0x6058680), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171335680 unmapped: 46235648 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 401 ms_handle_reset con 0x55e268fb9800 session 0x55e267f2ba40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:53.209984+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 401 heartbeat osd_stat(store_statfs(0x4ce354000/0x0/0x4ffc00000, data 0x2b9481b9/0x2b7f6000, compress 0x0/0x0/0x0, omap 0x57980, meta 0x6058680), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171335680 unmapped: 46235648 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.334987640s of 10.924365044s, submitted: 19
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:54.210142+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171343872 unmapped: 46227456 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 401 ms_handle_reset con 0x55e268fcb800 session 0x55e268100000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:55.210272+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6163398 data_alloc: 234881024 data_used: 21915928
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 401 heartbeat osd_stat(store_statfs(0x4cdf56000/0x0/0x4ffc00000, data 0x2bd481b9/0x2bbf6000, compress 0x0/0x0/0x0, omap 0x57980, meta 0x6058680), peers [0,1] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 175546368 unmapped: 42024960 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266469000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 401 ms_handle_reset con 0x55e266469000 session 0x55e2680ed6c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da9400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:56.210507+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 175587328 unmapped: 41984000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:57.210686+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179896320 unmapped: 37675008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:58.210806+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171540480 unmapped: 46030848 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 402 ms_handle_reset con 0x55e267da9400 session 0x55e2697a7dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:59.210978+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266469000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 45735936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:00.211125+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 403 heartbeat osd_stat(store_statfs(0x4c9752000/0x0/0x4ffc00000, data 0x30549d78/0x303fa000, compress 0x0/0x0/0x0, omap 0x57de6, meta 0x605821a), peers [0,1] op hist [0,0,0,0,0,0,1,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6616955 data_alloc: 234881024 data_used: 21915928
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 180379648 unmapped: 37191680 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:01.211248+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 403 ms_handle_reset con 0x55e268fcb800 session 0x55e267f876c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 176193536 unmapped: 41377792 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:02.211423+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 172064768 unmapped: 45506560 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:03.211552+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179200000 unmapped: 38371328 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 5.108457088s of 10.101754189s, submitted: 36
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:04.211681+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 404 heartbeat osd_stat(store_statfs(0x4c774a000/0x0/0x4ffc00000, data 0x3254d504/0x32400000, compress 0x0/0x0/0x0, omap 0x5808e, meta 0x6057f72), peers [0,1] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179380224 unmapped: 38191104 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:05.211820+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6886197 data_alloc: 251658240 data_used: 29326104
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 404 ms_handle_reset con 0x55e26cdc2400 session 0x55e26814c380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179511296 unmapped: 38060032 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:06.211965+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 404 handle_osd_map epochs [404,405], i have 404, src has [1,405]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 405 ms_handle_reset con 0x55e2685d5c00 session 0x55e267fdfa40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 405 heartbeat osd_stat(store_statfs(0x4c574d000/0x0/0x4ffc00000, data 0x3454d4a2/0x343ff000, compress 0x0/0x0/0x0, omap 0x5808e, meta 0x6057f72), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179601408 unmapped: 37969920 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:07.212073+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 405 heartbeat osd_stat(store_statfs(0x4c434a000/0x0/0x4ffc00000, data 0x3594f0a2/0x35802000, compress 0x0/0x0/0x0, omap 0x584f8, meta 0x6057b08), peers [0,1] op hist [0,0,0,0,0,1])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 405 ms_handle_reset con 0x55e266833400 session 0x55e267fdf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179871744 unmapped: 37699584 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:08.212379+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 405 handle_osd_map epochs [405,406], i have 405, src has [1,406]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 181059584 unmapped: 36511744 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:09.212530+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e2685d2400 session 0x55e2681b3180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e266833400 session 0x55e26814c8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e267fb2c00 session 0x55e2659ada40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 181248000 unmapped: 36323328 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e2685d2400 session 0x55e265b8a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e2685d5c00 session 0x55e267d4a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:10.212664+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7386455 data_alloc: 251658240 data_used: 29326104
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 heartbeat osd_stat(store_statfs(0x4c0745000/0x0/0x4ffc00000, data 0x3955117b/0x39405000, compress 0x0/0x0/0x0, omap 0x5864d, meta 0x60579b3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 181346304 unmapped: 36225024 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:11.212799+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 181444608 unmapped: 36126720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e2667c7c00 session 0x55e26818ac40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e266884000 session 0x55e2683b7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:12.213134+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e268fcb800 session 0x55e2683bfdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e266833400 session 0x55e2683bf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267fb2c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e2685d2400 session 0x55e267f86e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 172818432 unmapped: 44752896 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e267fb2c00 session 0x55e26b760000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:13.213279+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 ms_handle_reset con 0x55e2685d5c00 session 0x55e26674b180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 172851200 unmapped: 44720128 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:14.213400+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.627478123s of 10.748839378s, submitted: 170
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 38404096 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 408 ms_handle_reset con 0x55e266833400 session 0x55e2697a7340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:15.213520+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 408 ms_handle_reset con 0x55e266884000 session 0x55e267f861c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5986507 data_alloc: 234881024 data_used: 15985397
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179167232 unmapped: 38404096 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 408 heartbeat osd_stat(store_statfs(0x4d0a98000/0x0/0x4ffc00000, data 0x290dd32a/0x28f92000, compress 0x0/0x0/0x0, omap 0x58feb, meta 0x6057015), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:16.213733+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 181919744 unmapped: 35651584 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 408 heartbeat osd_stat(store_statfs(0x4d0a99000/0x0/0x4ffc00000, data 0x290dd31a/0x28f91000, compress 0x0/0x0/0x0, omap 0x58feb, meta 0x6057015), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:17.213864+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 409 ms_handle_reset con 0x55e268fcb800 session 0x55e265b8afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178208768 unmapped: 39362560 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:18.213963+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 410 ms_handle_reset con 0x55e2685d2400 session 0x55e2697a7180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 410 ms_handle_reset con 0x55e266833400 session 0x55e265a7fc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178249728 unmapped: 39321600 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:19.214107+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 411 ms_handle_reset con 0x55e266884000 session 0x55e2663968c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179306496 unmapped: 38264832 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:20.214229+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 412 ms_handle_reset con 0x55e2685d5c00 session 0x55e268100000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5950661 data_alloc: 234881024 data_used: 15166371
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 412 ms_handle_reset con 0x55e268fcb800 session 0x55e2681b2e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179331072 unmapped: 38240256 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 ms_handle_reset con 0x55e2685d2400 session 0x55e267f87340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:21.214684+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 ms_handle_reset con 0x55e266833400 session 0x55e26674bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 ms_handle_reset con 0x55e266884000 session 0x55e26b761c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 heartbeat osd_stat(store_statfs(0x4d35eb000/0x0/0x4ffc00000, data 0x25f22f3a/0x2615f000, compress 0x0/0x0/0x0, omap 0x5a1da, meta 0x6055e26), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179429376 unmapped: 38141952 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 ms_handle_reset con 0x55e2685d5c00 session 0x55e26674a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:22.214997+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179666944 unmapped: 37904384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 ms_handle_reset con 0x55e26cdc2400 session 0x55e26818afc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 ms_handle_reset con 0x55e266469000 session 0x55e266397dc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 414 ms_handle_reset con 0x55e268fb9800 session 0x55e2674d7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 414 ms_handle_reset con 0x55e267dcd000 session 0x55e265b38000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:23.215172+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 414 ms_handle_reset con 0x55e268fcb800 session 0x55e267f2a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 414 ms_handle_reset con 0x55e266833400 session 0x55e267f2a000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 414 ms_handle_reset con 0x55e266884000 session 0x55e26818a380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 39993344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:24.215326+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.038788795s of 10.047517776s, submitted: 495
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 414 ms_handle_reset con 0x55e266833400 session 0x55e265a7e8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 39993344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:25.215478+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3052496 data_alloc: 234881024 data_used: 15176402
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178626560 unmapped: 38944768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:26.215655+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178626560 unmapped: 38944768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:27.215785+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 416 ms_handle_reset con 0x55e266884000 session 0x55e2667dc1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 416 heartbeat osd_stat(store_statfs(0x4f49e8000/0x0/0x4ffc00000, data 0x4f265c0/0x5161000, compress 0x0/0x0/0x0, omap 0x5ab5d, meta 0x60554a3), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 38936576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:28.215962+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 416 ms_handle_reset con 0x55e267dcd000 session 0x55e2659ad180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 38936576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:29.216101+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 38936576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:30.216221+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 ms_handle_reset con 0x55e268fb9800 session 0x55e26b7608c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3064706 data_alloc: 234881024 data_used: 15180751
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f49dc000/0x0/0x4ffc00000, data 0x4f2b88f/0x516c000, compress 0x0/0x0/0x0, omap 0x5b409, meta 0x6054bf7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 38936576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fcb800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 ms_handle_reset con 0x55e268fcb800 session 0x55e2659acfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:31.216451+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 ms_handle_reset con 0x55e266833400 session 0x55e2667dc540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 38936576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 ms_handle_reset con 0x55e266884000 session 0x55e2667dda40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:32.216629+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 heartbeat osd_stat(store_statfs(0x4f49db000/0x0/0x4ffc00000, data 0x4f2b8e1/0x516c000, compress 0x0/0x0/0x0, omap 0x5b409, meta 0x6054bf7), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 ms_handle_reset con 0x55e267dcd000 session 0x55e26818bdc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 38936576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:33.216753+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 ms_handle_reset con 0x55e268fb9800 session 0x55e26818a700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178634752 unmapped: 38936576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 418 handle_osd_map epochs [418,419], i have 418, src has [1,419]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:34.216890+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 38912000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:35.217016+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3066826 data_alloc: 234881024 data_used: 15180751
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 38912000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:36.217173+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 419 ms_handle_reset con 0x55e2685d5c00 session 0x55e2683b7c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 419 ms_handle_reset con 0x55e266833400 session 0x55e26b760a80
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 38912000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:37.217305+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178659328 unmapped: 38912000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:38.217481+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 419 heartbeat osd_stat(store_statfs(0x4f49db000/0x0/0x4ffc00000, data 0x4f2d4d1/0x516f000, compress 0x0/0x0/0x0, omap 0x5b896, meta 0x605476a), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.368652344s of 13.731964111s, submitted: 42
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 38903808 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:39.217651+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 38903808 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:40.217807+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3069600 data_alloc: 234881024 data_used: 15180751
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 420 ms_handle_reset con 0x55e266884000 session 0x55e26818a1c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178667520 unmapped: 38903808 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:41.217985+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 420 ms_handle_reset con 0x55e267dcd000 session 0x55e2667dd500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:42.218115+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f49d9000/0x0/0x4ffc00000, data 0x4f2f05f/0x5171000, compress 0x0/0x0/0x0, omap 0x5b9e8, meta 0x6054618), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:43.218495+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:44.220549+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:45.221300+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 420 heartbeat osd_stat(store_statfs(0x4f49b7000/0x0/0x4ffc00000, data 0x4f5305f/0x5195000, compress 0x0/0x0/0x0, omap 0x5bc8b, meta 0x6054375), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3078178 data_alloc: 234881024 data_used: 15316959
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:46.221453+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:47.221691+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 421 heartbeat osd_stat(store_statfs(0x4f49b2000/0x0/0x4ffc00000, data 0x4f54ade/0x5198000, compress 0x0/0x0/0x0, omap 0x5be14, meta 0x60541ec), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:48.221900+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:49.222103+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:50.222256+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3078306 data_alloc: 234881024 data_used: 15338463
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:51.222411+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.758706093s of 13.789759636s, submitted: 35
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 421 ms_handle_reset con 0x55e26cdc2400 session 0x55e2683bf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:52.222608+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 422 ms_handle_reset con 0x55e267da8c00 session 0x55e26814c8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 422 heartbeat osd_stat(store_statfs(0x4f49b2000/0x0/0x4ffc00000, data 0x4f54ade/0x5198000, compress 0x0/0x0/0x0, omap 0x5beaa, meta 0x6054156), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:53.222734+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 422 handle_osd_map epochs [422,423], i have 422, src has [1,423]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 423 ms_handle_reset con 0x55e266889000 session 0x55e267f86c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 423 heartbeat osd_stat(store_statfs(0x4f49ac000/0x0/0x4ffc00000, data 0x4f58216/0x519e000, compress 0x0/0x0/0x0, omap 0x5c524, meta 0x6053adc), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 178978816 unmapped: 38592512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:54.222859+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 424 ms_handle_reset con 0x55e267da8c00 session 0x55e2667dcfc0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 424 ms_handle_reset con 0x55e266833400 session 0x55e26814d340
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179027968 unmapped: 38543360 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:55.222982+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 425 ms_handle_reset con 0x55e266884000 session 0x55e2683b6e00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3095703 data_alloc: 234881024 data_used: 15338463
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 179027968 unmapped: 38543360 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:56.223134+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 ms_handle_reset con 0x55e267dcd000 session 0x55e267f68700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 ms_handle_reset con 0x55e266833400 session 0x55e267f2bc00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 heartbeat osd_stat(store_statfs(0x4f499f000/0x0/0x4ffc00000, data 0x4f5b9e3/0x51a7000, compress 0x0/0x0/0x0, omap 0x5cba2, meta 0x605345e), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 184754176 unmapped: 32817152 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 ms_handle_reset con 0x55e266884000 session 0x55e2683b7880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:57.223262+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8c00
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 ms_handle_reset con 0x55e267da8c00 session 0x55e2683be540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 ms_handle_reset con 0x55e266889000 session 0x55e268100c40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 ms_handle_reset con 0x55e26cdc2400 session 0x55e2683bf180
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 ms_handle_reset con 0x55e26cdc2400 session 0x55e2667dda40
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 184754176 unmapped: 32817152 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:58.223363+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 427 heartbeat osd_stat(store_statfs(0x4f499d000/0x0/0x4ffc00000, data 0x4f5dac6/0x51ad000, compress 0x0/0x0/0x0, omap 0x5ccf6, meta 0x605330a), peers [0,1] op hist [])
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 427 ms_handle_reset con 0x55e267dcd000 session 0x55e2659ac540
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 427 ms_handle_reset con 0x55e266833400 session 0x55e26b7608c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 184778752 unmapped: 32792576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:59.223528+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 427 ms_handle_reset con 0x55e266884000 session 0x55e2697a7500
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 427 ms_handle_reset con 0x55e266889000 session 0x55e265a7e8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 184778752 unmapped: 32792576 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 427 ms_handle_reset con 0x55e266833400 session 0x55e267f2a8c0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:00.223707+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 427 handle_osd_map epochs [427,428], i have 427, src has [1,428]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 428 ms_handle_reset con 0x55e267dcd000 session 0x55e267fde700
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 428 ms_handle_reset con 0x55e266884000 session 0x55e26814d880
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:57 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:57 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3140956 data_alloc: 234881024 data_used: 23988719
Dec 13 04:35:57 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 184786944 unmapped: 32784384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:01.223880+0000)
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:57 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 429 ms_handle_reset con 0x55e26cdc2400 session 0x55e268100380
Dec 13 04:35:57 compute-0 ceph-osd[87731]: osd.2 429 ms_handle_reset con 0x55e266889000 session 0x55e26674a380
Dec 13 04:35:57 compute-0 nova_compute[243704]: 2025-12-13 04:35:57.994 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 429 heartbeat osd_stat(store_statfs(0x4f499b000/0x0/0x4ffc00000, data 0x4f60e5b/0x51af000, compress 0x0/0x0/0x0, omap 0x5d4a5, meta 0x6052b5b), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187146240 unmapped: 30425088 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:02.224093+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.449741364s of 10.763992310s, submitted: 70
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187146240 unmapped: 30425088 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:03.224302+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 429 handle_osd_map epochs [429,430], i have 429, src has [1,430]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 430 ms_handle_reset con 0x55e266833400 session 0x55e267f86e00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 430 ms_handle_reset con 0x55e266884000 session 0x55e268142a80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 430 ms_handle_reset con 0x55e266889000 session 0x55e267f68000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187170816 unmapped: 30400512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:04.224680+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 430 ms_handle_reset con 0x55e267dcd000 session 0x55e2697a7180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 430 heartbeat osd_stat(store_statfs(0x4f4997000/0x0/0x4ffc00000, data 0x4f647c4/0x51b3000, compress 0x0/0x0/0x0, omap 0x5db2c, meta 0x60524d4), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187170816 unmapped: 30400512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:05.224858+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 431 ms_handle_reset con 0x55e267da8c00 session 0x55e268100540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3162848 data_alloc: 251658240 data_used: 28183007
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187170816 unmapped: 30400512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:06.225118+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 432 heartbeat osd_stat(store_statfs(0x4f4991000/0x0/0x4ffc00000, data 0x4f66319/0x51b9000, compress 0x0/0x0/0x0, omap 0x5e2a2, meta 0x6051d5e), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 432 ms_handle_reset con 0x55e266884000 session 0x55e2697a68c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187170816 unmapped: 30400512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:07.225325+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 432 ms_handle_reset con 0x55e266889000 session 0x55e26573cfc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 432 handle_osd_map epochs [432,433], i have 432, src has [1,433]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 ms_handle_reset con 0x55e266833400 session 0x55e267fdee00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 ms_handle_reset con 0x55e267da8c00 session 0x55e267d4b500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 ms_handle_reset con 0x55e26cdc2400 session 0x55e2674d6380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 ms_handle_reset con 0x55e267dcd000 session 0x55e26817b500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 ms_handle_reset con 0x55e26cdc2400 session 0x55e26674a8c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187203584 unmapped: 30367744 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:08.225626+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187203584 unmapped: 30367744 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:09.225786+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 heartbeat osd_stat(store_statfs(0x4f4985000/0x0/0x4ffc00000, data 0x4f6a050/0x51c1000, compress 0x0/0x0/0x0, omap 0x5ecfc, meta 0x6051304), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 ms_handle_reset con 0x55e266833400 session 0x55e265a7e700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187203584 unmapped: 30367744 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:10.225936+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 433 handle_osd_map epochs [433,434], i have 433, src has [1,434]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3175091 data_alloc: 251658240 data_used: 28184005
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187219968 unmapped: 30351360 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:11.226148+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 435 ms_handle_reset con 0x55e266884000 session 0x55e2659acc40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 435 ms_handle_reset con 0x55e266889000 session 0x55e26818ac40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:12.226305+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:13.226456+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:14.226613+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:15.226753+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 435 heartbeat osd_stat(store_statfs(0x4f4984000/0x0/0x4ffc00000, data 0x4f6d703/0x51c6000, compress 0x0/0x0/0x0, omap 0x5f345, meta 0x6050cbb), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3176649 data_alloc: 251658240 data_used: 28184261
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:16.226930+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.619927406s of 13.839138985s, submitted: 120
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 436 ms_handle_reset con 0x55e266889000 session 0x55e26817bc00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:17.227191+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 437 ms_handle_reset con 0x55e267dcd000 session 0x55e267d4a8c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:18.227421+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:19.227611+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 438 ms_handle_reset con 0x55e266884000 session 0x55e26818a540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc2400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 438 ms_handle_reset con 0x55e266833400 session 0x55e2681b28c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 438 ms_handle_reset con 0x55e26cdc2400 session 0x55e265b8afc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187228160 unmapped: 30343168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.6 total, 600.0 interval
                                           Cumulative writes: 20K writes, 84K keys, 20K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
                                           Cumulative WAL: 20K writes, 7076 syncs, 2.88 writes per sync, written: 0.06 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 10K writes, 44K keys, 10K commit groups, 1.0 writes per commit group, ingest: 29.45 MB, 0.05 MB/s
                                           Interval WAL: 10K writes, 4189 syncs, 2.43 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:20.227733+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 438 heartbeat osd_stat(store_statfs(0x4f4978000/0x0/0x4ffc00000, data 0x4f729a8/0x51d0000, compress 0x0/0x0/0x0, omap 0x60018, meta 0x604ffe8), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3188732 data_alloc: 251658240 data_used: 28184261
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 438 handle_osd_map epochs [438,439], i have 438, src has [1,439]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187236352 unmapped: 30334976 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:21.227884+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 439 handle_osd_map epochs [439,440], i have 439, src has [1,440]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 440 ms_handle_reset con 0x55e266833400 session 0x55e267fde380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:22.228085+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187260928 unmapped: 30310400 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 440 ms_handle_reset con 0x55e266884000 session 0x55e2659ad180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 440 heartbeat osd_stat(store_statfs(0x4f4973000/0x0/0x4ffc00000, data 0x4f75fb5/0x51d5000, compress 0x0/0x0/0x0, omap 0x608db, meta 0x604f725), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:23.228276+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187260928 unmapped: 30310400 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 440 ms_handle_reset con 0x55e2685d5c00 session 0x55e2680ec000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:24.228527+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187195392 unmapped: 30375936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 440 ms_handle_reset con 0x55e268fb9800 session 0x55e268142700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267dcd000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 440 ms_handle_reset con 0x55e267dcd000 session 0x55e268100000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 441 ms_handle_reset con 0x55e266889000 session 0x55e267f69180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:25.228725+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187236352 unmapped: 30334976 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 441 heartbeat osd_stat(store_statfs(0x4f4976000/0x0/0x4ffc00000, data 0x4f52017/0x51b2000, compress 0x0/0x0/0x0, omap 0x608db, meta 0x604f725), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 441 ms_handle_reset con 0x55e266884000 session 0x55e267fdf880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 441 ms_handle_reset con 0x55e266833400 session 0x55e26818a380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 441 ms_handle_reset con 0x55e268fb9800 session 0x55e267f86380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3193149 data_alloc: 251658240 data_used: 28383600
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 442 ms_handle_reset con 0x55e267da8c00 session 0x55e2697a6e00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:26.228905+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 186974208 unmapped: 30597120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 442 ms_handle_reset con 0x55e2685d5c00 session 0x55e2683be540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.603461266s of 10.023947716s, submitted: 79
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 442 ms_handle_reset con 0x55e266884000 session 0x55e2697a6540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 442 ms_handle_reset con 0x55e266833400 session 0x55e267a70380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f498f000/0x0/0x4ffc00000, data 0x4f557e9/0x51b9000, compress 0x0/0x0/0x0, omap 0x60ab8, meta 0x604f548), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:27.229182+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 30547968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 442 heartbeat osd_stat(store_statfs(0x4f498f000/0x0/0x4ffc00000, data 0x4f557e9/0x51b9000, compress 0x0/0x0/0x0, omap 0x60ab8, meta 0x604f548), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:28.229354+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 30547968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 442 handle_osd_map epochs [443,444], i have 442, src has [1,444]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268fb9800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:29.229512+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187056128 unmapped: 30515200 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 444 ms_handle_reset con 0x55e266889000 session 0x55e267f87dc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 444 ms_handle_reset con 0x55e268fb9800 session 0x55e2680ed500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 444 ms_handle_reset con 0x55e266833400 session 0x55e2659ace00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:30.229666+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187072512 unmapped: 30498816 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 444 ms_handle_reset con 0x55e266884000 session 0x55e267f2bc00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 444 ms_handle_reset con 0x55e266889000 session 0x55e2663968c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 444 ms_handle_reset con 0x55e2685d5c00 session 0x55e268143880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26cdc5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3201070 data_alloc: 251658240 data_used: 28383666
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 445 heartbeat osd_stat(store_statfs(0x4f498e000/0x0/0x4ffc00000, data 0x4f58fa1/0x51be000, compress 0x0/0x0/0x0, omap 0x60ff8, meta 0x604f008), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 445 ms_handle_reset con 0x55e26cdc5c00 session 0x55e267fdf500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:31.229843+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187146240 unmapped: 30425088 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 445 ms_handle_reset con 0x55e266833400 session 0x55e267a71180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 445 ms_handle_reset con 0x55e266889000 session 0x55e267f86fc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:32.229983+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187015168 unmapped: 30556160 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 446 ms_handle_reset con 0x55e266884000 session 0x55e2683bee00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 446 ms_handle_reset con 0x55e2685d5c00 session 0x55e2683befc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2687bec00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 446 ms_handle_reset con 0x55e2687bec00 session 0x55e2683b61c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 446 ms_handle_reset con 0x55e266833400 session 0x55e2681b2a80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 446 handle_osd_map epochs [446,447], i have 446, src has [1,447]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:33.230154+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187015168 unmapped: 30556160 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 447 ms_handle_reset con 0x55e266884000 session 0x55e2683b6380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 447 ms_handle_reset con 0x55e266889000 session 0x55e267fdfdc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:34.230368+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187015168 unmapped: 30556160 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:35.230523+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187015168 unmapped: 30556160 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 448 ms_handle_reset con 0x55e2685d5c00 session 0x55e2683befc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2687bec00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 448 ms_handle_reset con 0x55e2687bec00 session 0x55e267fdf500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 448 heartbeat osd_stat(store_statfs(0x4f4985000/0x0/0x4ffc00000, data 0x4f5fa90/0x51c5000, compress 0x0/0x0/0x0, omap 0x61c9e, meta 0x604e362), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3209323 data_alloc: 251658240 data_used: 28385505
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:36.230712+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187015168 unmapped: 30556160 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 448 ms_handle_reset con 0x55e266884000 session 0x55e2683b6540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 449 ms_handle_reset con 0x55e2685d5c00 session 0x55e267f2ae00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.808993340s of 10.033959389s, submitted: 138
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:37.230892+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187031552 unmapped: 30539776 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f4986000/0x0/0x4ffc00000, data 0x4f5fa2e/0x51c4000, compress 0x0/0x0/0x0, omap 0x61c9e, meta 0x604e362), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 450 ms_handle_reset con 0x55e266889000 session 0x55e267f2a700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 450 ms_handle_reset con 0x55e266833400 session 0x55e26818b6c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:38.231193+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187047936 unmapped: 30523392 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:39.231380+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187047936 unmapped: 30523392 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d3c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:40.231508+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 450 ms_handle_reset con 0x55e2685d3c00 session 0x55e26818a380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187047936 unmapped: 30523392 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3214317 data_alloc: 251658240 data_used: 28385309
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:41.231704+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187047936 unmapped: 30523392 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 450 heartbeat osd_stat(store_statfs(0x4f497d000/0x0/0x4ffc00000, data 0x4f63234/0x51c9000, compress 0x0/0x0/0x0, omap 0x622be, meta 0x604dd42), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 451 ms_handle_reset con 0x55e266833400 session 0x55e2697a6380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 451 handle_osd_map epochs [451,452], i have 451, src has [1,452]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:42.231891+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187064320 unmapped: 30507008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:43.232013+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187064320 unmapped: 30507008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 452 heartbeat osd_stat(store_statfs(0x4f4977000/0x0/0x4ffc00000, data 0x4f66971/0x51d1000, compress 0x0/0x0/0x0, omap 0x62c9d, meta 0x604d363), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 452 ms_handle_reset con 0x55e266889000 session 0x55e268142e00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:44.232105+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187064320 unmapped: 30507008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d3c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 453 ms_handle_reset con 0x55e2685d3c00 session 0x55e26818b180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:45.232209+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 187064320 unmapped: 30507008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 453 heartbeat osd_stat(store_statfs(0x4f4975000/0x0/0x4ffc00000, data 0x4f669e3/0x51d3000, compress 0x0/0x0/0x0, omap 0x62d33, meta 0x604d2cd), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26802b000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 453 ms_handle_reset con 0x55e26802b000 session 0x55e2697a7340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3232139 data_alloc: 251658240 data_used: 28385537
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 454 ms_handle_reset con 0x55e2685d5c00 session 0x55e26817a380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:46.232360+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 454 ms_handle_reset con 0x55e266884000 session 0x55e2697a6540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 29442048 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 454 ms_handle_reset con 0x55e2685d5c00 session 0x55e26573cc40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:47.232484+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188129280 unmapped: 29442048 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.532466888s of 10.639801979s, submitted: 104
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 454 heartbeat osd_stat(store_statfs(0x4f496e000/0x0/0x4ffc00000, data 0x4f6a19f/0x51d9000, compress 0x0/0x0/0x0, omap 0x63639, meta 0x604c9c7), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 454 handle_osd_map epochs [454,455], i have 454, src has [1,455]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 455 ms_handle_reset con 0x55e266889000 session 0x55e268142700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:48.232595+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 455 ms_handle_reset con 0x55e266833400 session 0x55e26573ca80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188153856 unmapped: 29417472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e26802b000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 455 handle_osd_map epochs [455,456], i have 455, src has [1,456]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 456 ms_handle_reset con 0x55e26802b000 session 0x55e26674aa80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:49.232727+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 456 ms_handle_reset con 0x55e266833400 session 0x55e26674b180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188186624 unmapped: 29384704 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 456 ms_handle_reset con 0x55e266884000 session 0x55e2683b6c40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:50.232838+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 457 ms_handle_reset con 0x55e266889000 session 0x55e2659acfc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188186624 unmapped: 29384704 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 457 ms_handle_reset con 0x55e2685d5c00 session 0x55e267d4a000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d3c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3238906 data_alloc: 251658240 data_used: 28385894
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 457 handle_osd_map epochs [457,458], i have 457, src has [1,458]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 458 ms_handle_reset con 0x55e2685d3c00 session 0x55e2681b2e00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:51.232976+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188194816 unmapped: 29376512 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 458 ms_handle_reset con 0x55e266833400 session 0x55e2683b7500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:52.233184+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188211200 unmapped: 29360128 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 458 ms_handle_reset con 0x55e266884000 session 0x55e267d4ba40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 458 heartbeat osd_stat(store_statfs(0x4f4969000/0x0/0x4ffc00000, data 0x4f710cf/0x51e1000, compress 0x0/0x0/0x0, omap 0x64448, meta 0x604bbb8), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:53.233297+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188211200 unmapped: 29360128 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:54.233424+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188211200 unmapped: 29360128 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 458 ms_handle_reset con 0x55e266889000 session 0x55e268100700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:55.233549+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188211200 unmapped: 29360128 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 458 handle_osd_map epochs [458,459], i have 458, src has [1,459]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3246317 data_alloc: 251658240 data_used: 28387120
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:56.233706+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188219392 unmapped: 29351936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 459 ms_handle_reset con 0x55e2685d5c00 session 0x55e26573d500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c41c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:57.233803+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189300736 unmapped: 28270592 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 460 handle_osd_map epochs [460,461], i have 460, src has [1,461]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.417004585s of 10.629422188s, submitted: 124
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685dbc00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 461 ms_handle_reset con 0x55e2685dbc00 session 0x55e267f688c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:58.233905+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189317120 unmapped: 28254208 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 461 heartbeat osd_stat(store_statfs(0x4f4961000/0x0/0x4ffc00000, data 0x4f74794/0x51e9000, compress 0x0/0x0/0x0, omap 0x64c42, meta 0x604b3be), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 461 handle_osd_map epochs [461,462], i have 461, src has [1,462]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 462 ms_handle_reset con 0x55e268014000 session 0x55e26814d340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:59.234031+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189325312 unmapped: 28246016 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 462 ms_handle_reset con 0x55e266833400 session 0x55e2667dd340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 462 ms_handle_reset con 0x55e267c41c00 session 0x55e2663961c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 462 ms_handle_reset con 0x55e266884000 session 0x55e266397c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f495c000/0x0/0x4ffc00000, data 0x4f7634c/0x51ec000, compress 0x0/0x0/0x0, omap 0x64d8c, meta 0x604b274), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:00.234186+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189358080 unmapped: 28213248 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3262522 data_alloc: 251658240 data_used: 28387490
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:01.234307+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 462 heartbeat osd_stat(store_statfs(0x4f4955000/0x0/0x4ffc00000, data 0x4f7845a/0x51f1000, compress 0x0/0x0/0x0, omap 0x65604, meta 0x604a9fc), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189358080 unmapped: 28213248 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:02.234503+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189358080 unmapped: 28213248 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d5c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 463 ms_handle_reset con 0x55e2685d5c00 session 0x55e267f87340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:03.234625+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189358080 unmapped: 28213248 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 463 heartbeat osd_stat(store_statfs(0x4f4956000/0x0/0x4ffc00000, data 0x4f79ed9/0x51f4000, compress 0x0/0x0/0x0, omap 0x65787, meta 0x604a879), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:04.234760+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189358080 unmapped: 28213248 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:05.234878+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 463 handle_osd_map epochs [464,464], i have 464, src has [1,464]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189374464 unmapped: 28196864 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3266838 data_alloc: 251658240 data_used: 28388075
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 464 ms_handle_reset con 0x55e266889000 session 0x55e267f86fc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:06.235027+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189374464 unmapped: 28196864 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 464 ms_handle_reset con 0x55e266833400 session 0x55e268143880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 464 ms_handle_reset con 0x55e266884000 session 0x55e2683b6700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c41c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:07.235232+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189382656 unmapped: 28188672 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.000078201s of 10.167777061s, submitted: 40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 465 ms_handle_reset con 0x55e268014000 session 0x55e265a04540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 465 ms_handle_reset con 0x55e267c41c00 session 0x55e267a71180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:08.235361+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f4957000/0x0/0x4ffc00000, data 0x4f7b5c5/0x51f5000, compress 0x0/0x0/0x0, omap 0x65968, meta 0x604a698), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189415424 unmapped: 28155904 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 465 ms_handle_reset con 0x55e266833400 session 0x55e2667dd340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:09.235491+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189431808 unmapped: 28139520 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 465 heartbeat osd_stat(store_statfs(0x4f4955000/0x0/0x4ffc00000, data 0x4f7d16f/0x51f7000, compress 0x0/0x0/0x0, omap 0x65f41, meta 0x604a0bf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:10.235634+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3267164 data_alloc: 251658240 data_used: 28388688
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:11.235785+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:12.235963+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 465 handle_osd_map epochs [465,466], i have 465, src has [1,466]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 28508160 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 466 ms_handle_reset con 0x55e266884000 session 0x55e2681b2e00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:13.236077+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189063168 unmapped: 28508160 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:14.236252+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 28499968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 466 handle_osd_map epochs [466,467], i have 466, src has [1,467]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 467 ms_handle_reset con 0x55e266889000 session 0x55e2681b2a80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:15.236423+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 467 heartbeat osd_stat(store_statfs(0x4f494f000/0x0/0x4ffc00000, data 0x4f807c2/0x51fd000, compress 0x0/0x0/0x0, omap 0x66999, meta 0x6049667), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189079552 unmapped: 28491776 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3275931 data_alloc: 251658240 data_used: 28389273
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:16.236631+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189112320 unmapped: 28459008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 467 ms_handle_reset con 0x55e268014000 session 0x55e26674b180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e2400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 467 ms_handle_reset con 0x55e2685e2400 session 0x55e2680ed880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:17.236775+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188899328 unmapped: 28672000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 ms_handle_reset con 0x55e266833400 session 0x55e26818afc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.005525589s of 10.005865097s, submitted: 134
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:18.236923+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 ms_handle_reset con 0x55e266884000 session 0x55e265b38000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188899328 unmapped: 28672000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:19.237068+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188899328 unmapped: 28672000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 ms_handle_reset con 0x55e266889000 session 0x55e2667dcfc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:20.237187+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f4949000/0x0/0x4ffc00000, data 0x4f82251/0x5201000, compress 0x0/0x0/0x0, omap 0x66ff0, meta 0x6049010), peers [0,1] op hist [0,0,0,0,0,0,0,0,2])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 ms_handle_reset con 0x55e268014000 session 0x55e2683b7500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f494c000/0x0/0x4ffc00000, data 0x4f821ef/0x5200000, compress 0x0/0x0/0x0, omap 0x66ed0, meta 0x6049130), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3278420 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:21.237308+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:22.237434+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f494c000/0x0/0x4ffc00000, data 0x4f821ef/0x5200000, compress 0x0/0x0/0x0, omap 0x66ed0, meta 0x6049130), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:23.237564+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:24.237723+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:25.237858+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3278420 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:26.238001+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 heartbeat osd_stat(store_statfs(0x4f494c000/0x0/0x4ffc00000, data 0x4f821ef/0x5200000, compress 0x0/0x0/0x0, omap 0x66ed0, meta 0x6049130), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:27.238096+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 ms_handle_reset con 0x55e2685d4c00 session 0x55e267f2a700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:28.238216+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.184859276s of 10.976266861s, submitted: 69
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:29.238376+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 469 ms_handle_reset con 0x55e266833400 session 0x55e26818b6c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 469 heartbeat osd_stat(store_statfs(0x4f4945000/0x0/0x4ffc00000, data 0x4f83e4f/0x5205000, compress 0x0/0x0/0x0, omap 0x6701d, meta 0x6048fe3), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:30.238543+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3285375 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:31.238676+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:32.238786+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188563456 unmapped: 29007872 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 469 ms_handle_reset con 0x55e266884000 session 0x55e2683befc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 469 handle_osd_map epochs [469,470], i have 469, src has [1,470]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 470 ms_handle_reset con 0x55e266889000 session 0x55e2663961c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:33.238942+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 470 ms_handle_reset con 0x55e268014000 session 0x55e26818a000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188604416 unmapped: 28966912 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:34.239096+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188604416 unmapped: 28966912 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 470 ms_handle_reset con 0x55e2685d4c00 session 0x55e26674aa80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 470 heartbeat osd_stat(store_statfs(0x4f4944000/0x0/0x4ffc00000, data 0x4f8597b/0x5206000, compress 0x0/0x0/0x0, omap 0x6716a, meta 0x6048e96), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:35.239234+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188604416 unmapped: 28966912 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3291627 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:36.239415+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188612608 unmapped: 28958720 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 470 handle_osd_map epochs [470,471], i have 470, src has [1,471]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 471 ms_handle_reset con 0x55e266833400 session 0x55e26818b880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 471 handle_osd_map epochs [471,472], i have 471, src has [1,472]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:37.239569+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188637184 unmapped: 28934144 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 472 ms_handle_reset con 0x55e266884000 session 0x55e2697a6000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:38.239768+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188653568 unmapped: 28917760 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:39.239897+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188653568 unmapped: 28917760 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.307992935s of 10.417644501s, submitted: 60
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f4935000/0x0/0x4ffc00000, data 0x4f8ac14/0x5211000, compress 0x0/0x0/0x0, omap 0x67b77, meta 0x6048489), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:40.240124+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188669952 unmapped: 28901376 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 ms_handle_reset con 0x55e266889000 session 0x55e266396380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3299955 data_alloc: 251658240 data_used: 28389289
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:41.240271+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188669952 unmapped: 28901376 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:42.240459+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188669952 unmapped: 28901376 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:43.240639+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f4939000/0x0/0x4ffc00000, data 0x4f8ac14/0x5211000, compress 0x0/0x0/0x0, omap 0x67b77, meta 0x6048489), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 ms_handle_reset con 0x55e268014000 session 0x55e268143dc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267da8400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188678144 unmapped: 28893184 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:44.240833+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 28884992 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:45.240998+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188686336 unmapped: 28884992 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:46.241224+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3297151 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 ms_handle_reset con 0x55e267da8400 session 0x55e26814d6c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188694528 unmapped: 28876800 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f493d000/0x0/0x4ffc00000, data 0x4f8aba2/0x520f000, compress 0x0/0x0/0x0, omap 0x67b77, meta 0x6048489), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:47.241323+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188694528 unmapped: 28876800 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:48.241432+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188694528 unmapped: 28876800 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:49.241658+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188694528 unmapped: 28876800 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 heartbeat osd_stat(store_statfs(0x4f493d000/0x0/0x4ffc00000, data 0x4f8aba2/0x520f000, compress 0x0/0x0/0x0, omap 0x67b77, meta 0x6048489), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 473 handle_osd_map epochs [474,474], i have 474, src has [1,474]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 6.302210331s of 10.134449959s, submitted: 39
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:50.241809+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 28844032 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 474 heartbeat osd_stat(store_statfs(0x4f4938000/0x0/0x4ffc00000, data 0x4f8c621/0x5212000, compress 0x0/0x0/0x0, omap 0x6807b, meta 0x6047f85), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:51.241957+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3300645 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 474 ms_handle_reset con 0x55e266833400 session 0x55e267f688c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 28844032 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:52.242116+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188727296 unmapped: 28844032 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:53.242254+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 474 handle_osd_map epochs [474,475], i have 474, src has [1,475]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188768256 unmapped: 28803072 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:54.242387+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188768256 unmapped: 28803072 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 475 heartbeat osd_stat(store_statfs(0x4f4935000/0x0/0x4ffc00000, data 0x4f8e1bd/0x5215000, compress 0x0/0x0/0x0, omap 0x681ca, meta 0x6047e36), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 475 ms_handle_reset con 0x55e266884000 session 0x55e267d4a000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:55.242577+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 475 handle_osd_map epochs [475,476], i have 475, src has [1,476]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188784640 unmapped: 28786688 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:56.242812+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3306193 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188784640 unmapped: 28786688 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:57.243012+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188784640 unmapped: 28786688 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:58.243210+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188784640 unmapped: 28786688 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:59.243354+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188784640 unmapped: 28786688 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 476 heartbeat osd_stat(store_statfs(0x4f4932000/0x0/0x4ffc00000, data 0x4f8fdad/0x5218000, compress 0x0/0x0/0x0, omap 0x68319, meta 0x6047ce7), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.476723671s of 10.093423843s, submitted: 39
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 476 ms_handle_reset con 0x55e266889000 session 0x55e26573d500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:00.243497+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188809216 unmapped: 28762112 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:01.243615+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3308861 data_alloc: 251658240 data_used: 28389175
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 476 handle_osd_map epochs [476,477], i have 476, src has [1,477]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188841984 unmapped: 28729344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 477 ms_handle_reset con 0x55e268014000 session 0x55e2683b6540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:02.243774+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188850176 unmapped: 28721152 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 477 ms_handle_reset con 0x55e2657a3000 session 0x55e2683b7880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 477 heartbeat osd_stat(store_statfs(0x4f492d000/0x0/0x4ffc00000, data 0x4f919d7/0x521d000, compress 0x0/0x0/0x0, omap 0x688cd, meta 0x6047733), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 478 ms_handle_reset con 0x55e2657a3000 session 0x55e2680ed6c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:03.243918+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188858368 unmapped: 28712960 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:04.244075+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 478 ms_handle_reset con 0x55e266833400 session 0x55e267fdf880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 478 ms_handle_reset con 0x55e266884000 session 0x55e267f87c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 28688384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:05.244256+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 28688384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 478 ms_handle_reset con 0x55e266889000 session 0x55e2680edc00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:06.244461+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3316334 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188882944 unmapped: 28688384 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:07.244637+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e268014000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 478 handle_osd_map epochs [478,479], i have 478, src has [1,479]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 479 handle_osd_map epochs [479,479], i have 479, src has [1,479]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188899328 unmapped: 28672000 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 479 ms_handle_reset con 0x55e268014000 session 0x55e2674d6540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 479 heartbeat osd_stat(store_statfs(0x4f492c000/0x0/0x4ffc00000, data 0x4f9358d/0x521e000, compress 0x0/0x0/0x0, omap 0x69491, meta 0x6046b6f), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:08.244762+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188907520 unmapped: 28663808 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 480 ms_handle_reset con 0x55e2657a3000 session 0x55e267f2a000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:09.244920+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188940288 unmapped: 28631040 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 481 ms_handle_reset con 0x55e266833400 session 0x55e2697a6000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.802045822s of 10.118611336s, submitted: 97
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:10.245129+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188948480 unmapped: 28622848 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:11.245296+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3325143 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188948480 unmapped: 28622848 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 481 ms_handle_reset con 0x55e266884000 session 0x55e2663961c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:12.245438+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 481 ms_handle_reset con 0x55e266889000 session 0x55e2681b2a80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188989440 unmapped: 28581888 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:13.245567+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188989440 unmapped: 28581888 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 481 heartbeat osd_stat(store_statfs(0x4f4925000/0x0/0x4ffc00000, data 0x4f987b4/0x5227000, compress 0x0/0x0/0x0, omap 0x6a1ff, meta 0x6045e01), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:14.245729+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 28573696 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 481 ms_handle_reset con 0x55e2685e3000 session 0x55e2681b3180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:15.245941+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 28573696 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:16.246140+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3323363 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 28573696 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:17.246288+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 188997632 unmapped: 28573696 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:18.246565+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189014016 unmapped: 28557312 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:19.246715+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 ms_handle_reset con 0x55e2657a3000 session 0x55e268100700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f491e000/0x0/0x4ffc00000, data 0x4f9a2a5/0x522c000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:20.246888+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f491e000/0x0/0x4ffc00000, data 0x4f9a2a5/0x522c000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 ms_handle_reset con 0x55e266833400 session 0x55e2663968c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.938449860s of 11.072129250s, submitted: 34
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:21.247017+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328830 data_alloc: 251658240 data_used: 28389858
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 ms_handle_reset con 0x55e266884000 session 0x55e265b38000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:22.247162+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:23.247291+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:24.247454+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:25.247584+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:26.247775+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328090 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:27.247904+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189022208 unmapped: 28549120 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:28.248020+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189030400 unmapped: 28540928 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:29.248187+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189030400 unmapped: 28540928 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:30.249315+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189030400 unmapped: 28540928 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:31.249462+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328090 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189030400 unmapped: 28540928 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:32.249623+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189030400 unmapped: 28540928 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:33.249741+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 28532736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:34.249891+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 28532736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:35.250032+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 28532736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:36.250287+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328090 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 28532736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:37.250425+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 28532736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:38.250581+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189038592 unmapped: 28532736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:39.250723+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:40.250894+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:41.251083+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328090 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:42.251242+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:43.251365+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:44.251505+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:45.251632+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:46.251803+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328090 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:47.251925+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:48.252119+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:49.252263+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:50.252403+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:51.252573+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328090 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189054976 unmapped: 28516352 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:52.252721+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 28499968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:53.252868+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 28499968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:54.252988+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 28499968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4922000/0x0/0x4ffc00000, data 0x4f9a233/0x522a000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:55.253168+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 28499968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:56.253382+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328090 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189071360 unmapped: 28499968 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.934787750s of 35.954994202s, submitted: 8
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 ms_handle_reset con 0x55e266889000 session 0x55e26817a1c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:57.253503+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189087744 unmapped: 28483584 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:58.253644+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 heartbeat osd_stat(store_statfs(0x4f4921000/0x0/0x4ffc00000, data 0x4f9a243/0x522b000, compress 0x0/0x0/0x0, omap 0x6a341, meta 0x6045cbf), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d7c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189087744 unmapped: 28483584 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:59.253806+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 482 handle_osd_map epochs [482,483], i have 482, src has [1,483]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e2685d7c00 session 0x55e267f69180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189112320 unmapped: 28459008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f491c000/0x0/0x4ffc00000, data 0x4f9bddf/0x522e000, compress 0x0/0x0/0x0, omap 0x6a483, meta 0x6045b7d), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:00.253963+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e2657a3000 session 0x55e26814d6c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189112320 unmapped: 28459008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:01.254082+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f491c000/0x0/0x4ffc00000, data 0x4f9bddf/0x522e000, compress 0x0/0x0/0x0, omap 0x6a483, meta 0x6045b7d), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e266833400 session 0x55e2667dc540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e266884000 session 0x55e2683bf500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3333336 data_alloc: 251658240 data_used: 28389760
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189112320 unmapped: 28459008 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:02.254238+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f491c000/0x0/0x4ffc00000, data 0x4f9bddf/0x522e000, compress 0x0/0x0/0x0, omap 0x6a483, meta 0x6045b7d), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266885000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 189136896 unmapped: 28434432 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 heartbeat osd_stat(store_statfs(0x4f491c000/0x0/0x4ffc00000, data 0x4f9bddf/0x522e000, compress 0x0/0x0/0x0, omap 0x6a483, meta 0x6045b7d), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e266889000 session 0x55e26814d340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e266885000 session 0x55e26817bc00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e266833400 session 0x55e26573cc40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:03.254392+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 ms_handle_reset con 0x55e2657a3000 session 0x55e267f86fc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 484 ms_handle_reset con 0x55e266884000 session 0x55e267f2aa80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190169088 unmapped: 27402240 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:04.254518+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190169088 unmapped: 27402240 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:05.254629+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190169088 unmapped: 27402240 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:06.254805+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3330994 data_alloc: 251658240 data_used: 28943232
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190169088 unmapped: 27402240 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 484 heartbeat osd_stat(store_statfs(0x4f491a000/0x0/0x4ffc00000, data 0x4f9d9bf/0x5230000, compress 0x0/0x0/0x0, omap 0x6a9bf, meta 0x6045641), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:07.254947+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190169088 unmapped: 27402240 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:08.255030+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.501629829s of 11.527578354s, submitted: 13
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 27385856 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:09.255189+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 27385856 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:10.255314+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 27385856 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:11.255482+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3337930 data_alloc: 251658240 data_used: 28943232
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190185472 unmapped: 27385856 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:12.255612+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266889000 session 0x55e267f68fc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f48d8000/0x0/0x4ffc00000, data 0x4fdf44e/0x5274000, compress 0x0/0x0/0x0, omap 0x6ab02, meta 0x60454fe), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:13.255730+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:14.255862+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:15.256006+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f48d8000/0x0/0x4ffc00000, data 0x4fdf44e/0x5274000, compress 0x0/0x0/0x0, omap 0x6ab02, meta 0x60454fe), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:16.256235+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3337930 data_alloc: 251658240 data_used: 28943232
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:17.256414+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:18.256575+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:19.256705+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:20.256859+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f48d8000/0x0/0x4ffc00000, data 0x4fdf44e/0x5274000, compress 0x0/0x0/0x0, omap 0x6ab02, meta 0x60454fe), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f48d8000/0x0/0x4ffc00000, data 0x4fdf44e/0x5274000, compress 0x0/0x0/0x0, omap 0x6ab02, meta 0x60454fe), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:21.257027+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3337930 data_alloc: 251658240 data_used: 28943232
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f48d8000/0x0/0x4ffc00000, data 0x4fdf44e/0x5274000, compress 0x0/0x0/0x0, omap 0x6ab02, meta 0x60454fe), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:22.257255+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:23.257438+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190029824 unmapped: 27541504 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685df000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2685df000 session 0x55e26818a000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.667311668s of 15.680155754s, submitted: 14
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e2681b2e00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:24.257564+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190062592 unmapped: 27508736 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:25.257711+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198467584 unmapped: 19103744 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:26.257942+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3468989 data_alloc: 251658240 data_used: 28943232
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e2659ada40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e267d4bc00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3917000/0x0/0x4ffc00000, data 0x5f9f478/0x6235000, compress 0x0/0x0/0x0, omap 0x6ad9c, meta 0x6045264), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:27.258140+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3117000/0x0/0x4ffc00000, data 0x679f4b0/0x6a35000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:28.258282+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:29.258453+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:30.258609+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:31.258739+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467622 data_alloc: 251658240 data_used: 28943232
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3117000/0x0/0x4ffc00000, data 0x679f4b0/0x6a35000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:32.258871+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3117000/0x0/0x4ffc00000, data 0x679f4b0/0x6a35000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:33.259100+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:34.259238+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:35.259367+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:36.259960+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467622 data_alloc: 251658240 data_used: 28943232
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3117000/0x0/0x4ffc00000, data 0x679f4b0/0x6a35000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:37.260019+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3117000/0x0/0x4ffc00000, data 0x679f4b0/0x6a35000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266889000 session 0x55e267f68700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:38.260147+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3117000/0x0/0x4ffc00000, data 0x679f4b0/0x6a35000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190267392 unmapped: 27303936 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e2683b68c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:39.260313+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e2683befc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.500488281s of 15.616862297s, submitted: 54
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e266396380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190414848 unmapped: 27156480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:40.260470+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190414848 unmapped: 27156480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [0,0,0,0,1])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:41.260701+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3472151 data_alloc: 251658240 data_used: 28943248
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190414848 unmapped: 27156480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:42.260838+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190414848 unmapped: 27156480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:43.260944+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190414848 unmapped: 27156480 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:44.261078+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:45.261187+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:46.261338+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3499799 data_alloc: 251658240 data_used: 32639376
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:47.261491+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:48.261660+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:49.261777+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:50.261960+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:51.262164+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3499799 data_alloc: 251658240 data_used: 32639376
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:52.262346+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:53.262469+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:54.262602+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.118329048s of 15.224567413s, submitted: 6
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 190636032 unmapped: 26935296 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:55.262723+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 200908800 unmapped: 16662528 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:56.262869+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2dd2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3522603 data_alloc: 251658240 data_used: 32614800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 200908800 unmapped: 16662528 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:57.263080+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198057984 unmapped: 19513344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:58.263304+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198057984 unmapped: 19513344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:59.263520+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198057984 unmapped: 19513344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:00.263681+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198057984 unmapped: 19513344 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f30f2000/0x0/0x4ffc00000, data 0x67c34d3/0x6a5a000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:01.263798+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3506139 data_alloc: 251658240 data_used: 32618896
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198615040 unmapped: 18956288 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:02.263911+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198656000 unmapped: 18915328 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:03.264102+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2a74000/0x0/0x4ffc00000, data 0x6e414d3/0x70d8000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x604521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198688768 unmapped: 18882560 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:04.264223+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.373902798s of 10.025824547s, submitted: 83
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198721536 unmapped: 18849792 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:05.264347+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198737920 unmapped: 18833408 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:06.264552+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e2683bea80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e2659ace00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3577427 data_alloc: 251658240 data_used: 32622992
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266889000 session 0x55e2659ac700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:07.264756+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:08.264915+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:09.265080+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4b0/0x76c1000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x71e521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:10.265300+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:11.265462+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3569305 data_alloc: 251658240 data_used: 32514448
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:12.265623+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:13.265754+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:14.265903+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:15.266090+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4b0/0x76c1000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x71e521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:16.266255+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3569305 data_alloc: 251658240 data_used: 32514448
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:17.266386+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:18.266528+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:19.266650+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4b0/0x76c1000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x71e521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:20.266790+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:21.267008+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3569305 data_alloc: 251658240 data_used: 32514448
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:22.267172+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:23.267339+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4b0/0x76c1000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x71e521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:24.267515+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:25.267748+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:26.267911+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3569305 data_alloc: 251658240 data_used: 32514448
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:27.268241+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:28.268449+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:29.268691+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.182712555s of 24.362014771s, submitted: 26
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e267f87880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4b0/0x76c1000, compress 0x0/0x0/0x0, omap 0x6ade6, meta 0x71e521a), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:30.268862+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:31.269079+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3571126 data_alloc: 251658240 data_used: 32518509
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:32.269198+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:33.269310+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:34.269428+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:35.269584+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae30, meta 0x71e51d0), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:36.269781+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3571126 data_alloc: 251658240 data_used: 32518509
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae30, meta 0x71e51d0), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:37.269921+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae30, meta 0x71e51d0), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:38.270060+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:39.270164+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:40.270264+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:41.270398+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3571126 data_alloc: 251658240 data_used: 32518509
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:42.270526+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:43.270664+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae30, meta 0x71e51d0), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.047048569s of 14.056503296s, submitted: 6
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:44.270827+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:45.271725+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:46.271977+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3576106 data_alloc: 251658240 data_used: 32829805
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:47.272399+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:48.272586+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:49.272794+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:50.272974+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:51.273146+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3577114 data_alloc: 251658240 data_used: 32829805
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:52.273296+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:53.273445+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:54.273622+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:55.273807+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:56.274143+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3577114 data_alloc: 251658240 data_used: 32829805
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:57.274338+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:58.274527+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.444774628s of 15.457074165s, submitted: 9
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:59.274757+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:00.274945+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:01.275160+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3576410 data_alloc: 251658240 data_used: 32825709
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:02.275352+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:03.275535+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:04.275717+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:05.275900+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4d3/0x76c2000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:06.276115+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 20201472 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3576410 data_alloc: 251658240 data_used: 32825709
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:07.276329+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197468160 unmapped: 20103168 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e26814da40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e267d4aa80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e267a70380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:08.276477+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197476352 unmapped: 20094976 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:09.276632+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197476352 unmapped: 20094976 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:10.276823+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197476352 unmapped: 20094976 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4b0/0x76c1000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:11.277002+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197476352 unmapped: 20094976 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f12ea000/0x0/0x4ffc00000, data 0x742b4b0/0x76c1000, compress 0x0/0x0/0x0, omap 0x6ae7a, meta 0x71e5186), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3580254 data_alloc: 251658240 data_used: 33988973
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:12.277142+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197476352 unmapped: 20094976 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266889000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.087955475s of 14.119877815s, submitted: 15
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266889000 session 0x55e2680ed6c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:13.277256+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e267fdee00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:14.277509+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:15.277711+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:16.277929+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3353266 data_alloc: 251658240 data_used: 27444026
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:17.278116+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:18.278280+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:19.278430+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets getting new tickets!
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:20.278738+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _finish_auth 0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:20.279760+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:21.278877+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 196034560 unmapped: 21536768 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e267fdfdc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e267d4b340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3352242 data_alloc: 251658240 data_used: 28951354
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:22.279080+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 20553728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:23.279313+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 20553728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:24.279502+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 20553728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:25.279668+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 20553728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:26.279880+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 20553728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3352242 data_alloc: 251658240 data_used: 28951354
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:27.280083+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 20553728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:28.280255+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197017600 unmapped: 20553728 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: mgrc ms_handle_reset ms_handle_reset con 0x55e267b9a000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3514601685
Dec 13 04:35:58 compute-0 ceph-osd[87731]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3514601685,v1:192.168.122.100:6801/3514601685]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: get_auth_request con 0x55e2685df000 auth_method 0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:29.280400+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:30.280546+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:31.280688+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3352242 data_alloc: 251658240 data_used: 28951354
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:32.280863+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:33.280994+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:34.281175+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:35.281384+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:36.281647+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3352242 data_alloc: 251658240 data_used: 28951354
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:37.281841+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:38.281946+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:39.282114+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:40.282308+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:41.282477+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3352242 data_alloc: 251658240 data_used: 28951354
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f356a000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:42.282718+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:43.282913+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:44.283102+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197165056 unmapped: 20406272 heap: 217571328 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.879573822s of 31.962547302s, submitted: 38
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:45.283259+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e2683b6fc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:46.283531+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3459438 data_alloc: 251658240 data_used: 28955352
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:47.283695+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2379000/0x0/0x4ffc00000, data 0x639f43e/0x6633000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:48.284161+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2379000/0x0/0x4ffc00000, data 0x639f43e/0x6633000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2379000/0x0/0x4ffc00000, data 0x639f43e/0x6633000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:49.284269+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:50.284415+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:51.284563+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2379000/0x0/0x4ffc00000, data 0x639f43e/0x6633000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3459438 data_alloc: 251658240 data_used: 28955352
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:52.284723+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2379000/0x0/0x4ffc00000, data 0x639f43e/0x6633000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:53.284848+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:54.285014+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d6c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2685d6c00 session 0x55e267f2a540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:55.285152+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 28803072 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e267fde540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:56.285301+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e26818a540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.273877144s of 11.427966118s, submitted: 5
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e265b38000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:57.285441+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3462757 data_alloc: 251658240 data_used: 28955352
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:58.285653+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2378000/0x0/0x4ffc00000, data 0x639f44e/0x6634000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:59.285841+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:00.285957+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e0400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2378000/0x0/0x4ffc00000, data 0x639f44e/0x6634000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:01.286103+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:02.286257+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3469289 data_alloc: 251658240 data_used: 30052056
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:03.286415+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:04.286573+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:05.286997+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:06.287342+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2378000/0x0/0x4ffc00000, data 0x639f44e/0x6634000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2378000/0x0/0x4ffc00000, data 0x639f44e/0x6634000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:07.287662+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3469289 data_alloc: 251658240 data_used: 30052056
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2378000/0x0/0x4ffc00000, data 0x639f44e/0x6634000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:08.287975+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:09.288308+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:10.288504+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197181440 unmapped: 28794880 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.471401215s of 14.486171722s, submitted: 7
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2378000/0x0/0x4ffc00000, data 0x639f44e/0x6634000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:11.288649+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198434816 unmapped: 27541504 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2165000/0x0/0x4ffc00000, data 0x64a244e/0x6737000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [0,0,0,0,1])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:12.288838+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3481461 data_alloc: 251658240 data_used: 30003928
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f2165000/0x0/0x4ffc00000, data 0x64a244e/0x6737000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [0,0,0,1])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199499776 unmapped: 26476544 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:13.289119+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198451200 unmapped: 27525120 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f17ea000/0x0/0x4ffc00000, data 0x6e1d44e/0x70b2000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:14.289269+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198451200 unmapped: 27525120 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f17ea000/0x0/0x4ffc00000, data 0x6e1d44e/0x70b2000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:15.289458+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198451200 unmapped: 27525120 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:16.289745+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 28131328 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:17.289984+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3527761 data_alloc: 251658240 data_used: 29938850
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 28131328 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:18.290148+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 28131328 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:19.290322+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 28131328 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:20.290508+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 28131328 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fa000/0x0/0x4ffc00000, data 0x6e1d44e/0x70b2000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:21.290712+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 28131328 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:22.290911+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3527761 data_alloc: 251658240 data_used: 29938850
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.409922600s of 11.612756729s, submitted: 49
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e2680ec380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2685e0400 session 0x55e267f69180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197844992 unmapped: 28131328 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e267f2b500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:23.291122+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:24.291317+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:25.291513+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:26.291684+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:27.291874+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526408 data_alloc: 251658240 data_used: 29938850
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:28.292020+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:29.293301+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:30.293569+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:31.293742+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:32.293912+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526408 data_alloc: 251658240 data_used: 29938850
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:33.294115+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:34.294384+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:35.294584+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:36.294840+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:37.295161+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3526408 data_alloc: 251658240 data_used: 29938850
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e2697a7a40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:38.295346+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:39.295503+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e26817a700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197853184 unmapped: 28123136 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e26814d340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:40.295622+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267c41800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.812271118s of 17.836629868s, submitted: 16
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e267c41800 session 0x55e267d4b180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:41.295794+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:42.295918+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529072 data_alloc: 251658240 data_used: 29938850
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:43.296103+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:44.296229+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:45.296357+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:46.296526+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:47.296697+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529456 data_alloc: 251658240 data_used: 29989026
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:48.296832+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:49.297068+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:50.297214+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:51.297373+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:52.297522+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529456 data_alloc: 251658240 data_used: 29989026
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:53.297646+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198156288 unmapped: 27820032 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:54.297775+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:55.297918+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:56.298091+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:57.298227+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3538544 data_alloc: 251658240 data_used: 30423202
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:58.298368+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:59.298533+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:00.298705+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:01.298887+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:02.299137+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3538544 data_alloc: 251658240 data_used: 30423202
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:03.299309+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:04.299482+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:05.299652+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:06.299899+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:07.300067+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3538544 data_alloc: 251658240 data_used: 30423202
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:08.300209+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:09.300695+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:10.301457+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:11.301898+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:12.302132+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3538544 data_alloc: 251658240 data_used: 30423202
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:13.302316+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:14.302476+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:15.302747+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:16.303108+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:17.303338+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3538544 data_alloc: 251658240 data_used: 30423202
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:18.303617+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:19.303859+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18d7000/0x0/0x4ffc00000, data 0x6e4143e/0x70d5000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:20.303989+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:21.304210+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e267f86540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e26573c1c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198221824 unmapped: 27754496 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 41.864604950s of 41.870384216s, submitted: 1
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e267f688c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:22.304332+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3531480 data_alloc: 251658240 data_used: 30425250
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198254592 unmapped: 27721728 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:23.304576+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198254592 unmapped: 27721728 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:24.304726+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 27705344 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:25.304866+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 27705344 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:26.305092+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 27705344 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:27.305275+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3531480 data_alloc: 251658240 data_used: 30425250
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 27705344 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:28.305412+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 198270976 unmapped: 27705344 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:29.305555+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f18fb000/0x0/0x4ffc00000, data 0x6e1d43e/0x70b1000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e268100e00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:30.305737+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:31.306149+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:32.306385+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:33.306851+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:34.307000+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:35.307162+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:36.307372+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:37.307525+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:38.307700+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:39.307906+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:40.308112+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:41.308319+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:42.308451+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:43.308633+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:44.308832+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:45.309034+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:46.309345+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:47.309574+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:48.309815+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:49.309964+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:50.310140+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:51.310257+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:52.310397+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:53.310614+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:54.310761+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:55.310934+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:56.311142+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:57.311270+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:58.311411+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197304320 unmapped: 28672000 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:59.311683+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:00.311809+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:01.311953+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:02.312163+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:03.312316+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:04.312505+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:05.312666+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:06.312850+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:07.313011+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:08.313158+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:09.313298+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:10.313518+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:11.313756+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:12.313941+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:13.314103+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:14.314323+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:15.314559+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:16.314779+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:17.314996+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3365396 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:18.315179+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197320704 unmapped: 28655616 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f3779000/0x0/0x4ffc00000, data 0x4f9f43e/0x5233000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:19.315340+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197337088 unmapped: 28639232 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:20.315549+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197337088 unmapped: 28639232 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:21.315845+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197337088 unmapped: 28639232 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:22.315991+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197337088 unmapped: 28639232 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2687be400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 60.179698944s of 60.195209503s, submitted: 10
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2687be400 session 0x55e267d4afc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3000 session 0x55e2667dc540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2657a3800 session 0x55e2667dc000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e267d4ba40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266884000 session 0x55e26814c8c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3402491 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:23.316142+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 28606464 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f316f000/0x0/0x4ffc00000, data 0x55a943e/0x583d000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:24.316329+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 28606464 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:25.316501+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 28606464 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f316f000/0x0/0x4ffc00000, data 0x55a943e/0x583d000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f316f000/0x0/0x4ffc00000, data 0x55a943e/0x583d000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:26.316711+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 28606464 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:27.316908+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 28606464 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2687be400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e2687be400 session 0x55e2697a7180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f316f000/0x0/0x4ffc00000, data 0x55a943e/0x583d000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3402491 data_alloc: 251658240 data_used: 28890274
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:28.317106+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 28606464 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2657a3800
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:29.317269+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 197369856 unmapped: 28606464 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:30.317456+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199630848 unmapped: 26345472 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f316f000/0x0/0x4ffc00000, data 0x55a943e/0x583d000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:31.317678+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199630848 unmapped: 26345472 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:32.324550+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199630848 unmapped: 26345472 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3439999 data_alloc: 251658240 data_used: 35222690
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:33.324711+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199630848 unmapped: 26345472 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:34.324830+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199630848 unmapped: 26345472 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:35.324986+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199630848 unmapped: 26345472 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f316f000/0x0/0x4ffc00000, data 0x55a943e/0x583d000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:36.325203+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199647232 unmapped: 26329088 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:37.325394+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199647232 unmapped: 26329088 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f316f000/0x0/0x4ffc00000, data 0x55a943e/0x583d000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3439999 data_alloc: 251658240 data_used: 35222690
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:38.325570+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199647232 unmapped: 26329088 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:39.325687+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 199663616 unmapped: 26312704 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.528011322s of 17.658544540s, submitted: 14
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:40.325805+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205643776 unmapped: 20332544 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:41.325945+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:42.326090+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f27b0000/0x0/0x4ffc00000, data 0x5f6843e/0x61fc000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3501611 data_alloc: 251658240 data_used: 35968162
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:43.326243+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:44.326426+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:45.326578+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f27b0000/0x0/0x4ffc00000, data 0x5f6843e/0x61fc000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:46.326740+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:47.326866+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3501187 data_alloc: 251658240 data_used: 35968162
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:48.327059+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f27ad000/0x0/0x4ffc00000, data 0x5f6b43e/0x61ff000, compress 0x0/0x0/0x0, omap 0x6b114, meta 0x71e4eec), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 ms_handle_reset con 0x55e266833400 session 0x55e26b7601c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:49.327200+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:50.327341+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:51.327479+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f27ac000/0x0/0x4ffc00000, data 0x5f6b4a0/0x6200000, compress 0x0/0x0/0x0, omap 0x6b467, meta 0x71e4b99), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266884000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.946084023s of 12.241444588s, submitted: 68
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:52.327584+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3504559 data_alloc: 251658240 data_used: 35968162
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:53.327726+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 20152320 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 heartbeat osd_stat(store_statfs(0x4f27ac000/0x0/0x4ffc00000, data 0x5f6b4a0/0x6200000, compress 0x0/0x0/0x0, omap 0x6b467, meta 0x71e4b99), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:54.327857+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205848576 unmapped: 20127744 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685dc400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:55.327989+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205848576 unmapped: 20127744 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 485 handle_osd_map epochs [485,486], i have 485, src has [1,486]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 486 ms_handle_reset con 0x55e2685dc400 session 0x55e268100fc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:56.328111+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 20119552 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:57.328252+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 20119552 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267daac00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 487 ms_handle_reset con 0x55e267daac00 session 0x55e267f87880
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3520677 data_alloc: 251658240 data_used: 36017314
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:58.328456+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f279e000/0x0/0x4ffc00000, data 0x5f7509e/0x620c000, compress 0x0/0x0/0x0, omap 0x6b563, meta 0x71e4a9d), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205864960 unmapped: 20111360 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:59.328582+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205864960 unmapped: 20111360 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:00.328719+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206012416 unmapped: 19963904 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 487 heartbeat osd_stat(store_statfs(0x4f277f000/0x0/0x4ffc00000, data 0x5f9fc9c/0x622d000, compress 0x0/0x0/0x0, omap 0x6c048, meta 0x71e3fb8), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 487 handle_osd_map epochs [487,488], i have 487, src has [1,488]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2685d4000 session 0x55e2667dcfc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:01.328904+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206020608 unmapped: 19955712 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:02.329067+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.304305077s of 10.103812218s, submitted: 50
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e266884000 session 0x55e2681b3dc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f277a000/0x0/0x4ffc00000, data 0x5fa1838/0x6230000, compress 0x0/0x0/0x0, omap 0x6c548, meta 0x71e3ab8), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206020608 unmapped: 19955712 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3525883 data_alloc: 251658240 data_used: 36017314
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:03.329335+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206020608 unmapped: 19955712 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267daac00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e267daac00 session 0x55e2680ed500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e266833400 session 0x55e2674d6540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:04.329461+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206184448 unmapped: 19791872 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:05.329606+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206200832 unmapped: 19775488 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f277a000/0x0/0x4ffc00000, data 0x5fa289a/0x6232000, compress 0x0/0x0/0x0, omap 0x6c5cd, meta 0x71e3a33), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:06.329829+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206200832 unmapped: 19775488 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:07.329958+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 19693568 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f277a000/0x0/0x4ffc00000, data 0x5fa289a/0x6232000, compress 0x0/0x0/0x0, omap 0x6c5cd, meta 0x71e3a33), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3529423 data_alloc: 251658240 data_used: 36017412
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:08.330250+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206307328 unmapped: 19668992 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:09.330362+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206307328 unmapped: 19668992 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:10.330512+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2685d4000 session 0x55e267a70380
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206315520 unmapped: 19660800 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:11.330648+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206315520 unmapped: 19660800 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685dc400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2685dc400 session 0x55e267d4aa80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:12.330770+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f2779000/0x0/0x4ffc00000, data 0x5fa389a/0x6233000, compress 0x0/0x0/0x0, omap 0x6c5cd, meta 0x71e3a33), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206438400 unmapped: 19537920 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f2777000/0x0/0x4ffc00000, data 0x5fa389a/0x6233000, compress 0x0/0x0/0x0, omap 0x6c435, meta 0x71e3bcb), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3530009 data_alloc: 251658240 data_used: 36025702
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:13.330903+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206438400 unmapped: 19537920 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:14.331014+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206438400 unmapped: 19537920 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:15.331158+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206438400 unmapped: 19537920 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:16.331378+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206438400 unmapped: 19537920 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.773790359s of 14.188995361s, submitted: 23
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f2777000/0x0/0x4ffc00000, data 0x5fa389a/0x6233000, compress 0x0/0x0/0x0, omap 0x6c435, meta 0x71e3bcb), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:17.331550+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205979648 unmapped: 19996672 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3536471 data_alloc: 251658240 data_used: 36033894
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e266799c00 session 0x55e26573d500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:18.331681+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205979648 unmapped: 19996672 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:19.331791+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 205979648 unmapped: 19996672 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e266833400 session 0x55e26573d340
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e266799c00 session 0x55e26817b500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:20.331922+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206127104 unmapped: 19849216 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f26ef000/0x0/0x4ffc00000, data 0x602c8aa/0x62bd000, compress 0x0/0x0/0x0, omap 0x6c6b1, meta 0x71e394f), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:21.332093+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206127104 unmapped: 19849216 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:22.332287+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206127104 unmapped: 19849216 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3537959 data_alloc: 251658240 data_used: 36033894
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:23.332416+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e267daac00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 19783680 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f26ef000/0x0/0x4ffc00000, data 0x602c8aa/0x62bd000, compress 0x0/0x0/0x0, omap 0x6c6b1, meta 0x71e394f), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:24.332539+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206192640 unmapped: 19783680 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:25.332659+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206372864 unmapped: 19603456 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f26ef000/0x0/0x4ffc00000, data 0x602c8aa/0x62bd000, compress 0x0/0x0/0x0, omap 0x6c6b1, meta 0x71e394f), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:26.332870+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e267daac00 session 0x55e2697a61c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 18546688 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:27.332997+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.641813278s of 10.755173683s, submitted: 21
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207429632 unmapped: 18546688 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685d4000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2685d4000 session 0x55e26818ac40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685dc400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2685dc400 session 0x55e26818a000
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3539195 data_alloc: 251658240 data_used: 36140390
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:28.333161+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 19087360 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:29.333311+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685dc400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2685dc400 session 0x55e267f2ba40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206888960 unmapped: 19087360 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e266799c00 session 0x55e267f688c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:30.333450+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 206897152 unmapped: 19079168 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e266833400 session 0x55e267fde540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d8400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 heartbeat osd_stat(store_statfs(0x4f2776000/0x0/0x4ffc00000, data 0x5fa689a/0x6236000, compress 0x0/0x0/0x0, omap 0x6c78d, meta 0x71e3873), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2680d8400 session 0x55e267fdfdc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:31.333557+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207110144 unmapped: 18866176 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e1400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 ms_handle_reset con 0x55e2685e1400 session 0x55e26814d500
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685e1400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:32.333743+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207085568 unmapped: 18890752 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 489 ms_handle_reset con 0x55e2685e1400 session 0x55e2667dc540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f2777000/0x0/0x4ffc00000, data 0x5fa6838/0x6235000, compress 0x0/0x0/0x0, omap 0x6d1bb, meta 0x71e2e45), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 489 heartbeat osd_stat(store_statfs(0x4f2777000/0x0/0x4ffc00000, data 0x5fa6838/0x6235000, compress 0x0/0x0/0x0, omap 0x6d1bb, meta 0x71e2e45), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3534827 data_alloc: 251658240 data_used: 36136196
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 489 ms_handle_reset con 0x55e266799c00 session 0x55e2659ad180
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:33.333870+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207200256 unmapped: 18776064 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 490 ms_handle_reset con 0x55e266833400 session 0x55e267a701c0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 490 heartbeat osd_stat(store_statfs(0x4f276f000/0x0/0x4ffc00000, data 0x5fa3fb6/0x6239000, compress 0x0/0x0/0x0, omap 0x6da8b, meta 0x71e2575), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:34.333970+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2680d8400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 490 ms_handle_reset con 0x55e2680d8400 session 0x55e267f68700
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685dc400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 18694144 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _renew_subs
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 ms_handle_reset con 0x55e2685dc400 session 0x55e267f86540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f276f000/0x0/0x4ffc00000, data 0x5fa3fb6/0x6239000, compress 0x0/0x0/0x0, omap 0x6da8b, meta 0x71e2575), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:35.334089+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207282176 unmapped: 18694144 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e2685dc400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 ms_handle_reset con 0x55e2685dc400 session 0x55e2683b6fc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266799c00
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 ms_handle_reset con 0x55e266799c00 session 0x55e2662f9a40
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:36.334254+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207298560 unmapped: 18677760 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:37.334594+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.851144791s of 10.097025871s, submitted: 134
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 ms_handle_reset con 0x55e2657a3000 session 0x55e2667dca80
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 ms_handle_reset con 0x55e2657a3800 session 0x55e26674a540
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 207298560 unmapped: 18677760 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: handle_auth_request added challenge on 0x55e266833400
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 ms_handle_reset con 0x55e266833400 session 0x55e265b8afc0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3402469 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f3767000/0x0/0x4ffc00000, data 0x4fa9ae2/0x5245000, compress 0x0/0x0/0x0, omap 0x6e4b5, meta 0x71e1b4b), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:38.334753+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 23339008 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:39.334904+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 23339008 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:40.335100+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 23339008 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 heartbeat osd_stat(store_statfs(0x4f3767000/0x0/0x4ffc00000, data 0x4fa9ae2/0x5245000, compress 0x0/0x0/0x0, omap 0x6e4b5, meta 0x71e1b4b), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 491 handle_osd_map epochs [492,492], i have 492, src has [1,492]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:41.335277+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 23339008 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:42.335460+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 23339008 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3405963 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:43.335616+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 492 heartbeat osd_stat(store_statfs(0x4f3762000/0x0/0x4ffc00000, data 0x4fab57d/0x5248000, compress 0x0/0x0/0x0, omap 0x6ea30, meta 0x71e15d0), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202637312 unmapped: 23339008 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 492 handle_osd_map epochs [492,493], i have 492, src has [1,493]
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:44.335782+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:45.335918+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:46.336129+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:47.336256+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:48.336369+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:49.336545+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:50.336708+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:51.336851+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:52.337000+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:53.337150+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:54.337327+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:55.337478+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:56.337676+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:57.337788+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:58.337921+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:59.338078+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:00.338218+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:01.338356+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:02.338501+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:03.338648+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:04.338823+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:05.338993+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:06.339267+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:07.339456+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:08.339643+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:09.339845+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:10.340140+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:11.340294+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:12.340442+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:13.340612+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:14.340740+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:15.340892+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:16.341026+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:17.341200+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:18.341370+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:19.341535+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.6 total, 600.0 interval
                                           Cumulative writes: 23K writes, 92K keys, 23K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s
                                           Cumulative WAL: 23K writes, 8414 syncs, 2.78 writes per sync, written: 0.07 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2976 writes, 8358 keys, 2976 commit groups, 1.0 writes per commit group, ingest: 7.57 MB, 0.01 MB/s
                                           Interval WAL: 2976 writes, 1338 syncs, 2.22 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:20.341696+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:21.341913+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:22.342060+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:23.342188+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:24.342338+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:25.342466+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:26.342632+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:27.342818+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:28.342961+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:29.343092+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:30.343239+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:31.343356+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:32.343478+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:33.343594+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:34.343713+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:35.343863+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:36.344083+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:37.344281+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:38.344410+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:39.344563+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:40.344685+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:41.344878+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:42.345156+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:43.345330+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:44.345506+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:45.345646+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:46.345822+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:47.346002+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:48.346176+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:49.346412+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:50.346589+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:51.346754+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:52.346881+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:53.347029+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:54.347184+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:55.347306+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:56.347503+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:57.347640+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:58.347762+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:59.347922+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:00.348101+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:01.348230+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:02.348387+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:03.348575+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:04.348728+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:05.348876+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:06.349081+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:07.349231+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:08.349392+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:09.349576+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:10.349690+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:11.349876+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:12.350034+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:13.350178+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408737 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:14.350334+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f375f000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:15.350493+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 98.499229431s of 98.581321716s, submitted: 43
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:16.350683+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:17.350848+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 202686464 unmapped: 23289856 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:18.350983+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408089 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 203792384 unmapped: 22183936 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:19.351127+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 203792384 unmapped: 22183936 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:20.351250+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3761000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 203792384 unmapped: 22183936 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:21.351362+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 203792384 unmapped: 22183936 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:22.370692+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 203792384 unmapped: 22183936 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:23.370839+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:35:58 compute-0 ceph-osd[87731]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:35:58 compute-0 ceph-osd[87731]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3408017 data_alloc: 251658240 data_used: 28898270
Dec 13 04:35:58 compute-0 ceph-osd[87731]: osd.2 493 heartbeat osd_stat(store_statfs(0x4f3761000/0x0/0x4ffc00000, data 0x4facffc/0x524b000, compress 0x0/0x0/0x0, omap 0x6eb67, meta 0x71e1499), peers [0,1] op hist [])
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 203792384 unmapped: 22183936 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:24.370955+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'config diff' '{prefix=config diff}'
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 203939840 unmapped: 22036480 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'config show' '{prefix=config show}'
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:25.371136+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 204161024 unmapped: 21815296 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:26.371313+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: prioritycache tune_memory target: 4294967296 mapped: 204120064 unmapped: 21856256 heap: 225976320 old mem: 2845415832 new mem: 2845415832
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: tick
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_tickets
Dec 13 04:35:58 compute-0 ceph-osd[87731]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:27.371479+0000)
Dec 13 04:35:58 compute-0 ceph-osd[87731]: do_command 'log dump' '{prefix=log dump}'
Dec 13 04:35:58 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19254 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} v 0)
Dec 13 04:35:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:35:58 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:35:58 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:35:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Dec 13 04:35:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174799724' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 04:35:58 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} v 0)
Dec 13 04:35:58 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: from='client.19248 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: from='client.19246 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: from='client.19250 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1555539242' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:35:58 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4174799724' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Dec 13 04:35:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2641965230' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 04:35:59 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:35:59 compute-0 nova_compute[243704]: 2025-12-13 04:35:59.494 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:35:59 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19266 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Dec 13 04:35:59 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1814226460' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: from='client.19254 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: pgmap v2051: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:35:59 compute-0 ceph-mon[75071]: from='client.19258 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2641965230' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: from='client.19262 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: from='client.19266 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:35:59 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1814226460' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Dec 13 04:36:00 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:36:00 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19268 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:00 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Dec 13 04:36:00 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/445810403' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 04:36:00 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19272 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:01 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Dec 13 04:36:01 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2933938703' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 04:36:01 compute-0 anacron[234588]: Job `cron.daily' started
Dec 13 04:36:01 compute-0 anacron[234588]: Job `cron.daily' terminated
Dec 13 04:36:01 compute-0 ceph-mon[75071]: pgmap v2052: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Dec 13 04:36:01 compute-0 ceph-mon[75071]: from='client.19268 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:01 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/445810403' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Dec 13 04:36:01 compute-0 ceph-mon[75071]: from='client.19272 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:01 compute-0 podman[288068]: 2025-12-13 04:36:01.964959312 +0000 UTC m=+0.107895235 container health_status 1962f2815ba9eb5307766553310d78b02ea79163e7c1bb27d26e20d8a31d639b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec 13 04:36:02 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19276 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:02 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:36:02 compute-0 crontab[288191]: (root) LIST (root)
Dec 13 04:36:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Dec 13 04:36:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3376870479' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 04:36:02 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19280 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2933938703' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Dec 13 04:36:02 compute-0 ceph-mon[75071]: from='client.19276 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:02 compute-0 ceph-mon[75071]: pgmap v2053: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:36:02 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3376870479' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Dec 13 04:36:02 compute-0 ceph-mon[75071]: from='client.19280 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:02 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Dec 13 04:36:02 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1836628913' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 04:36:02 compute-0 nova_compute[243704]: 2025-12-13 04:36:02.996 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:16.889619+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f8147000/0x0/0x4ffc00000, data 0x3c0f548/0x3d45000, compress 0x0/0x0/0x0, omap 0x28c78, meta 0x3d47388), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8515000 session 0x5637d8a59500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da271400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637da271400 session 0x5637d648c1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 139288576 unmapped: 6569984 heap: 145858560 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:17.889750+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 139550720 unmapped: 6307840 heap: 145858560 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:18.889910+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148414464 unmapped: 589824 heap: 149004288 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:19.890129+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148414464 unmapped: 589824 heap: 149004288 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.477975845s of 11.694519043s, submitted: 45
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435400 session 0x5637d5790700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:20.890272+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148553728 unmapped: 450560 heap: 149004288 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1929423 data_alloc: 251658240 data_used: 38987062
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f8121000/0x0/0x4ffc00000, data 0x3c335ba/0x3d6b000, compress 0x0/0x0/0x0, omap 0x285d0, meta 0x3d47a30), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:21.890405+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 1515520 heap: 157392896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f8121000/0x0/0x4ffc00000, data 0x3c335ba/0x3d6b000, compress 0x0/0x0/0x0, omap 0x285d0, meta 0x3d47a30), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:22.890550+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435400 session 0x5637d6332c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d57ea800 session 0x5637d8d04380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156098560 unmapped: 1294336 heap: 157392896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:23.890666+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435c00 session 0x5637d55bba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435800 session 0x5637d8250e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148922368 unmapped: 8470528 heap: 157392896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d57eac00 session 0x5637d55bb500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:24.890792+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 8732672 heap: 157392896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:25.890951+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 8732672 heap: 157392896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1908725 data_alloc: 251658240 data_used: 33295670
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:26.891080+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435400 session 0x5637d8250700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f7e54000/0x0/0x4ffc00000, data 0x3f02548/0x4038000, compress 0x0/0x0/0x0, omap 0x278a4, meta 0x3d4875c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148660224 unmapped: 8732672 heap: 157392896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:27.891219+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d57ea800 session 0x5637d78d3180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149716992 unmapped: 11886592 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435800 session 0x5637d57ac380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435c00 session 0x5637d6361a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:28.891327+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149749760 unmapped: 11853824 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:29.891466+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8515000 session 0x5637d6332540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f7c4f000/0x0/0x4ffc00000, data 0x4106748/0x423d000, compress 0x0/0x0/0x0, omap 0x279bc, meta 0x3d48644), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 11821056 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d57eb400 session 0x5637d8011a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:30.891610+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.045787811s of 10.363247871s, submitted: 207
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d57ea800 session 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f7c4d000/0x0/0x4ffc00000, data 0x41067ba/0x423f000, compress 0x0/0x0/0x0, omap 0x278e5, meta 0x3d4871b), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 11812864 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1921536 data_alloc: 251658240 data_used: 33988406
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:31.891730+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435400 session 0x5637d8a65500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435800 session 0x5637d63dfc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 11812864 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:32.892017+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149790720 unmapped: 11812864 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435c00 session 0x5637d55bb6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:33.892205+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149807104 unmapped: 11796480 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d57eb400 session 0x5637d6333500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:34.892335+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435400 session 0x5637d57ad6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637d8435800 session 0x5637d6332700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da271400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 ms_handle_reset con 0x5637da271400 session 0x5637d592d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149823488 unmapped: 11780096 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 heartbeat osd_stat(store_statfs(0x4f7f0c000/0x0/0x4ffc00000, data 0x3e4a548/0x3f80000, compress 0x0/0x0/0x0, omap 0x27660, meta 0x3d489a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:35.892443+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d784e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 208 ms_handle_reset con 0x5637d784e000 session 0x5637d5f43a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 142950400 unmapped: 18653184 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 208 ms_handle_reset con 0x5637d57eb400 session 0x5637d525ce00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1810662 data_alloc: 234881024 data_used: 25640230
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:36.892568+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 208 ms_handle_reset con 0x5637d8435400 session 0x5637d5862700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8435800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 208 ms_handle_reset con 0x5637d8435800 session 0x5637d6344fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 142704640 unmapped: 18898944 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 208 ms_handle_reset con 0x5637d57ea800 session 0x5637d87aee00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:37.892744+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da271400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 208 handle_osd_map epochs [209,209], i have 208, src has [1,209]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637da271400 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637d57ea800 session 0x5637d648ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 142729216 unmapped: 18874368 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:38.892929+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637d57eb000 session 0x5637d5791880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637d5770c00 session 0x5637d636ea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637d57eb800 session 0x5637d63321c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637da270400 session 0x5637d63ea000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637d5770c00 session 0x5637d55661c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637d57ea800 session 0x5637d592c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 36585472 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 heartbeat osd_stat(store_statfs(0x4f88de000/0x0/0x4ffc00000, data 0x3475cd2/0x35ac000, compress 0x0/0x0/0x0, omap 0x27967, meta 0x3d48699), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 ms_handle_reset con 0x5637d57eb000 session 0x5637d5791c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:39.893093+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 36585472 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:40.893218+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 36585472 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1560485 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:41.893337+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.924256325s of 11.113039970s, submitted: 127
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb800 session 0x5637d7e8e540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da271400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637da271400 session 0x5637d55ba1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d5770c00 session 0x5637d7cfea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126066688 unmapped: 35536896 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57ea800 session 0x5637d8010a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:42.893605+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb000 session 0x5637d58b4380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb800 session 0x5637d5262a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2eec4, meta 0x3d4113c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:43.893724+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:44.893945+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:45.894130+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1561943 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:46.894335+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f177, meta 0x3d40e89), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:47.894483+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:48.894635+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f177, meta 0x3d40e89), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:49.894802+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:50.895032+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1561943 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:51.895455+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:52.895605+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f177, meta 0x3d40e89), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:53.896087+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f177, meta 0x3d40e89), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:54.896766+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:55.897904+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1561943 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:56.898213+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f177, meta 0x3d40e89), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:57.899185+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:58.900033+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:59.900789+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f177, meta 0x3d40e89), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:00.901135+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:01.901431+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1561943 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:02.901888+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126058496 unmapped: 35545088 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d784e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 21.476696014s of 21.715827942s, submitted: 46
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:03.902018+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d784e400 session 0x5637d5229dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 36380672 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:04.902399+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46c000/0x0/0x4ffc00000, data 0x18ea7bb/0x1a20000, compress 0x0/0x0/0x0, omap 0x2f203, meta 0x3d40dfd), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 36380672 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:05.902785+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46c000/0x0/0x4ffc00000, data 0x18ea7bb/0x1a20000, compress 0x0/0x0/0x0, omap 0x2f203, meta 0x3d40dfd), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d5770c00 session 0x5637d63616c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125222912 unmapped: 36380672 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:06.903014+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1563649 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57ea800 session 0x5637d5566fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 36372480 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:07.903179+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125231104 unmapped: 36372480 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:08.903569+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb000 session 0x5637d6360c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 36356096 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:09.903851+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46b000/0x0/0x4ffc00000, data 0x18ea7cb/0x1a21000, compress 0x0/0x0/0x0, omap 0x2f31b, meta 0x3d40ce5), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125247488 unmapped: 36356096 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:10.904253+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb800 session 0x5637d636f180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d784e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d784e400 session 0x5637d89bcc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:11.904535+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1564861 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:12.904702+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:13.904946+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:14.905249+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f3a7, meta 0x3d40c59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f3a7, meta 0x3d40c59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:15.905451+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:16.905711+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1564861 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:17.905850+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:18.905974+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f3a7, meta 0x3d40c59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:19.906201+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f3a7, meta 0x3d40c59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:20.906363+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4fa46d000/0x0/0x4ffc00000, data 0x18ea759/0x1a1f000, compress 0x0/0x0/0x0, omap 0x2f3a7, meta 0x3d40c59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.631978989s of 17.759019852s, submitted: 22
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d5770c00 session 0x5637d5791a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125263872 unmapped: 36339712 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:21.906530+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57ea800 session 0x5637d74c5dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1566909 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124960768 unmapped: 36642816 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:22.906729+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb000 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb800 session 0x5637d651ea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124985344 unmapped: 36618240 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:23.906990+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124985344 unmapped: 36618240 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:24.907149+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d784f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124985344 unmapped: 36618240 heap: 161603584 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:25.907296+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d88e7c00 session 0x5637d651f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d784f400 session 0x5637d636f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d5770c00 session 0x5637d5f43c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d88e7c00 session 0x5637d8320540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb000 session 0x5637d6333a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125886464 unmapped: 40493056 heap: 166379520 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:26.907538+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb800 session 0x5637d63eaa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1615075 data_alloc: 218103808 data_used: 6843588
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 heartbeat osd_stat(store_statfs(0x4f9ed1000/0x0/0x4ffc00000, data 0x1e84779/0x1fbb000, compress 0x0/0x0/0x0, omap 0x2fe01, meta 0x3d401ff), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d5770c00 session 0x5637d5791340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57ea800 session 0x5637d5262700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125886464 unmapped: 40493056 heap: 166379520 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:27.907705+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125886464 unmapped: 40493056 heap: 166379520 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:28.908245+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d57eb000 session 0x5637d5229500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d784f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 ms_handle_reset con 0x5637d784f400 session 0x5637d63eba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 210 handle_osd_map epochs [210,211], i have 211, src has [1,211]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 211 ms_handle_reset con 0x5637d88e7800 session 0x5637d5f64fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125378560 unmapped: 45203456 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:29.908401+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 211 handle_osd_map epochs [211,212], i have 211, src has [1,212]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 ms_handle_reset con 0x5637d88e7400 session 0x5637d842c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 ms_handle_reset con 0x5637d5770c00 session 0x5637d592cfc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 ms_handle_reset con 0x5637d88e7c00 session 0x5637d63dfa40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 45178880 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:30.908534+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 45178880 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:31.908893+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1678250 data_alloc: 218103808 data_used: 6843604
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.703413963s of 10.979937553s, submitted: 100
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 ms_handle_reset con 0x5637d57ea800 session 0x5637d79c8fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 heartbeat osd_stat(store_statfs(0x4f96f4000/0x0/0x4ffc00000, data 0x26585ba/0x2794000, compress 0x0/0x0/0x0, omap 0x30754, meta 0x3d3f8ac), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:32.909155+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 45178880 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:33.909305+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 45178880 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:34.909447+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125403136 unmapped: 45178880 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d784f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 ms_handle_reset con 0x5637d784f400 session 0x5637d525d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:35.909603+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124141568 unmapped: 46440448 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 212 handle_osd_map epochs [212,213], i have 213, src has [1,213]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7000 session 0x5637d592b880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:36.909751+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d5770c00 session 0x5637d8552c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d57eb000 session 0x5637d636e700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1687724 data_alloc: 218103808 data_used: 6845209
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:37.909995+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f96a4000/0x0/0x4ffc00000, data 0x26a81b8/0x27e6000, compress 0x0/0x0/0x0, omap 0x30aec, meta 0x3d3f514), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:38.910158+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:39.910364+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:40.910487+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:41.910607+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1688108 data_alloc: 218103808 data_used: 6845817
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f96a4000/0x0/0x4ffc00000, data 0x26a81b8/0x27e6000, compress 0x0/0x0/0x0, omap 0x30aec, meta 0x3d3f514), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:42.910717+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f96a4000/0x0/0x4ffc00000, data 0x26a81b8/0x27e6000, compress 0x0/0x0/0x0, omap 0x30aec, meta 0x3d3f514), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:43.910856+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124157952 unmapped: 46424064 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f96a4000/0x0/0x4ffc00000, data 0x26a81b8/0x27e6000, compress 0x0/0x0/0x0, omap 0x30aec, meta 0x3d3f514), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:44.911085+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124166144 unmapped: 46415872 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:45.911230+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124166144 unmapped: 46415872 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:46.911547+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124166144 unmapped: 46415872 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1688620 data_alloc: 218103808 data_used: 7101817
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.479763031s of 14.497175217s, submitted: 11
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7400 session 0x5637d8321500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:47.911772+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124190720 unmapped: 46391296 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d57ea800 session 0x5637d636fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d5770c00 session 0x5637d6360380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f96a4000/0x0/0x4ffc00000, data 0x26a81b8/0x27e6000, compress 0x0/0x0/0x0, omap 0x30b76, meta 0x3d3f48a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:48.912030+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124141568 unmapped: 46440448 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d57eb000 session 0x5637d648c380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:49.912308+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124141568 unmapped: 46440448 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7000 session 0x5637d5f65880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7400 session 0x5637d63608c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:50.912438+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7c00 session 0x5637d8552a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124133376 unmapped: 46448640 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d5770c00 session 0x5637d5567dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d57eb000 session 0x5637d5e4aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:51.912558+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 132464640 unmapped: 38117376 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7400 session 0x5637d57ad340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1739571 data_alloc: 218103808 data_used: 6845209
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6c00 session 0x5637d8618380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:52.912718+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 46743552 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:53.912859+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 123838464 unmapped: 46743552 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6800 session 0x5637d52636c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f8d0c000/0x0/0x4ffc00000, data 0x304220a/0x3180000, compress 0x0/0x0/0x0, omap 0x30d42, meta 0x3d3f2be), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:54.912978+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6800 session 0x5637d8251180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 123797504 unmapped: 46784512 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d5770c00 session 0x5637d6332000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d57eb000 session 0x5637d79c96c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:55.913124+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 123797504 unmapped: 46784512 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6c00 session 0x5637d636f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7400 session 0x5637d6333dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e7400 session 0x5637d57376c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:56.913258+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124108800 unmapped: 46473216 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f8d0c000/0x0/0x4ffc00000, data 0x304220a/0x3180000, compress 0x0/0x0/0x0, omap 0x30dcc, meta 0x3d3f234), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1743922 data_alloc: 218103808 data_used: 6962457
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.003720284s of 10.164169312s, submitted: 46
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:57.913328+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124108800 unmapped: 46473216 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6800 session 0x5637d5263c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6c00 session 0x5637d63ea8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:58.913450+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6400 session 0x5637d5262e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57cd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 42082304 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d57cd400 session 0x5637d86188c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:59.913620+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d57eb000 session 0x5637d78d3c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d5770c00 session 0x5637d78d3880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 heartbeat osd_stat(store_statfs(0x4f8ab3000/0x0/0x4ffc00000, data 0x329a21a/0x33d9000, compress 0x0/0x0/0x0, omap 0x313ff, meta 0x3d3ec01), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 128499712 unmapped: 42082304 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 ms_handle_reset con 0x5637d88e6800 session 0x5637d78d3340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6400 session 0x5637d63eb180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:00.913762+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f8ab3000/0x0/0x4ffc00000, data 0x329a21a/0x33d9000, compress 0x0/0x0/0x0, omap 0x313ff, meta 0x3d3ec01), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 48152576 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:01.913933+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 48152576 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1711306 data_alloc: 218103808 data_used: 6962457
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:02.914109+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 122429440 unmapped: 48152576 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:03.914222+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 47153152 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:04.914309+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 123428864 unmapped: 47153152 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f8f46000/0x0/0x4ffc00000, data 0x2e49d44/0x2f40000, compress 0x0/0x0/0x0, omap 0x31de7, meta 0x3d3e219), peers [0,2] op hist [1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:05.914449+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 46071808 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:06.914608+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 46071808 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1762978 data_alloc: 218103808 data_used: 6986935
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:07.914755+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124510208 unmapped: 46071808 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.809016228s of 11.194635391s, submitted: 145
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e7400 session 0x5637d7cfe1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6c00 session 0x5637d5e4bdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:08.915822+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d5770c00 session 0x5637d86181c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d57eb000 session 0x5637d5862540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6400 session 0x5637d63dec40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6800 session 0x5637d63eb880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6800 session 0x5637d86196c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 46342144 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f83bb000/0x0/0x4ffc00000, data 0x39d8db6/0x3ad1000, compress 0x0/0x0/0x0, omap 0x3247b, meta 0x3d3db85), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:09.915997+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124239872 unmapped: 46342144 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:10.916100+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 46202880 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f8397000/0x0/0x4ffc00000, data 0x39fcdb6/0x3af5000, compress 0x0/0x0/0x0, omap 0x3247b, meta 0x3d3db85), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:11.916228+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124379136 unmapped: 46202880 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1827043 data_alloc: 218103808 data_used: 6986935
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e7000 session 0x5637d5e4b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d5770c00 session 0x5637d6344a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:12.916371+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 46194688 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d57eb000 session 0x5637d6344540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6400 session 0x5637d6344fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:13.916514+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d5770c00 session 0x5637d63eb340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124387328 unmapped: 46194688 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d57eb000 session 0x5637d63ea540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f8397000/0x0/0x4ffc00000, data 0x39fcdb6/0x3af5000, compress 0x0/0x0/0x0, omap 0x32567, meta 0x3d3da99), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6400 session 0x5637d63ea700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1801.6 total, 600.0 interval
                                           Cumulative writes: 13K writes, 50K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 13K writes, 4148 syncs, 3.35 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5426 writes, 15K keys, 5426 commit groups, 1.0 writes per commit group, ingest: 12.42 MB, 0.02 MB/s
                                           Interval WAL: 5426 writes, 2344 syncs, 2.31 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e6800 session 0x5637d78d6e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:14.916934+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d88e7000 session 0x5637d525d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 ms_handle_reset con 0x5637d5770c00 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 123887616 unmapped: 46694400 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 heartbeat osd_stat(store_statfs(0x4f834e000/0x0/0x4ffc00000, data 0x3a44dd9/0x3b3e000, compress 0x0/0x0/0x0, omap 0x32567, meta 0x3d3da99), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:15.917078+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 215 ms_handle_reset con 0x5637d88e6c00 session 0x5637d5262c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 215 ms_handle_reset con 0x5637d89fc400 session 0x5637d8619c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124329984 unmapped: 46252032 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:16.917193+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d89fc800 session 0x5637d636f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 129212416 unmapped: 41369600 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d89fc000 session 0x5637d5262380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1778390 data_alloc: 234881024 data_used: 17773239
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d89fcc00 session 0x5637d5e4ac40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d5770c00 session 0x5637d55ba1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d57eb000 session 0x5637d8619a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d88e6400 session 0x5637d5262a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:17.917297+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d88e6c00 session 0x5637d651f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d88e6800 session 0x5637d525ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d88e7000 session 0x5637d86196c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 127041536 unmapped: 43540480 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 heartbeat osd_stat(store_statfs(0x4fa200000/0x0/0x4ffc00000, data 0x1b4cfe7/0x1c8c000, compress 0x0/0x0/0x0, omap 0x335d9, meta 0x3d3ca27), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.789941788s of 10.044569969s, submitted: 148
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 ms_handle_reset con 0x5637d5770c00 session 0x5637d592cfc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:18.917431+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 43491328 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:19.917560+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 43491328 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:20.917644+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 216 handle_osd_map epochs [216,217], i have 216, src has [1,217]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 217 ms_handle_reset con 0x5637d57eb000 session 0x5637d8251180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 127090688 unmapped: 43491328 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 217 ms_handle_reset con 0x5637d88e6400 session 0x5637d63ea8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:21.917779+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126853120 unmapped: 43728896 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1665659 data_alloc: 218103808 data_used: 9070655
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:22.918262+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126853120 unmapped: 43728896 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 217 ms_handle_reset con 0x5637d5770c00 session 0x5637d78d3c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc ms_handle_reset ms_handle_reset con 0x5637d5515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3514601685
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3514601685,v1:192.168.122.100:6801/3514601685]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: get_auth_request con 0x5637d88e6400 auth_method 0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:23.918393+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 217 handle_osd_map epochs [217,218], i have 218, src has [1,218]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d89fcc00 session 0x5637d6332000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126459904 unmapped: 44122112 heap: 170582016 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d89fc400 session 0x5637d63eb340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 heartbeat osd_stat(store_statfs(0x4fa21f000/0x0/0x4ffc00000, data 0x1b2c5f9/0x1c6b000, compress 0x0/0x0/0x0, omap 0x341af, meta 0x3d3be51), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d89fc800 session 0x5637d78d3880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d89fd000 session 0x5637d6344000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:24.918556+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124698624 unmapped: 54280192 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d5515c00 session 0x5637d57361c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5770c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:25.918730+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d6510c00 session 0x5637d8250000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d69cbc00 session 0x5637d5f656c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124723200 unmapped: 54255616 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:26.918857+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 124854272 unmapped: 54124544 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 heartbeat osd_stat(store_statfs(0x4f6c53000/0x0/0x4ffc00000, data 0x50f85f9/0x5237000, compress 0x0/0x0/0x0, omap 0x34591, meta 0x3d3ba6f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2058120 data_alloc: 218103808 data_used: 6851647
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69cb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d69cb400 session 0x5637d89bd6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:27.919003+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d89fc800 session 0x5637d58b56c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125575168 unmapped: 53403648 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.351563454s of 10.042468071s, submitted: 120
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:28.919109+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 53346304 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 ms_handle_reset con 0x5637d89fcc00 session 0x5637d7cfe1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:29.919327+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 134193152 unmapped: 44785664 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:30.919484+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 218 handle_osd_map epochs [218,219], i have 218, src has [1,219]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126025728 unmapped: 52953088 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:31.919636+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 heartbeat osd_stat(store_statfs(0x4ef44e000/0x0/0x4ffc00000, data 0xc8fa0ea/0xca3c000, compress 0x0/0x0/0x0, omap 0x34897, meta 0x3d3b769), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 126115840 unmapped: 52862976 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 ms_handle_reset con 0x5637d89fd800 session 0x5637d636e700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fdc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2654136 data_alloc: 218103808 data_used: 6851647
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 ms_handle_reset con 0x5637d89fdc00 session 0x5637d80116c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fdc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:32.919753+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 ms_handle_reset con 0x5637d89fdc00 session 0x5637d5f641c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69cb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 ms_handle_reset con 0x5637d69cb400 session 0x5637d636e8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 127811584 unmapped: 51167232 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:33.919880+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 127860736 unmapped: 51118080 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:34.920018+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 ms_handle_reset con 0x5637d89fc800 session 0x5637d63de380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 128098304 unmapped: 50880512 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 heartbeat osd_stat(store_statfs(0x4e9a67000/0x0/0x4ffc00000, data 0x122e214c/0x12425000, compress 0x0/0x0/0x0, omap 0x34cd7, meta 0x3d3b329), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:35.920174+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 128196608 unmapped: 50782208 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:36.920291+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 136798208 unmapped: 42180608 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360528 data_alloc: 218103808 data_used: 6851919
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 ms_handle_reset con 0x5637d89fd800 session 0x5637d5567dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 heartbeat osd_stat(store_statfs(0x4e7267000/0x0/0x4ffc00000, data 0x14ae214c/0x14c25000, compress 0x0/0x0/0x0, omap 0x34cd7, meta 0x3d3b329), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:37.920430+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 128679936 unmapped: 50298880 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:38.920587+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.075103760s of 10.601224899s, submitted: 97
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 128958464 unmapped: 50020352 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 220 ms_handle_reset con 0x5637d975e000 session 0x5637d89bcc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:39.920739+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 129089536 unmapped: 49889280 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 220 handle_osd_map epochs [221,221], i have 220, src has [1,221]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 221 ms_handle_reset con 0x5637d975e400 session 0x5637d8010a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 221 ms_handle_reset con 0x5637d89fcc00 session 0x5637d6360c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:40.920902+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 131366912 unmapped: 47611904 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69cb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:41.921032+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 131489792 unmapped: 47489024 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686183 data_alloc: 218103808 data_used: 6852115
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 221 heartbeat osd_stat(store_statfs(0x4e2260000/0x0/0x4ffc00000, data 0x19ae58e6/0x19c2c000, compress 0x0/0x0/0x0, omap 0x3577f, meta 0x3d3a881), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 221 handle_osd_map epochs [222,222], i have 221, src has [1,222]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:42.921198+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 222 ms_handle_reset con 0x5637d69cb400 session 0x5637d8619180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 131792896 unmapped: 47185920 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 222 heartbeat osd_stat(store_statfs(0x4dfa5c000/0x0/0x4ffc00000, data 0x1c2e7474/0x1c42e000, compress 0x0/0x0/0x0, omap 0x35bdb, meta 0x3d3a425), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:43.921372+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 222 ms_handle_reset con 0x5637d89fc800 session 0x5637d8251180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 131874816 unmapped: 47104000 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:44.921516+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 132030464 unmapped: 46948352 heap: 178978816 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:45.921677+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 223 ms_handle_reset con 0x5637d89fd400 session 0x5637d63321c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 55156736 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:46.921809+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69cb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 223 ms_handle_reset con 0x5637d69cb400 session 0x5637d6344000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 132243456 unmapped: 55132160 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4193294 data_alloc: 218103808 data_used: 6852017
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 223 ms_handle_reset con 0x5637d89fc800 session 0x5637d58b56c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 223 heartbeat osd_stat(store_statfs(0x4dc257000/0x0/0x4ffc00000, data 0x1fae90ba/0x1fc33000, compress 0x0/0x0/0x0, omap 0x36177, meta 0x3d39e89), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:47.921973+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 224 ms_handle_reset con 0x5637d975e400 session 0x5637d63608c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 224 ms_handle_reset con 0x5637d89fd800 session 0x5637d592d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fdc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 132276224 unmapped: 55099392 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 224 ms_handle_reset con 0x5637d975e800 session 0x5637d636f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69cb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 224 ms_handle_reset con 0x5637d69cb400 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:48.922125+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 224 handle_osd_map epochs [225,225], i have 224, src has [1,225]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 ms_handle_reset con 0x5637d89fcc00 session 0x5637d89bca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 ms_handle_reset con 0x5637d89fc800 session 0x5637d525d880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 ms_handle_reset con 0x5637d89fd400 session 0x5637d63eb340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.210040092s of 10.001689911s, submitted: 196
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 ms_handle_reset con 0x5637d89fdc00 session 0x5637d79c9180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69cb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 130793472 unmapped: 56582144 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 ms_handle_reset con 0x5637d89fcc00 session 0x5637d5f656c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 ms_handle_reset con 0x5637d89fd400 session 0x5637d78d3880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:49.922291+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 130727936 unmapped: 56647680 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 heartbeat osd_stat(store_statfs(0x4f9a4c000/0x0/0x4ffc00000, data 0x22eceff/0x243c000, compress 0x0/0x0/0x0, omap 0x36d2d, meta 0x3d392d3), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:50.922441+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 226 ms_handle_reset con 0x5637d89fd800 session 0x5637d89bc700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 226 ms_handle_reset con 0x5637d975e400 session 0x5637d5e4ae00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 135094272 unmapped: 52281344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 226 ms_handle_reset con 0x5637d975ec00 session 0x5637d63448c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:51.922606+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 226 heartbeat osd_stat(store_statfs(0x4f9a4e000/0x0/0x4ffc00000, data 0x22ee555/0x243c000, compress 0x0/0x0/0x0, omap 0x370f7, meta 0x3d38f09), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 134864896 unmapped: 52510720 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1891144 data_alloc: 234881024 data_used: 17141798
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:52.922784+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 134881280 unmapped: 52494336 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:53.922978+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 226 ms_handle_reset con 0x5637d89fd400 session 0x5637d5263c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 52469760 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 226 handle_osd_map epochs [227,227], i have 226, src has [1,227]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 227 ms_handle_reset con 0x5637d89fd800 session 0x5637d5862540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:54.923120+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 135954432 unmapped: 51421184 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 228 ms_handle_reset con 0x5637d975e400 session 0x5637d525d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975f000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 228 ms_handle_reset con 0x5637d975f000 session 0x5637d636fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 228 ms_handle_reset con 0x5637d89fcc00 session 0x5637d63eba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:55.923287+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 135962624 unmapped: 51412992 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:56.923420+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 228 heartbeat osd_stat(store_statfs(0x4f9a43000/0x0/0x4ffc00000, data 0x22f22b3/0x2445000, compress 0x0/0x0/0x0, omap 0x37567, meta 0x3d38a99), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 135979008 unmapped: 51396608 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1904355 data_alloc: 234881024 data_used: 17142768
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 229 ms_handle_reset con 0x5637d89fd400 session 0x5637d63eb180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:57.923547+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 229 heartbeat osd_stat(store_statfs(0x4f9a42000/0x0/0x4ffc00000, data 0x22f3d6a/0x2448000, compress 0x0/0x0/0x0, omap 0x379be, meta 0x3d38642), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 135979008 unmapped: 51396608 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 230 ms_handle_reset con 0x5637d975e400 session 0x5637d78d3a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:58.923736+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 135979008 unmapped: 51396608 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.461005211s of 10.561563492s, submitted: 80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 231 ms_handle_reset con 0x5637d89fd800 session 0x5637d8010000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 231 ms_handle_reset con 0x5637d89fcc00 session 0x5637d57ac700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:59.923951+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 135979008 unmapped: 51396608 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:00.924124+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975f000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 51003392 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:01.924284+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 231 heartbeat osd_stat(store_statfs(0x4f9335000/0x0/0x4ffc00000, data 0x29f5576/0x2b4f000, compress 0x0/0x0/0x0, omap 0x37e35, meta 0x3d381cb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 231 handle_osd_map epochs [231,232], i have 231, src has [1,232]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 232 ms_handle_reset con 0x5637d975f400 session 0x5637d5e4b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145637376 unmapped: 41738240 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1972550 data_alloc: 234881024 data_used: 18978557
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:02.924421+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 233 ms_handle_reset con 0x5637d89fcc00 session 0x5637d57ac380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 233 ms_handle_reset con 0x5637d975f000 session 0x5637d63616c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 42369024 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:03.926153+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 42369024 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 233 ms_handle_reset con 0x5637d89fd400 session 0x5637d57ad880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:04.926516+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 233 ms_handle_reset con 0x5637d975e400 session 0x5637d6332000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 233 handle_osd_map epochs [233,234], i have 233, src has [1,234]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 234 ms_handle_reset con 0x5637d975f800 session 0x5637d58b4380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145006592 unmapped: 42369024 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:05.926671+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 234 heartbeat osd_stat(store_statfs(0x4f92f3000/0x0/0x4ffc00000, data 0x2a2e8ac/0x2b8d000, compress 0x0/0x0/0x0, omap 0x385f5, meta 0x3d37a0b), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 235 ms_handle_reset con 0x5637d975fc00 session 0x5637d55baa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 235 ms_handle_reset con 0x5637d89fd800 session 0x5637d6332540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144424960 unmapped: 42950656 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:06.926816+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 235 heartbeat osd_stat(store_statfs(0x4f92f9000/0x0/0x4ffc00000, data 0x2a304aa/0x2b91000, compress 0x0/0x0/0x0, omap 0x38946, meta 0x3d376ba), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144424960 unmapped: 42950656 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1985776 data_alloc: 234881024 data_used: 19293965
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:07.926975+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 235 ms_handle_reset con 0x5637d5775c00 session 0x5637d651efc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 236 ms_handle_reset con 0x5637d89fcc00 session 0x5637d8011dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144424960 unmapped: 42950656 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:08.927304+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 237 ms_handle_reset con 0x5637d5775c00 session 0x5637d63dea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144424960 unmapped: 42950656 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:09.928151+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.120603561s of 10.505020142s, submitted: 168
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144424960 unmapped: 42950656 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 238 ms_handle_reset con 0x5637d975e400 session 0x5637d6332a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:10.928309+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 238 handle_osd_map epochs [238,239], i have 238, src has [1,239]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 239 ms_handle_reset con 0x5637d5775c00 session 0x5637d592d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 239 ms_handle_reset con 0x5637d89fcc00 session 0x5637d63de540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145661952 unmapped: 41713664 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:11.928663+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 239 heartbeat osd_stat(store_statfs(0x4f8fa2000/0x0/0x4ffc00000, data 0x2d81208/0x2ee4000, compress 0x0/0x0/0x0, omap 0x39297, meta 0x3d36d69), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 239 handle_osd_map epochs [240,240], i have 239, src has [1,240]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145743872 unmapped: 41631744 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2023189 data_alloc: 234881024 data_used: 19295049
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 240 ms_handle_reset con 0x5637d89fd800 session 0x5637d636f6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 240 ms_handle_reset con 0x5637d975e400 session 0x5637d6332e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:12.929123+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145752064 unmapped: 41623552 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:13.929384+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145793024 unmapped: 41582592 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975f000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 241 ms_handle_reset con 0x5637d975f000 session 0x5637d8619a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:14.930131+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 241 ms_handle_reset con 0x5637d5775c00 session 0x5637d525dc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 241 heartbeat osd_stat(store_statfs(0x4f8f24000/0x0/0x4ffc00000, data 0x2e05487/0x2f68000, compress 0x0/0x0/0x0, omap 0x39724, meta 0x3d368dc), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145809408 unmapped: 41566208 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:15.930269+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 243 ms_handle_reset con 0x5637d89fcc00 session 0x5637d57adc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145825792 unmapped: 41549824 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 243 ms_handle_reset con 0x5637d89fd800 session 0x5637d57ac8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:16.930420+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 243 handle_osd_map epochs [243,244], i have 243, src has [1,244]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 244 ms_handle_reset con 0x5637d975fc00 session 0x5637d525d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975e400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 244 heartbeat osd_stat(store_statfs(0x4f8f15000/0x0/0x4ffc00000, data 0x2e0a86c/0x2f71000, compress 0x0/0x0/0x0, omap 0x39f9a, meta 0x3d36066), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 244 ms_handle_reset con 0x5637d975e400 session 0x5637d58b56c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145907712 unmapped: 41467904 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2034730 data_alloc: 234881024 data_used: 19296774
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:17.930574+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 38412288 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:18.930701+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 244 handle_osd_map epochs [245,245], i have 244, src has [1,245]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 245 ms_handle_reset con 0x5637d89fcc00 session 0x5637d592b6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 38412288 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 245 ms_handle_reset con 0x5637d89fd800 session 0x5637d6345180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:19.930914+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 38412288 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 245 heartbeat osd_stat(store_statfs(0x4f8f17000/0x0/0x4ffc00000, data 0x2e0c44c/0x2f73000, compress 0x0/0x0/0x0, omap 0x3a407, meta 0x3d35bf9), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.975224495s of 10.657504082s, submitted: 281
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:20.931148+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 245 handle_osd_map epochs [246,246], i have 245, src has [1,246]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 246 ms_handle_reset con 0x5637d975fc00 session 0x5637d651fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fdc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 246 ms_handle_reset con 0x5637d89fdc00 session 0x5637d6332c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149028864 unmapped: 38346752 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 246 heartbeat osd_stat(store_statfs(0x4f8f19000/0x0/0x4ffc00000, data 0x2e0c44c/0x2f73000, compress 0x0/0x0/0x0, omap 0x3a7d0, meta 0x3d35830), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:21.931324+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149028864 unmapped: 38346752 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2060294 data_alloc: 234881024 data_used: 23063460
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:22.931587+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149028864 unmapped: 38346752 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:23.931739+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149028864 unmapped: 38346752 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 246 heartbeat osd_stat(store_statfs(0x4f8f15000/0x0/0x4ffc00000, data 0x2e0e012/0x2f75000, compress 0x0/0x0/0x0, omap 0x3a9f4, meta 0x3d3560c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:24.931885+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149061632 unmapped: 38313984 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:25.932103+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 246 handle_osd_map epochs [246,247], i have 247, src has [1,247]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149061632 unmapped: 38313984 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 247 ms_handle_reset con 0x5637d87afc00 session 0x5637d648d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:26.932240+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 248 ms_handle_reset con 0x5637d7c72000 session 0x5637d63eb180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 248 ms_handle_reset con 0x5637d87afc00 session 0x5637d5263880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 38141952 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2069407 data_alloc: 234881024 data_used: 23067556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:27.932380+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 149233664 unmapped: 38141952 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:28.932490+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 150790144 unmapped: 36585472 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 249 ms_handle_reset con 0x5637d89fd800 session 0x5637d636f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:29.932658+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 249 heartbeat osd_stat(store_statfs(0x4f840b000/0x0/0x4ffc00000, data 0x39147a0/0x3a7f000, compress 0x0/0x0/0x0, omap 0x3b446, meta 0x3d34bba), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151445504 unmapped: 35930112 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 ms_handle_reset con 0x5637d89fcc00 session 0x5637d79c8a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:30.932788+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fdc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.368524551s of 10.698327065s, submitted: 190
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 ms_handle_reset con 0x5637d89fdc00 session 0x5637d57ad340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151478272 unmapped: 35897344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 ms_handle_reset con 0x5637d7c72000 session 0x5637d5262e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:31.932912+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 ms_handle_reset con 0x5637d87afc00 session 0x5637d57ad6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 ms_handle_reset con 0x5637d89fcc00 session 0x5637d79c8a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 ms_handle_reset con 0x5637d89fd800 session 0x5637d592b6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 heartbeat osd_stat(store_statfs(0x4f837d000/0x0/0x4ffc00000, data 0x399b01e/0x3b0d000, compress 0x0/0x0/0x0, omap 0x3bb00, meta 0x3d34500), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151494656 unmapped: 35880960 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2159031 data_alloc: 234881024 data_used: 23169956
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:32.933093+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 34619392 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 250 handle_osd_map epochs [251,251], i have 250, src has [1,251]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:33.933205+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 34603008 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f8376000/0x0/0x4ffc00000, data 0x3ed2c1e/0x3b14000, compress 0x0/0x0/0x0, omap 0x3c5eb, meta 0x3d33a15), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:34.933406+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d87af800 session 0x5637d7cfee00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d975fc00 session 0x5637d636f6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152797184 unmapped: 34578432 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d87af400 session 0x5637d8618fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:35.933578+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d7c72000 session 0x5637d6332540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151969792 unmapped: 35405824 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d87afc00 session 0x5637d6360700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:36.933694+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d89fcc00 session 0x5637d63321c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151977984 unmapped: 35397632 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2209271 data_alloc: 234881024 data_used: 23178733
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:37.933826+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d7c72000 session 0x5637d8618000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d87af400 session 0x5637d63eba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151986176 unmapped: 35389440 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:38.933934+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d87afc00 session 0x5637d6361500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d89fcc00 session 0x5637d63336c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d975fc00 session 0x5637d648da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151986176 unmapped: 35389440 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:39.934118+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d7c72000 session 0x5637d63616c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 heartbeat osd_stat(store_statfs(0x4f835b000/0x0/0x4ffc00000, data 0x3ef2b9c/0x3b31000, compress 0x0/0x0/0x0, omap 0x3cbf9, meta 0x3d33407), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152018944 unmapped: 35356672 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d87af400 session 0x5637d57ac380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d87afc00 session 0x5637d89bd880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fcc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:40.934220+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d89fd800 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 ms_handle_reset con 0x5637d89fcc00 session 0x5637d78d28c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.771027565s of 10.055007935s, submitted: 107
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152051712 unmapped: 35323904 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:41.934338+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152821760 unmapped: 34553856 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2218658 data_alloc: 234881024 data_used: 24105453
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:42.934481+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 252 handle_osd_map epochs [253,253], i have 252, src has [1,253]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 253 ms_handle_reset con 0x5637d87afc00 session 0x5637d89bc8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 253 ms_handle_reset con 0x5637d89fd800 session 0x5637d636e8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 253 heartbeat osd_stat(store_statfs(0x4f834b000/0x0/0x4ffc00000, data 0x3f00199/0x3b3f000, compress 0x0/0x0/0x0, omap 0x3d3a1, meta 0x3d32c5f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 34512896 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:43.934612+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 34512896 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:44.934725+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 34512896 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:45.934852+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 34512896 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:46.934988+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152862720 unmapped: 34512896 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2224126 data_alloc: 234881024 data_used: 24105725
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:47.935132+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 253 handle_osd_map epochs [253,254], i have 253, src has [1,254]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 ms_handle_reset con 0x5637d81e1000 session 0x5637d6097340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152928256 unmapped: 34447360 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f834b000/0x0/0x4ffc00000, data 0x3f005bc/0x3b41000, compress 0x0/0x0/0x0, omap 0x3d3dd, meta 0x3d32c23), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:48.935277+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 heartbeat osd_stat(store_statfs(0x4f834b000/0x0/0x4ffc00000, data 0x3f005bc/0x3b41000, compress 0x0/0x0/0x0, omap 0x3d3dd, meta 0x3d32c23), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152928256 unmapped: 34447360 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:49.935427+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 ms_handle_reset con 0x5637d81e0800 session 0x5637d648d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 ms_handle_reset con 0x5637d8e64800 session 0x5637d78d21c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 ms_handle_reset con 0x5637d8e64c00 session 0x5637d5f65c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 ms_handle_reset con 0x5637d81e0800 session 0x5637d6097a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 34144256 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:50.935559+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 255 ms_handle_reset con 0x5637d81e1000 session 0x5637d5f421c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 255 ms_handle_reset con 0x5637d87afc00 session 0x5637d592dc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 255 heartbeat osd_stat(store_statfs(0x4f833e000/0x0/0x4ffc00000, data 0x3f06c45/0x3b4c000, compress 0x0/0x0/0x0, omap 0x3dbae, meta 0x3d32452), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153264128 unmapped: 34111488 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:51.935709+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d89fd800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.650744438s of 10.798401833s, submitted: 96
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153264128 unmapped: 34111488 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2235288 data_alloc: 234881024 data_used: 24105725
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:52.935841+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 255 handle_osd_map epochs [255,256], i have 256, src has [1,256]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 256 heartbeat osd_stat(store_statfs(0x4f8340000/0x0/0x4ffc00000, data 0x3f06bf3/0x3b4a000, compress 0x0/0x0/0x0, omap 0x3dbae, meta 0x3d32452), peers [0,2] op hist [1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 256 ms_handle_reset con 0x5637d89fd800 session 0x5637d8010e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153354240 unmapped: 34021376 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:53.935989+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 256 handle_osd_map epochs [256,257], i have 256, src has [1,257]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 257 ms_handle_reset con 0x5637d81e0800 session 0x5637d5263340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153821184 unmapped: 33554432 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:54.936113+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153944064 unmapped: 33431552 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:55.936279+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 257 ms_handle_reset con 0x5637d87afc00 session 0x5637d7cfea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 257 heartbeat osd_stat(store_statfs(0x4f833a000/0x0/0x4ffc00000, data 0x3f0a39b/0x3b50000, compress 0x0/0x0/0x0, omap 0x3e6d8, meta 0x3d31928), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 257 ms_handle_reset con 0x5637d8e64400 session 0x5637d55ba380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 257 ms_handle_reset con 0x5637d8e64c00 session 0x5637d8250380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 257 ms_handle_reset con 0x5637d8434c00 session 0x5637d63608c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154337280 unmapped: 33038336 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:56.936490+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 258 heartbeat osd_stat(store_statfs(0x4f833a000/0x0/0x4ffc00000, data 0x3f0a39b/0x3b50000, compress 0x0/0x0/0x0, omap 0x3e6d8, meta 0x3d31928), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 258 ms_handle_reset con 0x5637d87afc00 session 0x5637d63dfc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 258 ms_handle_reset con 0x5637d81e0800 session 0x5637d6344c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154402816 unmapped: 32972800 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2259673 data_alloc: 234881024 data_used: 26701270
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:57.936663+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 258 handle_osd_map epochs [258,259], i have 258, src has [1,259]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 259 ms_handle_reset con 0x5637d8e64c00 session 0x5637d648d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 259 ms_handle_reset con 0x5637d8e64400 session 0x5637d648ce00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 259 ms_handle_reset con 0x5637d8e64000 session 0x5637d5f43c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 259 ms_handle_reset con 0x5637d79abc00 session 0x5637d7e8f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154468352 unmapped: 32907264 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:58.936817+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 259 handle_osd_map epochs [260,260], i have 259, src has [1,260]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 260 ms_handle_reset con 0x5637d81e0800 session 0x5637d5863180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 260 ms_handle_reset con 0x5637d87afc00 session 0x5637d648d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 260 ms_handle_reset con 0x5637d8e64400 session 0x5637d78d7180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154517504 unmapped: 32858112 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:59.936994+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 260 handle_osd_map epochs [260,261], i have 260, src has [1,261]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 261 ms_handle_reset con 0x5637d8e64c00 session 0x5637d57ad340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 261 ms_handle_reset con 0x5637d81e1000 session 0x5637d7e8fa40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154583040 unmapped: 32792576 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:00.937114+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 261 handle_osd_map epochs [261,262], i have 261, src has [1,262]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 ms_handle_reset con 0x5637d5775c00 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154607616 unmapped: 32768000 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:01.942670+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 ms_handle_reset con 0x5637d81e0800 session 0x5637d6097880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 ms_handle_reset con 0x5637d79abc00 session 0x5637d8618000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 ms_handle_reset con 0x5637d8e64400 session 0x5637d7e8e8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 ms_handle_reset con 0x5637d87afc00 session 0x5637d648da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 ms_handle_reset con 0x5637d5775c00 session 0x5637d636e700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153878528 unmapped: 33497088 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2163826 data_alloc: 234881024 data_used: 22824542
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:02.942821+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 heartbeat osd_stat(store_statfs(0x4f8e27000/0x0/0x4ffc00000, data 0x2f97dd8/0x2be6000, compress 0x0/0x0/0x0, omap 0x40194, meta 0x3d2fe6c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153878528 unmapped: 33497088 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:03.943197+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.221235275s of 11.667595863s, submitted: 164
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153911296 unmapped: 33464320 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:04.943308+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 262 handle_osd_map epochs [263,263], i have 262, src has [1,263]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 263 ms_handle_reset con 0x5637d79abc00 session 0x5637d78d2000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153911296 unmapped: 33464320 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:05.943448+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 263 handle_osd_map epochs [263,264], i have 263, src has [1,264]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153919488 unmapped: 33456128 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 264 ms_handle_reset con 0x5637d81e1000 session 0x5637d78d2700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 264 ms_handle_reset con 0x5637d81e0800 session 0x5637d6097340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:06.946094+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 265 ms_handle_reset con 0x5637d5775c00 session 0x5637d648c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2193390 data_alloc: 234881024 data_used: 22730837
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153927680 unmapped: 33447936 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:07.946457+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 265 ms_handle_reset con 0x5637d79abc00 session 0x5637d89bd880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 265 ms_handle_reset con 0x5637d81e1000 session 0x5637d8321a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 265 ms_handle_reset con 0x5637d87afc00 session 0x5637d6361500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 265 heartbeat osd_stat(store_statfs(0x4f80de000/0x0/0x4ffc00000, data 0x2fe3fef/0x2c0a000, compress 0x0/0x0/0x0, omap 0x40ed2, meta 0x4ecf12e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153944064 unmapped: 33431552 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:08.946603+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 265 ms_handle_reset con 0x5637d8e64400 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 33423360 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:09.946823+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 265 handle_osd_map epochs [266,266], i have 265, src has [1,266]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 266 ms_handle_reset con 0x5637d5775c00 session 0x5637d6345500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 266 ms_handle_reset con 0x5637d79abc00 session 0x5637d7e8f180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 266 ms_handle_reset con 0x5637d81e1000 session 0x5637d6360700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 266 ms_handle_reset con 0x5637d87afc00 session 0x5637d5262fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:10.948115+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155140096 unmapped: 32235520 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d784e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 266 ms_handle_reset con 0x5637d784e000 session 0x5637d636e380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 266 ms_handle_reset con 0x5637d5775c00 session 0x5637d6361500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 ms_handle_reset con 0x5637d79abc00 session 0x5637d78d7180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:11.948275+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155123712 unmapped: 32251904 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2193238 data_alloc: 234881024 data_used: 23255074
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:12.948424+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155123712 unmapped: 32251904 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 heartbeat osd_stat(store_statfs(0x4f80dc000/0x0/0x4ffc00000, data 0x2fe77ae/0x2c0c000, compress 0x0/0x0/0x0, omap 0x40a04, meta 0x4ecf5fc), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:13.948776+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155205632 unmapped: 32169984 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 ms_handle_reset con 0x5637d81e1000 session 0x5637d6344c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87afc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e6000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 ms_handle_reset con 0x5637d88e6000 session 0x5637d5863180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.386240005s of 10.219320297s, submitted: 144
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 ms_handle_reset con 0x5637d87afc00 session 0x5637d63608c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 ms_handle_reset con 0x5637d57ea400 session 0x5637d7e8e8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:14.948920+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 161390592 unmapped: 25985024 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 ms_handle_reset con 0x5637d5775c00 session 0x5637d5e4b340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 ms_handle_reset con 0x5637d79abc00 session 0x5637d6097880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:15.949321+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155582464 unmapped: 31793152 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 267 handle_osd_map epochs [267,268], i have 268, src has [1,268]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 268 ms_handle_reset con 0x5637d7c72000 session 0x5637d6332a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 268 ms_handle_reset con 0x5637d87af400 session 0x5637d86188c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:16.949457+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 31784960 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 268 ms_handle_reset con 0x5637d87af400 session 0x5637d57ad340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2233996 data_alloc: 234881024 data_used: 23255074
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:17.950312+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155648000 unmapped: 31727616 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 268 handle_osd_map epochs [269,269], i have 268, src has [1,269]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 269 ms_handle_reset con 0x5637d57ea400 session 0x5637d7e8f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:18.950496+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155860992 unmapped: 31514624 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 269 ms_handle_reset con 0x5637d7c72000 session 0x5637d648cc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 269 handle_osd_map epochs [270,270], i have 269, src has [1,270]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 ms_handle_reset con 0x5637d79abc00 session 0x5637d8010e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 ms_handle_reset con 0x5637d81e1000 session 0x5637d6333500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 ms_handle_reset con 0x5637d5775c00 session 0x5637d8619500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 ms_handle_reset con 0x5637d57ea400 session 0x5637d7e8f180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 heartbeat osd_stat(store_statfs(0x4f76e0000/0x0/0x4ffc00000, data 0x39de9cc/0x3608000, compress 0x0/0x0/0x0, omap 0x407f7, meta 0x4ecf809), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:19.950733+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155869184 unmapped: 31506432 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 ms_handle_reset con 0x5637d79abc00 session 0x5637d648ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 ms_handle_reset con 0x5637d81e1000 session 0x5637d7e8e380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:20.950890+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155877376 unmapped: 31498240 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 ms_handle_reset con 0x5637d87af400 session 0x5637d57ad6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 ms_handle_reset con 0x5637d7c72000 session 0x5637d648c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 ms_handle_reset con 0x5637d5775c00 session 0x5637d6360700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 ms_handle_reset con 0x5637d69cb400 session 0x5637d5262a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 ms_handle_reset con 0x5637d89fc800 session 0x5637d5791340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:21.951277+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 152952832 unmapped: 34422784 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57ea400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79abc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 ms_handle_reset con 0x5637d79abc00 session 0x5637d636ea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 ms_handle_reset con 0x5637d5775c00 session 0x5637d8251180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2270470 data_alloc: 234881024 data_used: 19519439
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:22.951486+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154009600 unmapped: 33366016 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 272 ms_handle_reset con 0x5637d57eb400 session 0x5637d592cfc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da270c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 272 ms_handle_reset con 0x5637da270c00 session 0x5637d6361a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da270400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 272 ms_handle_reset con 0x5637da270400 session 0x5637d6361340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da271800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 272 ms_handle_reset con 0x5637d57ea400 session 0x5637d592c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 272 ms_handle_reset con 0x5637da271800 session 0x5637d6360a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:23.951827+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145702912 unmapped: 41672704 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.770442963s of 10.238311768s, submitted: 223
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57eb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:24.951998+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145629184 unmapped: 41746432 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 272 handle_osd_map epochs [272,273], i have 272, src has [1,273]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 273 ms_handle_reset con 0x5637d57eb400 session 0x5637d7e8fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 273 ms_handle_reset con 0x5637d5775c00 session 0x5637d63dfdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 273 ms_handle_reset con 0x5637d87cec00 session 0x5637d55ba8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7cbb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 273 ms_handle_reset con 0x5637d7cbb800 session 0x5637d592b6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:25.952395+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145620992 unmapped: 41754624 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 273 heartbeat osd_stat(store_statfs(0x4f81f1000/0x0/0x4ffc00000, data 0x296be18/0x2af9000, compress 0x0/0x0/0x0, omap 0x3fe9c, meta 0x4ed0164), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 274 ms_handle_reset con 0x5637d87cec00 session 0x5637d7e8e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 274 ms_handle_reset con 0x5637d87ce800 session 0x5637d525ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:26.952663+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145637376 unmapped: 41738240 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 274 ms_handle_reset con 0x5637d87cf000 session 0x5637d6344a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5ecac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 275 ms_handle_reset con 0x5637d5775c00 session 0x5637d5f428c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:27.953201+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2020105 data_alloc: 218103808 data_used: 7403421
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145678336 unmapped: 41697280 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 276 ms_handle_reset con 0x5637d5eca400 session 0x5637d89bc8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 276 ms_handle_reset con 0x5637d5ecac00 session 0x5637d78d3dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 276 ms_handle_reset con 0x5637d5775c00 session 0x5637d58621c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 276 ms_handle_reset con 0x5637d87ce800 session 0x5637d80621c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:28.953375+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145702912 unmapped: 41672704 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 276 ms_handle_reset con 0x5637d87cec00 session 0x5637d6360fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5ecb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:29.953684+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 276 ms_handle_reset con 0x5637d5ecb000 session 0x5637d79c8c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145842176 unmapped: 41533440 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 276 handle_osd_map epochs [277,277], i have 276, src has [1,277]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 277 ms_handle_reset con 0x5637d87cf000 session 0x5637d63321c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 277 heartbeat osd_stat(store_statfs(0x4f91fd000/0x0/0x4ffc00000, data 0x195eeff/0x1aed000, compress 0x0/0x0/0x0, omap 0x3f875, meta 0x4ed078b), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:30.953833+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 277 ms_handle_reset con 0x5637d5775c00 session 0x5637d78d2a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5ecac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145866752 unmapped: 41508864 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 277 ms_handle_reset con 0x5637d87ce800 session 0x5637d78d2540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 277 handle_osd_map epochs [278,278], i have 277, src has [1,278]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:31.954089+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145883136 unmapped: 41492480 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57e6000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:32.954259+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1992542 data_alloc: 218103808 data_used: 7404340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145891328 unmapped: 41484288 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 278 ms_handle_reset con 0x5637d87cec00 session 0x5637d592afc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 278 ms_handle_reset con 0x5637d5ecac00 session 0x5637d5863dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:33.954394+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 278 ms_handle_reset con 0x5637d80b7000 session 0x5637d5863180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 42139648 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 279 ms_handle_reset con 0x5637d5775c00 session 0x5637d6360a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:34.954697+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 42139648 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.717664719s of 10.574617386s, submitted: 285
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 280 ms_handle_reset con 0x5637d87ce800 session 0x5637d78d28c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 280 ms_handle_reset con 0x5637d87cec00 session 0x5637d78d2380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 280 ms_handle_reset con 0x5637d57e6000 session 0x5637d5e4a540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:35.954938+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145244160 unmapped: 42131456 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 280 heartbeat osd_stat(store_statfs(0x4f91f1000/0x0/0x4ffc00000, data 0x1964738/0x1af7000, compress 0x0/0x0/0x0, omap 0x3f60a, meta 0x4ed09f6), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 280 handle_osd_map epochs [281,281], i have 280, src has [1,281]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 281 ms_handle_reset con 0x5637d5775c00 session 0x5637d7e8f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:36.955139+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 42139648 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 281 ms_handle_reset con 0x5637d80b7000 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 281 heartbeat osd_stat(store_statfs(0x4f91ed000/0x0/0x4ffc00000, data 0x196628b/0x1afd000, compress 0x0/0x0/0x0, omap 0x3f994, meta 0x4ed066c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:37.955427+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2011813 data_alloc: 218103808 data_used: 6879954
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145235968 unmapped: 42139648 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:38.955563+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145227776 unmapped: 42147840 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 281 ms_handle_reset con 0x5637d87cf000 session 0x5637d5e4b6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 281 ms_handle_reset con 0x5637d8434000 session 0x5637d592ca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 281 ms_handle_reset con 0x5637d81e0c00 session 0x5637d5f64380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 282 ms_handle_reset con 0x5637d5775c00 session 0x5637d79c8e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 282 ms_handle_reset con 0x5637d87cec00 session 0x5637d8010e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 282 ms_handle_reset con 0x5637d80b7000 session 0x5637d79c8380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 282 ms_handle_reset con 0x5637d8434000 session 0x5637d78d21c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:39.955727+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 282 ms_handle_reset con 0x5637d87ce800 session 0x5637d76bfc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145375232 unmapped: 42000384 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:40.955969+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145383424 unmapped: 41992192 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 283 ms_handle_reset con 0x5637d87ce800 session 0x5637d78d2540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 283 ms_handle_reset con 0x5637d5775c00 session 0x5637d8618380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:41.956112+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144629760 unmapped: 42745856 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 283 handle_osd_map epochs [284,284], i have 283, src has [1,284]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 284 ms_handle_reset con 0x5637d80b7000 session 0x5637d7cfea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 284 ms_handle_reset con 0x5637d8434000 session 0x5637d8552000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:42.956335+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2059568 data_alloc: 218103808 data_used: 6879954
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 284 heartbeat osd_stat(store_statfs(0x4f8c0c000/0x0/0x4ffc00000, data 0x1f4a8e1/0x20e0000, compress 0x0/0x0/0x0, omap 0x40141, meta 0x4ecfebf), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144465920 unmapped: 42909696 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 284 ms_handle_reset con 0x5637d87cec00 session 0x5637d5228000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 284 handle_osd_map epochs [284,285], i have 285, src has [1,285]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 285 ms_handle_reset con 0x5637d87cec00 session 0x5637d63defc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 285 ms_handle_reset con 0x5637d5775c00 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:43.956517+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144564224 unmapped: 42811392 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:44.956665+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 285 ms_handle_reset con 0x5637d80b7000 session 0x5637d8d0d340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144564224 unmapped: 42811392 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.734203339s of 10.090179443s, submitted: 161
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 285 ms_handle_reset con 0x5637d87ce800 session 0x5637d8320a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 285 heartbeat osd_stat(store_statfs(0x4f8c04000/0x0/0x4ffc00000, data 0x1f4db9e/0x20e4000, compress 0x0/0x0/0x0, omap 0x409f4, meta 0x4ecf60c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:45.956802+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 286 ms_handle_reset con 0x5637d87cf000 session 0x5637d6360540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144482304 unmapped: 42893312 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 287 ms_handle_reset con 0x5637d81e1800 session 0x5637d6332700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 287 ms_handle_reset con 0x5637d8434000 session 0x5637d63dfc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:46.956961+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 42885120 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:47.957107+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2070723 data_alloc: 218103808 data_used: 6879938
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 42885120 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:48.957217+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 288 ms_handle_reset con 0x5637d5775c00 session 0x5637d86188c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144490496 unmapped: 42885120 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:49.957351+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 289 ms_handle_reset con 0x5637d87cf000 session 0x5637d648cc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 42868736 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 289 heartbeat osd_stat(store_statfs(0x4f8bf8000/0x0/0x4ffc00000, data 0x1f54b0a/0x20f0000, compress 0x0/0x0/0x0, omap 0x41009, meta 0x4eceff7), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:50.957454+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144506880 unmapped: 42868736 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b7000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 ms_handle_reset con 0x5637d87ce800 session 0x5637d8010e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ce800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 ms_handle_reset con 0x5637d87ce800 session 0x5637d8d0d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:51.957574+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 ms_handle_reset con 0x5637d5775c00 session 0x5637d636ea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d81e1800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 ms_handle_reset con 0x5637d81e1800 session 0x5637d592a540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 ms_handle_reset con 0x5637d8434000 session 0x5637d79c8e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 ms_handle_reset con 0x5637d87cf000 session 0x5637d7e8f180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144400384 unmapped: 42975232 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:52.957688+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2183064 data_alloc: 234881024 data_used: 11703490
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144334848 unmapped: 43040768 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 ms_handle_reset con 0x5637d5775c00 session 0x5637d592ac40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 290 handle_osd_map epochs [290,291], i have 291, src has [1,291]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:53.957791+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 291 ms_handle_reset con 0x5637d7c73000 session 0x5637d63321c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144351232 unmapped: 43024384 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:54.957898+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 144351232 unmapped: 43024384 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 291 heartbeat osd_stat(store_statfs(0x4f80fb000/0x0/0x4ffc00000, data 0x2a4d3ae/0x2bed000, compress 0x0/0x0/0x0, omap 0x40c8a, meta 0x4ecf376), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:55.958028+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.597621918s of 10.772197723s, submitted: 122
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 292 ms_handle_reset con 0x5637d7c73800 session 0x5637d7e8e380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 292 ms_handle_reset con 0x5637d87cf000 session 0x5637d79c9880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145408000 unmapped: 41967616 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 293 ms_handle_reset con 0x5637d7c72400 session 0x5637d63336c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:56.958347+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145408000 unmapped: 41967616 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 ms_handle_reset con 0x5637d7c73800 session 0x5637d648ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:57.958688+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 ms_handle_reset con 0x5637d87cf000 session 0x5637d78d2a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 ms_handle_reset con 0x5637d7c73000 session 0x5637d57ac380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2200850 data_alloc: 234881024 data_used: 12638548
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 145842176 unmapped: 41533440 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f80f6000/0x0/0x4ffc00000, data 0x2a526ba/0x2bf4000, compress 0x0/0x0/0x0, omap 0x3ff01, meta 0x4ed00ff), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:58.958806+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 ms_handle_reset con 0x5637d8613c00 session 0x5637d5e4a8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151330816 unmapped: 36044800 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 heartbeat osd_stat(store_statfs(0x4f80f5000/0x0/0x4ffc00000, data 0x2a526ca/0x2bf5000, compress 0x0/0x0/0x0, omap 0x401e2, meta 0x4ecfe1e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:59.958953+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151363584 unmapped: 36012032 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:00.959108+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 ms_handle_reset con 0x5637d8612800 session 0x5637d5e4ba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151625728 unmapped: 35749888 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 295 heartbeat osd_stat(store_statfs(0x4f80f6000/0x0/0x4ffc00000, data 0x2a5272c/0x2bf6000, compress 0x0/0x0/0x0, omap 0x401e2, meta 0x4ecfe1e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 295 ms_handle_reset con 0x5637d7c73000 session 0x5637d525da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:01.959250+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 151633920 unmapped: 35741696 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 296 ms_handle_reset con 0x5637d7c73800 session 0x5637d648d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 296 ms_handle_reset con 0x5637d8612400 session 0x5637d5263500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:02.959427+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2267940 data_alloc: 234881024 data_used: 21903037
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153010176 unmapped: 34365440 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:03.959595+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160268288 unmapped: 27107328 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:04.959766+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154705920 unmapped: 32669696 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 ms_handle_reset con 0x5637d8613c00 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87cf000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:05.959907+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.680281162s of 10.072970390s, submitted: 240
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156696576 unmapped: 30679040 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f743e000/0x0/0x4ffc00000, data 0x3704ab4/0x38ab000, compress 0x0/0x0/0x0, omap 0x40a0f, meta 0x4ecf5f1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 ms_handle_reset con 0x5637d87cf000 session 0x5637d8010e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:06.960077+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156737536 unmapped: 30638080 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 ms_handle_reset con 0x5637d7c73000 session 0x5637d5e4a8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:07.960213+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f7441000/0x0/0x4ffc00000, data 0x3704ab4/0x38ab000, compress 0x0/0x0/0x0, omap 0x40c6a, meta 0x4ecf396), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2345252 data_alloc: 234881024 data_used: 22202029
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156737536 unmapped: 30638080 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 heartbeat osd_stat(store_statfs(0x4f7441000/0x0/0x4ffc00000, data 0x3704ab4/0x38ab000, compress 0x0/0x0/0x0, omap 0x405c7, meta 0x4ecfa39), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:08.960336+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158474240 unmapped: 28901376 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:09.960526+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158474240 unmapped: 28901376 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:10.960730+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158523392 unmapped: 28852224 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 ms_handle_reset con 0x5637d8612400 session 0x5637d592a700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 298 ms_handle_reset con 0x5637d8612000 session 0x5637d57acc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:11.960924+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160014336 unmapped: 27361280 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 299 ms_handle_reset con 0x5637d8613c00 session 0x5637d7cff500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 299 ms_handle_reset con 0x5637d80b7000 session 0x5637d5863880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 299 ms_handle_reset con 0x5637d7c73800 session 0x5637d5863180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 299 ms_handle_reset con 0x5637d8613c00 session 0x5637d5f421c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:12.961143+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2381453 data_alloc: 234881024 data_used: 22439597
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159522816 unmapped: 27852800 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 299 heartbeat osd_stat(store_statfs(0x4f7028000/0x0/0x4ffc00000, data 0x3b18234/0x3cc2000, compress 0x0/0x0/0x0, omap 0x4064d, meta 0x4ecf9b3), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:13.961294+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159539200 unmapped: 27836416 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 ms_handle_reset con 0x5637d8612c00 session 0x5637d7e8f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 ms_handle_reset con 0x5637d8613000 session 0x5637d8062c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:14.961474+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159555584 unmapped: 27820032 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 ms_handle_reset con 0x5637dc342400 session 0x5637d6360700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:15.961958+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159555584 unmapped: 27820032 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:16.962175+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 ms_handle_reset con 0x5637d8612c00 session 0x5637d78d2380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159555584 unmapped: 27820032 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 ms_handle_reset con 0x5637d8613000 session 0x5637d636efc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.606744766s of 11.268082619s, submitted: 160
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 heartbeat osd_stat(store_statfs(0x4f7023000/0x0/0x4ffc00000, data 0x3b19e24/0x3cc5000, compress 0x0/0x0/0x0, omap 0x3fdcd, meta 0x4ed0233), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 ms_handle_reset con 0x5637d8613c00 session 0x5637d7cfec40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 300 handle_osd_map epochs [301,301], i have 301, src has [1,301]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 301 ms_handle_reset con 0x5637dc342800 session 0x5637d8320e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 301 ms_handle_reset con 0x5637d7c73800 session 0x5637d8062380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:17.962350+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2387397 data_alloc: 234881024 data_used: 22440210
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159694848 unmapped: 27680768 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:18.962636+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159694848 unmapped: 27680768 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 302 ms_handle_reset con 0x5637d8612c00 session 0x5637d7e8ec40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 302 ms_handle_reset con 0x5637d8613000 session 0x5637d651ea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:19.962789+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159694848 unmapped: 27680768 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 302 heartbeat osd_stat(store_statfs(0x4f6ffd000/0x0/0x4ffc00000, data 0x3b3f5cc/0x3ced000, compress 0x0/0x0/0x0, omap 0x400b6, meta 0x4ecff4a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:20.962915+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159694848 unmapped: 27680768 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:21.963139+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159711232 unmapped: 27664384 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:22.963315+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2392929 data_alloc: 234881024 data_used: 22440226
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159711232 unmapped: 27664384 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 304 ms_handle_reset con 0x5637d8613c00 session 0x5637d79c9a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:23.963476+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159711232 unmapped: 27664384 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 304 heartbeat osd_stat(store_statfs(0x4f6ff6000/0x0/0x4ffc00000, data 0x3b42c63/0x3cf2000, compress 0x0/0x0/0x0, omap 0x4039f, meta 0x4ecfc61), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 304 ms_handle_reset con 0x5637dc342800 session 0x5637d55ba8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:24.963642+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160145408 unmapped: 27230208 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:25.963872+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160276480 unmapped: 27099136 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:26.964005+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160276480 unmapped: 27099136 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.672414780s of 10.054655075s, submitted: 64
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:27.964218+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2401107 data_alloc: 234881024 data_used: 22528985
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 27090944 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 305 ms_handle_reset con 0x5637d8574400 session 0x5637d592cc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:28.964380+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160284672 unmapped: 27090944 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:29.964546+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 306 ms_handle_reset con 0x5637d8574400 session 0x5637d55bbc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 306 heartbeat osd_stat(store_statfs(0x4f6fd4000/0x0/0x4ffc00000, data 0x3b6880d/0x3d18000, compress 0x0/0x0/0x0, omap 0x40468, meta 0x4ecfb98), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159662080 unmapped: 27713536 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:30.964707+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159670272 unmapped: 27705344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 308 ms_handle_reset con 0x5637d8612c00 session 0x5637d8321500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 308 ms_handle_reset con 0x5637d8613000 session 0x5637d57916c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:31.964875+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159670272 unmapped: 27705344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:32.965020+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 308 heartbeat osd_stat(store_statfs(0x4f6fc4000/0x0/0x4ffc00000, data 0x3b70cd1/0x3d24000, compress 0x0/0x0/0x0, omap 0x40de4, meta 0x4ecf21c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2409549 data_alloc: 234881024 data_used: 22528887
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159670272 unmapped: 27705344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:33.965208+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159670272 unmapped: 27705344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:34.965369+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159670272 unmapped: 27705344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:35.965542+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 308 ms_handle_reset con 0x5637d87ae800 session 0x5637d78d2c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159801344 unmapped: 27574272 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87aec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 309 ms_handle_reset con 0x5637d87aec00 session 0x5637d592c8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:36.965699+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160137216 unmapped: 27238400 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.756746292s of 10.165657997s, submitted: 126
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 ms_handle_reset con 0x5637d87af000 session 0x5637d6097500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 ms_handle_reset con 0x5637d8574400 session 0x5637d5f64380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:37.965839+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 ms_handle_reset con 0x5637dc342800 session 0x5637d8d0c1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 ms_handle_reset con 0x5637d8612c00 session 0x5637d8552000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2438216 data_alloc: 234881024 data_used: 23503931
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160407552 unmapped: 26968064 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f6fb3000/0x0/0x4ffc00000, data 0x3b79a8d/0x3d33000, compress 0x0/0x0/0x0, omap 0x41257, meta 0x4eceda9), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:38.966188+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160505856 unmapped: 26869760 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f6fb3000/0x0/0x4ffc00000, data 0x3b79a8d/0x3d33000, compress 0x0/0x0/0x0, omap 0x41257, meta 0x4eceda9), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:39.966421+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 ms_handle_reset con 0x5637d8612400 session 0x5637d5f65c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 ms_handle_reset con 0x5637d8612400 session 0x5637d5262c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160505856 unmapped: 26869760 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 heartbeat osd_stat(store_statfs(0x4f6fb8000/0x0/0x4ffc00000, data 0x3b79aef/0x3d34000, compress 0x0/0x0/0x0, omap 0x4144f, meta 0x4ecebb1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 311 ms_handle_reset con 0x5637d8574400 session 0x5637d592ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:40.966554+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 311 ms_handle_reset con 0x5637dc342800 session 0x5637d78d2c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160514048 unmapped: 26861568 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 312 heartbeat osd_stat(store_statfs(0x4f6fb2000/0x0/0x4ffc00000, data 0x3b7b6ed/0x3d38000, compress 0x0/0x0/0x0, omap 0x416b2, meta 0x4ece94e), peers [0,2] op hist [1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 312 ms_handle_reset con 0x5637d8612c00 session 0x5637d57ac380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 312 ms_handle_reset con 0x5637d87af000 session 0x5637d592bdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 312 ms_handle_reset con 0x5637d8613000 session 0x5637d8d0cc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 312 ms_handle_reset con 0x5637d87ae800 session 0x5637d79c9880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:41.966743+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160530432 unmapped: 26845184 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 313 ms_handle_reset con 0x5637d8574400 session 0x5637d5f64380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:42.966959+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 313 ms_handle_reset con 0x5637d8612400 session 0x5637d58628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2457494 data_alloc: 234881024 data_used: 24012859
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160677888 unmapped: 26697728 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:43.967172+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160677888 unmapped: 26697728 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 313 heartbeat osd_stat(store_statfs(0x4f6f9e000/0x0/0x4ffc00000, data 0x3b8aeeb/0x3d4c000, compress 0x0/0x0/0x0, omap 0x41d92, meta 0x4ece26e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 313 ms_handle_reset con 0x5637d87af000 session 0x5637d5862e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 313 handle_osd_map epochs [314,314], i have 313, src has [1,314]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:44.967934+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 314 ms_handle_reset con 0x5637d8612400 session 0x5637d525da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 314 ms_handle_reset con 0x5637d8613000 session 0x5637d592c8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160546816 unmapped: 26828800 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 314 handle_osd_map epochs [314,315], i have 314, src has [1,315]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 315 ms_handle_reset con 0x5637d87ae800 session 0x5637d5791340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:45.968081+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 315 ms_handle_reset con 0x5637d8612000 session 0x5637d78d3dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 315 ms_handle_reset con 0x5637d8612c00 session 0x5637d7e8f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 315 ms_handle_reset con 0x5637dc342800 session 0x5637d7e8fa40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160612352 unmapped: 26763264 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 315 handle_osd_map epochs [316,316], i have 315, src has [1,316]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 316 ms_handle_reset con 0x5637d8613800 session 0x5637d6333180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 316 ms_handle_reset con 0x5637d8574400 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:46.968324+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 316 heartbeat osd_stat(store_statfs(0x4f6f95000/0x0/0x4ffc00000, data 0x3b8ed62/0x3d55000, compress 0x0/0x0/0x0, omap 0x42104, meta 0x4ecdefc), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160620544 unmapped: 26755072 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.854267120s of 10.123731613s, submitted: 120
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 316 handle_osd_map epochs [317,317], i have 316, src has [1,317]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 317 ms_handle_reset con 0x5637d8612400 session 0x5637d6345a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 317 ms_handle_reset con 0x5637d8612000 session 0x5637d7e8f180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 317 ms_handle_reset con 0x5637d8574400 session 0x5637d5263500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:47.968483+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 317 ms_handle_reset con 0x5637d8613000 session 0x5637d57ac540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2474693 data_alloc: 234881024 data_used: 24013542
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160653312 unmapped: 26722304 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 317 handle_osd_map epochs [317,318], i have 317, src has [1,318]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:48.968608+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 ms_handle_reset con 0x5637d8612000 session 0x5637d78d7180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 ms_handle_reset con 0x5637d8612400 session 0x5637d8062380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 ms_handle_reset con 0x5637d8613800 session 0x5637d6332380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160677888 unmapped: 26697728 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f6f93000/0x0/0x4ffc00000, data 0x3b93777/0x3d57000, compress 0x0/0x0/0x0, omap 0x42c1f, meta 0x4ecd3e1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:49.968820+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 ms_handle_reset con 0x5637d8613c00 session 0x5637d5262e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160694272 unmapped: 26681344 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:50.969011+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 heartbeat osd_stat(store_statfs(0x4f6f93000/0x0/0x4ffc00000, data 0x3b93777/0x3d57000, compress 0x0/0x0/0x0, omap 0x42c1f, meta 0x4ecd3e1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 ms_handle_reset con 0x5637d7c72400 session 0x5637d6344c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 ms_handle_reset con 0x5637d8612000 session 0x5637d5e4a1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 ms_handle_reset con 0x5637d5775c00 session 0x5637d7e8fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160727040 unmapped: 26648576 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 318 handle_osd_map epochs [319,319], i have 318, src has [1,319]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 319 ms_handle_reset con 0x5637d8613000 session 0x5637d8010e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 319 ms_handle_reset con 0x5637d8612400 session 0x5637d651f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:51.969134+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154206208 unmapped: 33169408 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 319 ms_handle_reset con 0x5637dc342800 session 0x5637d8552000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 320 ms_handle_reset con 0x5637d8574400 session 0x5637d7e8e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:52.969301+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2331173 data_alloc: 234881024 data_used: 13635732
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154206208 unmapped: 33169408 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:53.969410+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154206208 unmapped: 33169408 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 320 heartbeat osd_stat(store_statfs(0x4f7e82000/0x0/0x4ffc00000, data 0x2c9ef67/0x2e66000, compress 0x0/0x0/0x0, omap 0x4281a, meta 0x4ecd7e6), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 321 ms_handle_reset con 0x5637d8613000 session 0x5637d58628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5775c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:54.969531+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 321 ms_handle_reset con 0x5637d5775c00 session 0x5637d89bd180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154230784 unmapped: 33144832 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:55.969701+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154230784 unmapped: 33144832 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 321 handle_osd_map epochs [321,322], i have 321, src has [1,322]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 322 ms_handle_reset con 0x5637d8574400 session 0x5637d5262c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:56.969842+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154279936 unmapped: 33095680 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.752992630s of 10.030400276s, submitted: 217
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 322 handle_osd_map epochs [322,323], i have 322, src has [1,323]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:57.969965+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 323 ms_handle_reset con 0x5637d8612400 session 0x5637d5e4a1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2336250 data_alloc: 234881024 data_used: 13627426
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154263552 unmapped: 33112064 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:58.970116+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 323 handle_osd_map epochs [324,324], i have 323, src has [1,324]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154263552 unmapped: 33112064 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:59.970272+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 324 handle_osd_map epochs [325,325], i have 324, src has [1,325]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 325 ms_handle_reset con 0x5637d8613000 session 0x5637d8d0c000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 325 ms_handle_reset con 0x5637d7c72400 session 0x5637d63eaa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 325 ms_handle_reset con 0x5637dc342800 session 0x5637d63defc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 325 heartbeat osd_stat(store_statfs(0x4f7e73000/0x0/0x4ffc00000, data 0x2cac0a8/0x2e75000, compress 0x0/0x0/0x0, omap 0x42f8b, meta 0x4ecd075), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 33062912 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:00.970418+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154312704 unmapped: 33062912 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 326 ms_handle_reset con 0x5637dc342800 session 0x5637d80621c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 326 ms_handle_reset con 0x5637d7c72400 session 0x5637d80636c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:01.970555+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155582464 unmapped: 31793152 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:02.970691+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 327 ms_handle_reset con 0x5637d8574400 session 0x5637d78d2c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2352601 data_alloc: 234881024 data_used: 15314880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155582464 unmapped: 31793152 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:03.970846+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 327 ms_handle_reset con 0x5637d8613000 session 0x5637d57aca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 327 ms_handle_reset con 0x5637d8612400 session 0x5637d8320fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 31784960 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:04.971098+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 31784960 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:05.971272+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 327 heartbeat osd_stat(store_statfs(0x4f7e70000/0x0/0x4ffc00000, data 0x2cb161d/0x2e7c000, compress 0x0/0x0/0x0, omap 0x4389a, meta 0x4ecc766), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 31784960 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:06.971482+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155590656 unmapped: 31784960 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.559655190s of 10.157499313s, submitted: 186
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:07.971605+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 328 ms_handle_reset con 0x5637d7c72400 session 0x5637d78d2e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2359541 data_alloc: 234881024 data_used: 15314880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155639808 unmapped: 31735808 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:08.971841+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155639808 unmapped: 31735808 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:09.972064+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 328 ms_handle_reset con 0x5637d8574400 session 0x5637d6361500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155648000 unmapped: 31727616 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 328 ms_handle_reset con 0x5637dc342800 session 0x5637d5862e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:10.972196+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 329 heartbeat osd_stat(store_statfs(0x4f7e6a000/0x0/0x4ffc00000, data 0x2cb31a8/0x2e82000, compress 0x0/0x0/0x0, omap 0x43a97, meta 0x4ecc569), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 329 ms_handle_reset con 0x5637d8612000 session 0x5637d7e8fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155648000 unmapped: 31727616 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:11.972310+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 329 handle_osd_map epochs [330,330], i have 329, src has [1,330]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 330 ms_handle_reset con 0x5637d8613c00 session 0x5637d63ea380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 330 ms_handle_reset con 0x5637d8613c00 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 330 ms_handle_reset con 0x5637d8613000 session 0x5637d55bb6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155672576 unmapped: 31703040 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 330 heartbeat osd_stat(store_statfs(0x4f7e5e000/0x0/0x4ffc00000, data 0x2cb6e37/0x2e8a000, compress 0x0/0x0/0x0, omap 0x43a97, meta 0x4ecc569), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 330 ms_handle_reset con 0x5637d8574400 session 0x5637d6361180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:12.972439+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 330 handle_osd_map epochs [331,331], i have 330, src has [1,331]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 331 ms_handle_reset con 0x5637d8612000 session 0x5637d5262a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2378332 data_alloc: 234881024 data_used: 15315677
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155738112 unmapped: 31637504 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:13.972611+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 331 handle_osd_map epochs [331,332], i have 331, src has [1,332]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 332 ms_handle_reset con 0x5637d7c72400 session 0x5637d5863880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155746304 unmapped: 31629312 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 332 ms_handle_reset con 0x5637dc342800 session 0x5637d57acfc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 332 ms_handle_reset con 0x5637d7c72400 session 0x5637d5790a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 332 ms_handle_reset con 0x5637d8574400 session 0x5637d8011a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:14.972782+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 332 ms_handle_reset con 0x5637d8612000 session 0x5637d57ac380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155746304 unmapped: 31629312 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:15.972968+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 332 heartbeat osd_stat(store_statfs(0x4f7e5a000/0x0/0x4ffc00000, data 0x2cba0be/0x2e8e000, compress 0x0/0x0/0x0, omap 0x43cfa, meta 0x4ecc306), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 333 ms_handle_reset con 0x5637d8613000 session 0x5637d592bdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 333 ms_handle_reset con 0x5637d7c72400 session 0x5637d8062540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 333 ms_handle_reset con 0x5637d8613000 session 0x5637d5f65340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 333 ms_handle_reset con 0x5637d8574400 session 0x5637d86196c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155787264 unmapped: 31588352 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:16.973147+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 333 ms_handle_reset con 0x5637d8612000 session 0x5637d55bbc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155795456 unmapped: 31580160 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc342800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.032488823s of 10.013016701s, submitted: 154
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:17.973313+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2383401 data_alloc: 234881024 data_used: 15316176
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155803648 unmapped: 31571968 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 333 handle_osd_map epochs [333,334], i have 333, src has [1,334]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 334 ms_handle_reset con 0x5637dc342800 session 0x5637d648d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:18.973451+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155811840 unmapped: 31563776 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 334 ms_handle_reset con 0x5637d7c72400 session 0x5637d8d0ce00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:19.973615+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155820032 unmapped: 31555584 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:20.973758+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155820032 unmapped: 31555584 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 334 handle_osd_map epochs [334,335], i have 334, src has [1,335]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:21.973907+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f7e58000/0x0/0x4ffc00000, data 0x2cbd94d/0x2e92000, compress 0x0/0x0/0x0, omap 0x4373d, meta 0x4ecc8c3), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156868608 unmapped: 30507008 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:22.974087+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2388121 data_alloc: 234881024 data_used: 15429742
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156868608 unmapped: 30507008 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:23.974267+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 335 heartbeat osd_stat(store_statfs(0x4f7e55000/0x0/0x4ffc00000, data 0x2cbf420/0x2e95000, compress 0x0/0x0/0x0, omap 0x43806, meta 0x4ecc7fa), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156868608 unmapped: 30507008 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:24.974456+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156868608 unmapped: 30507008 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:25.974604+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156868608 unmapped: 30507008 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:26.974757+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 336 heartbeat osd_stat(store_statfs(0x4f7e55000/0x0/0x4ffc00000, data 0x2cbf420/0x2e95000, compress 0x0/0x0/0x0, omap 0x43806, meta 0x4ecc7fa), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156876800 unmapped: 30498816 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:27.974903+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.139104843s of 10.117584229s, submitted: 60
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 336 ms_handle_reset con 0x5637d8574400 session 0x5637d57376c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 336 ms_handle_reset con 0x5637d8612000 session 0x5637d7e8ee00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2390175 data_alloc: 234881024 data_used: 15429742
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156876800 unmapped: 30498816 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:28.975105+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156876800 unmapped: 30498816 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 336 ms_handle_reset con 0x5637d7c73000 session 0x5637d5229dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8517400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 336 ms_handle_reset con 0x5637d8517400 session 0x5637d7e8e8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:29.975255+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 336 handle_osd_map epochs [336,337], i have 336, src has [1,337]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 337 ms_handle_reset con 0x5637d8613000 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 337 heartbeat osd_stat(store_statfs(0x4f7e54000/0x0/0x4ffc00000, data 0x2cc0e9f/0x2e98000, compress 0x0/0x0/0x0, omap 0x43bfb, meta 0x4ecc405), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157483008 unmapped: 29892608 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 337 ms_handle_reset con 0x5637d7c72400 session 0x5637d592c8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:30.975371+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 337 handle_osd_map epochs [338,338], i have 337, src has [1,338]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 338 ms_handle_reset con 0x5637d7c73000 session 0x5637d78d3dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157491200 unmapped: 29884416 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:31.975560+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 338 ms_handle_reset con 0x5637d8574400 session 0x5637d592ac40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157556736 unmapped: 29818880 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 338 ms_handle_reset con 0x5637d8612000 session 0x5637d57acc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:32.975700+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2410439 data_alloc: 234881024 data_used: 16966254
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 29810688 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 338 ms_handle_reset con 0x5637d7c72400 session 0x5637d8062700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:33.975852+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 29810688 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:34.976123+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157564928 unmapped: 29810688 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:35.976255+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 338 heartbeat osd_stat(store_statfs(0x4f7e4b000/0x0/0x4ffc00000, data 0x2d3369d/0x2ea1000, compress 0x0/0x0/0x0, omap 0x44203, meta 0x4ecbdfd), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 339 ms_handle_reset con 0x5637d7c73000 session 0x5637d6096700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157573120 unmapped: 29802496 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:36.976536+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 339 handle_osd_map epochs [339,340], i have 339, src has [1,340]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 340 heartbeat osd_stat(store_statfs(0x4f7e46000/0x0/0x4ffc00000, data 0x2d35255/0x2ea4000, compress 0x0/0x0/0x0, omap 0x44203, meta 0x4ecbdfd), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 340 ms_handle_reset con 0x5637d8574400 session 0x5637d55bb6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 340 ms_handle_reset con 0x5637d8613000 session 0x5637d592d340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159096832 unmapped: 28278784 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:37.976787+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 340 ms_handle_reset con 0x5637d576ec00 session 0x5637d592a700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2418132 data_alloc: 234881024 data_used: 17634503
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158982144 unmapped: 28393472 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:38.976991+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.613089561s of 11.027629852s, submitted: 52
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158982144 unmapped: 28393472 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:39.977231+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158982144 unmapped: 28393472 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:40.977412+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 340 ms_handle_reset con 0x5637d7c72400 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158982144 unmapped: 28393472 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 340 handle_osd_map epochs [340,341], i have 340, src has [1,341]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 heartbeat osd_stat(store_statfs(0x4f7e43000/0x0/0x4ffc00000, data 0x2d36cd4/0x2ea7000, compress 0x0/0x0/0x0, omap 0x4452f, meta 0x4ecbad1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d576ec00 session 0x5637d7e8e540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:41.977550+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 heartbeat osd_stat(store_statfs(0x4f7e40000/0x0/0x4ffc00000, data 0x2d388c4/0x2eaa000, compress 0x0/0x0/0x0, omap 0x445b5, meta 0x4ecba4b), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158998528 unmapped: 28377088 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:42.977757+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d7c73000 session 0x5637d78d2c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2419638 data_alloc: 234881024 data_used: 17634585
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd1f000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d5eca800 session 0x5637d5f421c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 158998528 unmapped: 28377088 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637dbd1f000 session 0x5637d5262e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:43.977948+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d576ec00 session 0x5637d8320700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d5eca800 session 0x5637d6332540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d7c72400 session 0x5637d6344c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d7c73000 session 0x5637d79c9a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 28164096 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:44.978085+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159211520 unmapped: 28164096 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:45.978249+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637dbd21400 session 0x5637d78d2e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 heartbeat osd_stat(store_statfs(0x4f7b0f000/0x0/0x4ffc00000, data 0x306b8c4/0x31dd000, compress 0x0/0x0/0x0, omap 0x447cd, meta 0x4ecb833), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d8613c00 session 0x5637d6361a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d87ae800 session 0x5637d648c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 28123136 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 ms_handle_reset con 0x5637d576ec00 session 0x5637d5263500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:46.978378+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 342 heartbeat osd_stat(store_statfs(0x4f7b0f000/0x0/0x4ffc00000, data 0x306b8c4/0x31dd000, compress 0x0/0x0/0x0, omap 0x447cd, meta 0x4ecb833), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 342 ms_handle_reset con 0x5637dbd21400 session 0x5637d8618fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 342 ms_handle_reset con 0x5637d5eca800 session 0x5637d6361dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 342 ms_handle_reset con 0x5637d7c72400 session 0x5637d57aca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 342 ms_handle_reset con 0x5637d576ec00 session 0x5637d648ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 28090368 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:47.978534+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 343 ms_handle_reset con 0x5637d8613c00 session 0x5637d5263a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 343 ms_handle_reset con 0x5637d87ae800 session 0x5637d55ba8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2436414 data_alloc: 234881024 data_used: 17884457
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159293440 unmapped: 28082176 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 343 ms_handle_reset con 0x5637dbd21400 session 0x5637d592dc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:48.978676+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 343 ms_handle_reset con 0x5637dbd21400 session 0x5637d58628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.605700493s of 10.378322601s, submitted: 122
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159301632 unmapped: 28073984 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:49.978863+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 344 ms_handle_reset con 0x5637d576ec00 session 0x5637d5f43a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 344 ms_handle_reset con 0x5637d7c72400 session 0x5637d89bd180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 344 ms_handle_reset con 0x5637d8613c00 session 0x5637d5262a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 344 ms_handle_reset con 0x5637d87ae800 session 0x5637d8619500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159367168 unmapped: 28008448 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 344 ms_handle_reset con 0x5637d87ae800 session 0x5637d63321c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:50.979016+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159367168 unmapped: 28008448 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 344 ms_handle_reset con 0x5637d576ec00 session 0x5637d592c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:51.979270+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 344 handle_osd_map epochs [345,345], i have 344, src has [1,345]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159375360 unmapped: 28000256 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 345 heartbeat osd_stat(store_statfs(0x4f7e37000/0x0/0x4ffc00000, data 0x2d3f67d/0x2eb3000, compress 0x0/0x0/0x0, omap 0x45613, meta 0x4eca9ed), peers [0,2] op hist [0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:52.979448+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 345 ms_handle_reset con 0x5637dbd21400 session 0x5637d63defc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2438956 data_alloc: 234881024 data_used: 17884830
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159375360 unmapped: 28000256 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:53.979637+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 345 heartbeat osd_stat(store_statfs(0x4f7e37000/0x0/0x4ffc00000, data 0x2d3f67d/0x2eb3000, compress 0x0/0x0/0x0, omap 0x45613, meta 0x4eca9ed), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159375360 unmapped: 28000256 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 346 ms_handle_reset con 0x5637d7c72400 session 0x5637d8320fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 346 ms_handle_reset con 0x5637d8613c00 session 0x5637d5263500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:54.979763+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159383552 unmapped: 27992064 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:55.979925+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159383552 unmapped: 27992064 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:56.980169+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:03 compute-0 ceph-mgr[75360]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:36:03 compute-0 ceph-437a9f04-06b7-56e3-8a4b-f52a1199dd32-mgr-compute-0-gsxkyu[75356]: 2025-12-13T04:36:03.172+0000 7f4cb924f640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 346 handle_osd_map epochs [347,347], i have 347, src has [1,347]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 347 ms_handle_reset con 0x5637d576ec00 session 0x5637d648c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c72400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 347 ms_handle_reset con 0x5637d7c73000 session 0x5637d6333c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159449088 unmapped: 27926528 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 347 ms_handle_reset con 0x5637d8574000 session 0x5637d636f6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 347 ms_handle_reset con 0x5637d57d0c00 session 0x5637d6345dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:57.980323+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 347 handle_osd_map epochs [348,348], i have 347, src has [1,348]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 ms_handle_reset con 0x5637d7c72400 session 0x5637d5262e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2441206 data_alloc: 234881024 data_used: 17635136
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f7e30000/0x0/0x4ffc00000, data 0x2cd3e6d/0x2eb8000, compress 0x0/0x0/0x0, omap 0x45c01, meta 0x4eca3ff), peers [0,2] op hist [1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 ms_handle_reset con 0x5637d8613c00 session 0x5637d6333180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159391744 unmapped: 27983872 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 ms_handle_reset con 0x5637d576ec00 session 0x5637d5f42c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:58.980512+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 ms_handle_reset con 0x5637d57d0c00 session 0x5637d5e4a8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 ms_handle_reset con 0x5637d7c73000 session 0x5637d5790a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.376316547s of 10.167181969s, submitted: 146
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 ms_handle_reset con 0x5637d8574000 session 0x5637d5863180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159416320 unmapped: 27959296 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:59.980703+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159416320 unmapped: 27959296 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:00.980948+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 heartbeat osd_stat(store_statfs(0x4f7e54000/0x0/0x4ffc00000, data 0x2cb19cb/0x2e96000, compress 0x0/0x0/0x0, omap 0x45cb4, meta 0x4eca34c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159416320 unmapped: 27959296 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:01.981152+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 348 handle_osd_map epochs [348,349], i have 348, src has [1,349]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 349 ms_handle_reset con 0x5637d8574000 session 0x5637d55ba000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159424512 unmapped: 27951104 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:02.981323+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2436603 data_alloc: 234881024 data_used: 17530526
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159424512 unmapped: 27951104 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:03.981569+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 159424512 unmapped: 27951104 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 349 ms_handle_reset con 0x5637d576ec00 session 0x5637d80636c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:04.981722+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 349 ms_handle_reset con 0x5637d7c73000 session 0x5637d8552000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 34226176 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 350 ms_handle_reset con 0x5637d57d0c00 session 0x5637d89bd880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:05.981861+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153165824 unmapped: 34209792 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:06.982102+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 350 ms_handle_reset con 0x5637d8613c00 session 0x5637d7cff500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 350 heartbeat osd_stat(store_statfs(0x4f9124000/0x0/0x4ffc00000, data 0x19de068/0x1bc6000, compress 0x0/0x0/0x0, omap 0x46453, meta 0x4ec9bad), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153174016 unmapped: 34201600 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:07.982485+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2269065 data_alloc: 218103808 data_used: 7415015
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 350 heartbeat osd_stat(store_statfs(0x4f9127000/0x0/0x4ffc00000, data 0x19de006/0x1bc5000, compress 0x0/0x0/0x0, omap 0x465a2, meta 0x4ec9a5e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153174016 unmapped: 34201600 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:08.982650+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 350 handle_osd_map epochs [350,351], i have 350, src has [1,351]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 350 handle_osd_map epochs [351,351], i have 351, src has [1,351]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154222592 unmapped: 33153024 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.215568542s of 10.288732529s, submitted: 44
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:09.982863+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 ms_handle_reset con 0x5637d8613c00 session 0x5637d80621c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 ms_handle_reset con 0x5637d576ec00 session 0x5637d6097180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154222592 unmapped: 33153024 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 heartbeat osd_stat(store_statfs(0x4f7f82000/0x0/0x4ffc00000, data 0x19dfbf6/0x1bc8000, compress 0x0/0x0/0x0, omap 0x4666b, meta 0x6069995), peers [0,2] op hist [0,0,0,2])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:10.983099+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154230784 unmapped: 33144832 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:11.983280+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 ms_handle_reset con 0x5637d57d0c00 session 0x5637d78d3180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 ms_handle_reset con 0x5637d7c73000 session 0x5637d63336c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155262976 unmapped: 32112640 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:12.983469+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2273446 data_alloc: 218103808 data_used: 7415015
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8574000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 ms_handle_reset con 0x5637d8574000 session 0x5637d6345a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155287552 unmapped: 32088064 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:13.983624+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155303936 unmapped: 32071680 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:14.983858+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 351 handle_osd_map epochs [351,352], i have 352, src has [1,352]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f7f82000/0x0/0x4ffc00000, data 0x19dfc68/0x1bca000, compress 0x0/0x0/0x0, omap 0x46a8d, meta 0x6069573), peers [0,2] op hist [1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 352 ms_handle_reset con 0x5637d576ec00 session 0x5637d7cfee00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 352 heartbeat osd_stat(store_statfs(0x4f7f82000/0x0/0x4ffc00000, data 0x19dfc68/0x1bca000, compress 0x0/0x0/0x0, omap 0x46a8d, meta 0x6069573), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155344896 unmapped: 32030720 heap: 187375616 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 352 ms_handle_reset con 0x5637d57d0c00 session 0x5637d55ba000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:15.983991+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 ms_handle_reset con 0x5637d7c73000 session 0x5637d5262fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 ms_handle_reset con 0x5637d8613c00 session 0x5637d8619500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 ms_handle_reset con 0x5637d87ae800 session 0x5637d89bd180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 ms_handle_reset con 0x5637d87ae800 session 0x5637d58628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 ms_handle_reset con 0x5637d576ec00 session 0x5637d5f64380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 169533440 unmapped: 26771456 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 ms_handle_reset con 0x5637d57d0c00 session 0x5637d55ba8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:16.984124+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 ms_handle_reset con 0x5637d7c73000 session 0x5637d8063500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d8613c00 session 0x5637d7e8e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d576ec00 session 0x5637d7e8fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d57d0c00 session 0x5637d651f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d7c73000 session 0x5637d5262380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d87ae800 session 0x5637d78d6e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637dbd21400 session 0x5637d78d2000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 40189952 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:17.984316+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637dbd21400 session 0x5637d6360fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2377816 data_alloc: 218103808 data_used: 7415015
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 40189952 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:18.984548+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 heartbeat osd_stat(store_statfs(0x4f7093000/0x0/0x4ffc00000, data 0x28c5f0d/0x2ab5000, compress 0x0/0x0/0x0, omap 0x476bc, meta 0x6068944), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d576ec00 session 0x5637d5863dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 40189952 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:19.984797+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156114944 unmapped: 40189952 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:20.984970+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.750973701s of 11.233329773s, submitted: 139
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d57d0c00 session 0x5637d63ea8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 ms_handle_reset con 0x5637d7c73000 session 0x5637d525da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156262400 unmapped: 40042496 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:21.985126+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 354 handle_osd_map epochs [354,355], i have 354, src has [1,355]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d58cfc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 355 ms_handle_reset con 0x5637d58cfc00 session 0x5637d57908c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157966336 unmapped: 38338560 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:22.985234+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2473852 data_alloc: 234881024 data_used: 21719385
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 355 ms_handle_reset con 0x5637d87ae800 session 0x5637d8062540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 355 ms_handle_reset con 0x5637d8b4a400 session 0x5637d5228000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 161292288 unmapped: 35012608 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:23.985384+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 355 handle_osd_map epochs [356,356], i have 355, src has [1,356]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 heartbeat osd_stat(store_statfs(0x4f706b000/0x0/0x4ffc00000, data 0x28eba21/0x2adf000, compress 0x0/0x0/0x0, omap 0x47aa9, meta 0x6068557), peers [0,2] op hist [1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d576ec00 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d57d0c00 session 0x5637d5f64000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d7c73000 session 0x5637d8552380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d576ec00 session 0x5637d592da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d57d0c00 session 0x5637d55bafc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154083328 unmapped: 42221568 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:24.985565+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d87ae800 session 0x5637d8010700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d8b4a400 session 0x5637d6361a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc1c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637dbd21400 session 0x5637d8062000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637dbfc1c00 session 0x5637d7e8e540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 43163648 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:25.985681+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d57d0c00 session 0x5637d592d340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 ms_handle_reset con 0x5637d87ae800 session 0x5637d592a700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 357 ms_handle_reset con 0x5637d576ec00 session 0x5637d85536c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 357 ms_handle_reset con 0x5637dbfc0000 session 0x5637d636e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153174016 unmapped: 43130880 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:26.985813+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 357 ms_handle_reset con 0x5637dbfc0000 session 0x5637d8552c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153182208 unmapped: 43122688 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:27.986007+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 357 handle_osd_map epochs [357,358], i have 357, src has [1,358]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 ms_handle_reset con 0x5637d8b4a400 session 0x5637d5263c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 ms_handle_reset con 0x5637d576ec00 session 0x5637d6360fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 ms_handle_reset con 0x5637d87ae800 session 0x5637d58628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 ms_handle_reset con 0x5637d57d0c00 session 0x5637d5566700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2330327 data_alloc: 218103808 data_used: 7423074
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:28.986241+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 43073536 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 ms_handle_reset con 0x5637d57d0c00 session 0x5637d5567500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ae800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 ms_handle_reset con 0x5637d576ec00 session 0x5637d55bafc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:29.986475+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153247744 unmapped: 43057152 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 heartbeat osd_stat(store_statfs(0x4f7f65000/0x0/0x4ffc00000, data 0x19ebdea/0x1be5000, compress 0x0/0x0/0x0, omap 0x48eff, meta 0x6067101), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 358 handle_osd_map epochs [359,359], i have 358, src has [1,359]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 359 ms_handle_reset con 0x5637d87ae800 session 0x5637d6360e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 359 ms_handle_reset con 0x5637d8b4a400 session 0x5637d6344c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 359 ms_handle_reset con 0x5637dbfc0000 session 0x5637d525da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:30.986704+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153280512 unmapped: 43024384 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 359 ms_handle_reset con 0x5637dbfc0000 session 0x5637d89bd180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 359 ms_handle_reset con 0x5637dbd20400 session 0x5637d856b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:31.986884+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 42999808 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:32.987081+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153305088 unmapped: 42999808 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.523820877s of 12.328663826s, submitted: 267
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 360 ms_handle_reset con 0x5637d8515400 session 0x5637d8618540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f7f66000/0x0/0x4ffc00000, data 0x19edea3/0x1be6000, compress 0x0/0x0/0x0, omap 0x49036, meta 0x6066fca), peers [0,2] op hist [1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 360 ms_handle_reset con 0x5637d8e64400 session 0x5637d55ba1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2334810 data_alloc: 218103808 data_used: 7428674
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 360 ms_handle_reset con 0x5637d78c2400 session 0x5637d525d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:33.987244+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 42991616 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 360 heartbeat osd_stat(store_statfs(0x4f7f62000/0x0/0x4ffc00000, data 0x19efa15/0x1be8000, compress 0x0/0x0/0x0, omap 0x490ba, meta 0x6066f46), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:34.987468+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 42991616 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 361 ms_handle_reset con 0x5637d6511800 session 0x5637d57901c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 361 ms_handle_reset con 0x5637d8515400 session 0x5637d8320700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:35.987638+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 42983424 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 361 handle_osd_map epochs [361,362], i have 361, src has [1,362]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 361 handle_osd_map epochs [362,362], i have 362, src has [1,362]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 362 ms_handle_reset con 0x5637d8e64400 session 0x5637d5737a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:36.987763+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 362 ms_handle_reset con 0x5637d78c2400 session 0x5637d651e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153321472 unmapped: 42983424 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 362 ms_handle_reset con 0x5637dbd20400 session 0x5637d648d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 362 handle_osd_map epochs [363,363], i have 362, src has [1,363]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 ms_handle_reset con 0x5637dbd20400 session 0x5637d57376c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 ms_handle_reset con 0x5637d6511800 session 0x5637d5567340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 ms_handle_reset con 0x5637d78c2400 session 0x5637d842d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:37.987917+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153337856 unmapped: 42967040 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2339449 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:38.988136+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153378816 unmapped: 42926080 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 ms_handle_reset con 0x5637d8515400 session 0x5637d5f64380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 heartbeat osd_stat(store_statfs(0x4f7f5d000/0x0/0x4ffc00000, data 0x19f4a37/0x1bed000, compress 0x0/0x0/0x0, omap 0x49e77, meta 0x6066189), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:39.988287+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153378816 unmapped: 42926080 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 ms_handle_reset con 0x5637dbfc0000 session 0x5637d7e8fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 ms_handle_reset con 0x5637dbfc0000 session 0x5637d5262380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:40.988424+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153378816 unmapped: 42926080 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 364 ms_handle_reset con 0x5637d8e64400 session 0x5637d6332380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 364 heartbeat osd_stat(store_statfs(0x4f7f60000/0x0/0x4ffc00000, data 0x19f49d5/0x1bec000, compress 0x0/0x0/0x0, omap 0x49efb, meta 0x6066105), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 364 ms_handle_reset con 0x5637d6511800 session 0x5637d648d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:41.988570+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153378816 unmapped: 42926080 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 364 handle_osd_map epochs [364,365], i have 364, src has [1,365]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f7f5b000/0x0/0x4ffc00000, data 0x19f65a9/0x1bef000, compress 0x0/0x0/0x0, omap 0x49efb, meta 0x6066105), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:42.988800+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 ms_handle_reset con 0x5637d78c2400 session 0x5637d525da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f7f56000/0x0/0x4ffc00000, data 0x19f8209/0x1bf2000, compress 0x0/0x0/0x0, omap 0x4a003, meta 0x6065ffd), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2344864 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:43.988955+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.029296875s of 11.009564400s, submitted: 158
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 ms_handle_reset con 0x5637d8515400 session 0x5637d5f64000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 ms_handle_reset con 0x5637d8515400 session 0x5637d5567500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:44.989153+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 ms_handle_reset con 0x5637d6511800 session 0x5637d58628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 ms_handle_reset con 0x5637d78c2400 session 0x5637d8010700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:45.989261+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 ms_handle_reset con 0x5637d8e64400 session 0x5637d83216c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 heartbeat osd_stat(store_statfs(0x4f7f59000/0x0/0x4ffc00000, data 0x19f826b/0x1bf3000, compress 0x0/0x0/0x0, omap 0x4a18f, meta 0x6065e71), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 ms_handle_reset con 0x5637dbfc0000 session 0x5637d57908c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:46.989405+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 365 handle_osd_map epochs [365,366], i have 366, src has [1,366]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637dbfc0000 session 0x5637d648d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d6511800 session 0x5637d592ac40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:47.989546+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d78c2400 session 0x5637d592c8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2353824 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:48.989724+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d8515400 session 0x5637d55bb180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d8e64400 session 0x5637d78d3180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:49.989983+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d8e64400 session 0x5637d7e8e540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 heartbeat osd_stat(store_statfs(0x4f7f52000/0x0/0x4ffc00000, data 0x19f9d5c/0x1bf8000, compress 0x0/0x0/0x0, omap 0x4a95c, meta 0x60656a4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:50.990142+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153387008 unmapped: 42917888 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d6511800 session 0x5637d651f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d8515400 session 0x5637d8618540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d78c2400 session 0x5637d5567340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:51.990275+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153395200 unmapped: 42909696 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637dbfc0000 session 0x5637d89bdc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637dbd20400 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637dbfc0000 session 0x5637d5863dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d6511800 session 0x5637d5e4a380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:52.990536+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 42844160 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 heartbeat osd_stat(store_statfs(0x4f7f56000/0x0/0x4ffc00000, data 0x19f9cea/0x1bf6000, compress 0x0/0x0/0x0, omap 0x4a253, meta 0x6065dad), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2352348 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d78c2400 session 0x5637d8010c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:53.990693+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 42844160 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.730365753s of 10.129381180s, submitted: 80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d8e64400 session 0x5637d6344fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d8515400 session 0x5637d7cff500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:54.990882+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 42844160 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d8e64400 session 0x5637d7cfe1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d6511800 session 0x5637d5e4bdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637d78c2400 session 0x5637d6333880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637dbd20400 session 0x5637d6344540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 ms_handle_reset con 0x5637dbd20400 session 0x5637d636f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:55.991080+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 heartbeat osd_stat(store_statfs(0x4f7f57000/0x0/0x4ffc00000, data 0x19f9c88/0x1bf5000, compress 0x0/0x0/0x0, omap 0x4a6e8, meta 0x6065918), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 42844160 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:56.991297+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 42844160 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637d6511800 session 0x5637d8d0d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:57.991441+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 42844160 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2357767 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:58.991618+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153460736 unmapped: 42844160 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:59.991802+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637d8515400 session 0x5637d5863dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637d78c2400 session 0x5637d525da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153485312 unmapped: 42819584 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:00.992115+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153485312 unmapped: 42819584 heap: 196304896 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:01.992341+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f5f53000/0x0/0x4ffc00000, data 0x39fb834/0x3bf9000, compress 0x0/0x0/0x0, omap 0x4ad9a, meta 0x6065266), peers [0,2] op hist [0,0,0,0,0,0,1,0,1,0,12])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 212205568 unmapped: 34488320 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:02.992512+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 212230144 unmapped: 34463744 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2710103 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:03.992648+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 88989696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.066921234s of 10.250916481s, submitted: 57
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:04.992792+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 88989696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:05.992982+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 88989696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f3353000/0x0/0x4ffc00000, data 0x65fb834/0x67f9000, compress 0x0/0x0/0x0, omap 0x4ad9a, meta 0x6065266), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:06.993188+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 88989696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:07.993346+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 88989696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:08.993500+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2809171 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 heartbeat osd_stat(store_statfs(0x4f2b53000/0x0/0x4ffc00000, data 0x6dfb834/0x6ff9000, compress 0x0/0x0/0x0, omap 0x4ad9a, meta 0x6065266), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 157704192 unmapped: 88989696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:09.993685+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 166191104 unmapped: 80502784 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:10.993891+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 153608192 unmapped: 93085696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:11.994089+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 162054144 unmapped: 84639744 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:12.994251+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 162086912 unmapped: 84606976 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57e6000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637d57e6000 session 0x5637d592ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:13.994383+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3732119 data_alloc: 218103808 data_used: 7430854
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 166354944 unmapped: 80338944 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 heartbeat osd_stat(store_statfs(0x4e7f53000/0x0/0x4ffc00000, data 0x119fb834/0x11bf9000, compress 0x0/0x0/0x0, omap 0x4ae1e, meta 0x60651e2), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,3])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:14.994567+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 3.701840401s of 10.345557213s, submitted: 48
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 166354944 unmapped: 80338944 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:15.994696+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 166559744 unmapped: 80134144 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637d78c2400 session 0x5637d8321c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 heartbeat osd_stat(store_statfs(0x4e5753000/0x0/0x4ffc00000, data 0x141fb834/0x143f9000, compress 0x0/0x0/0x0, omap 0x4ae1e, meta 0x60651e2), peers [0,2] op hist [0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:16.994895+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637dbfc0000 session 0x5637d5567340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637d8515400 session 0x5637d78d3c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 ms_handle_reset con 0x5637d8e64400 session 0x5637d8619340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154116096 unmapped: 92577792 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 368 ms_handle_reset con 0x5637dbd20400 session 0x5637d592c000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 368 ms_handle_reset con 0x5637d78c2400 session 0x5637d57ac1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 368 ms_handle_reset con 0x5637dbd20400 session 0x5637d648c1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:17.995135+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 92545024 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:18.995349+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4191938 data_alloc: 218103808 data_used: 7430870
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 92545024 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:19.995568+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 ms_handle_reset con 0x5637d8515400 session 0x5637d5263340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154271744 unmapped: 92422144 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 ms_handle_reset con 0x5637d6511800 session 0x5637d6360540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 ms_handle_reset con 0x5637d8e64400 session 0x5637d842d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 ms_handle_reset con 0x5637d8e64400 session 0x5637d57addc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 heartbeat osd_stat(store_statfs(0x4e2b4c000/0x0/0x4ffc00000, data 0x16dfd921/0x16ffe000, compress 0x0/0x0/0x0, omap 0x4aac4, meta 0x606553c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:20.995754+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 heartbeat osd_stat(store_statfs(0x4f7f4b000/0x0/0x4ffc00000, data 0x19fefbe/0x1bff000, compress 0x0/0x0/0x0, omap 0x4aac4, meta 0x606553c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 154501120 unmapped: 92192768 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 ms_handle_reset con 0x5637d6511800 session 0x5637d8321180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:21.995958+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 370 ms_handle_reset con 0x5637d78c2400 session 0x5637d6333340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 155582464 unmapped: 91111424 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 370 ms_handle_reset con 0x5637dbd20400 session 0x5637d89bd340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:22.996225+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160858112 unmapped: 85835776 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:23.996417+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2726985 data_alloc: 218103808 data_used: 7431565
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 173473792 unmapped: 73220096 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5774c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:24.996970+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 370 ms_handle_reset con 0x5637d5774c00 session 0x5637d58628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.747241497s of 10.028290749s, submitted: 177
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 370 ms_handle_reset con 0x5637d6511800 session 0x5637d80621c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 156852224 unmapped: 89841664 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 370 heartbeat osd_stat(store_statfs(0x4f134a000/0x0/0x4ffc00000, data 0x8600bae/0x8802000, compress 0x0/0x0/0x0, omap 0x4ac94, meta 0x606536c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:25.997416+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 165257216 unmapped: 81436672 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 370 ms_handle_reset con 0x5637d78c2400 session 0x5637d52636c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:26.997925+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 167477248 unmapped: 79216640 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0xbe00c10/0xc003000, compress 0x0/0x0/0x0, omap 0x4a894, meta 0x606576c), peers [0,2] op hist [0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:27.998703+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 175939584 unmapped: 70754304 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:28.999348+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 ms_handle_reset con 0x5637d8e64400 session 0x5637d5566000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3808340 data_alloc: 218103808 data_used: 6907375
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 171827200 unmapped: 74866688 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 ms_handle_reset con 0x5637dbd20400 session 0x5637d57ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:29.999736+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 167747584 unmapped: 78946304 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:30.999951+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 ms_handle_reset con 0x5637d51fb000 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 heartbeat osd_stat(store_statfs(0x4e2585000/0x0/0x4ffc00000, data 0x173c462d/0x175c7000, compress 0x0/0x0/0x0, omap 0x4a55a, meta 0x6065aa6), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 164315136 unmapped: 82378752 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 ms_handle_reset con 0x5637d6511800 session 0x5637d5f64fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 ms_handle_reset con 0x5637dbfc0000 session 0x5637d592aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 ms_handle_reset con 0x5637d8515400 session 0x5637d8619dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:32.000211+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160071680 unmapped: 86622208 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 371 handle_osd_map epochs [372,372], i have 371, src has [1,372]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:33.000366+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 372 ms_handle_reset con 0x5637d78c2400 session 0x5637d63eb340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160137216 unmapped: 86556672 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:34.000683+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 372 heartbeat osd_stat(store_statfs(0x4e0581000/0x0/0x4ffc00000, data 0x193c61bb/0x195c9000, compress 0x0/0x0/0x0, omap 0x49e31, meta 0x60661cf), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 372 ms_handle_reset con 0x5637d8e64400 session 0x5637d55bba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4520499 data_alloc: 218103808 data_used: 6907179
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160137216 unmapped: 86556672 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:35.001536+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.470745564s of 10.081172943s, submitted: 193
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160407552 unmapped: 86286336 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 372 heartbeat osd_stat(store_statfs(0x4e0580000/0x0/0x4ffc00000, data 0x193c61cb/0x195ca000, compress 0x0/0x0/0x0, omap 0x4a48c, meta 0x6065b74), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:36.001755+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d6511800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 372 ms_handle_reset con 0x5637d78c2400 session 0x5637d651e700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160407552 unmapped: 86286336 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:37.002111+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 372 ms_handle_reset con 0x5637d8e64400 session 0x5637d8552380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 373 ms_handle_reset con 0x5637d8515400 session 0x5637d63ea8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160448512 unmapped: 86245376 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637dbfc0000 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637d6511800 session 0x5637d651f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:38.002430+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160473088 unmapped: 86220800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c2400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:39.002627+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4534497 data_alloc: 218103808 data_used: 6907796
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 heartbeat osd_stat(store_statfs(0x4e0537000/0x0/0x4ffc00000, data 0x19409ebb/0x19613000, compress 0x0/0x0/0x0, omap 0x4a7fb, meta 0x6065805), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160473088 unmapped: 86220800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:40.002914+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 160473088 unmapped: 86220800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637d8515400 session 0x5637d8619500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637d8e64400 session 0x5637d57ada40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637dbfc0000 session 0x5637d8552a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637dbd20400 session 0x5637d651e380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d78c3c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:41.003096+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637d78c3c00 session 0x5637d8010c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637d8515400 session 0x5637d55ba000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637d8e64400 session 0x5637d57368c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637dbd20400 session 0x5637d5f64fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 ms_handle_reset con 0x5637dbfc0000 session 0x5637d5863dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 161652736 unmapped: 85041152 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 374 handle_osd_map epochs [375,375], i have 374, src has [1,375]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 375 ms_handle_reset con 0x5637d78c2400 session 0x5637d63456c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:42.003218+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 85024768 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:43.003443+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 161669120 unmapped: 85024768 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 375 handle_osd_map epochs [376,376], i have 375, src has [1,376]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 376 ms_handle_reset con 0x5637d8515400 session 0x5637d5e4a380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 376 ms_handle_reset con 0x5637dbd20400 session 0x5637d651e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 376 ms_handle_reset con 0x5637d8e64400 session 0x5637d6333880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:44.003657+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4595562 data_alloc: 218103808 data_used: 6911892
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 376 ms_handle_reset con 0x5637dbfc0000 session 0x5637d5862540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 161701888 unmapped: 84992000 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 376 handle_osd_map epochs [376,377], i have 376, src has [1,377]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 heartbeat osd_stat(store_statfs(0x4dfcc9000/0x0/0x4ffc00000, data 0x19c74603/0x19e81000, compress 0x0/0x0/0x0, omap 0x4a79b, meta 0x6065865), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8612800 session 0x5637d8552700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637dc37e000 session 0x5637d8619180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:45.003934+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8515400 session 0x5637d8d0ce00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8e64400 session 0x5637d5229dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 161710080 unmapped: 84983808 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:46.004083+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.546490669s of 10.740095139s, submitted: 56
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4a800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 165920768 unmapped: 80773120 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 heartbeat osd_stat(store_statfs(0x4dfcc2000/0x0/0x4ffc00000, data 0x19c766a1/0x19e86000, compress 0x0/0x0/0x0, omap 0x4a79b, meta 0x6065865), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,3])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:47.004245+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 165535744 unmapped: 81158144 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:48.004384+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 167550976 unmapped: 79142912 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:49.004546+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5243022 data_alloc: 234881024 data_used: 14584301
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 171974656 unmapped: 74719232 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:50.004770+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 heartbeat osd_stat(store_statfs(0x4d84c6000/0x0/0x4ffc00000, data 0x214766a1/0x21686000, compress 0x0/0x0/0x0, omap 0x4a79b, meta 0x6065865), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 172179456 unmapped: 74514432 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:51.004923+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4bc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8b4bc00 session 0x5637d89bcc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5771000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d5771000 session 0x5637d52621c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8515400 session 0x5637d63eae00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4bc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8b4bc00 session 0x5637d5f436c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 172548096 unmapped: 74145792 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:52.005195+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 164208640 unmapped: 82485248 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 heartbeat osd_stat(store_statfs(0x4d40c6000/0x0/0x4ffc00000, data 0x258766a1/0x25a86000, compress 0x0/0x0/0x0, omap 0x4a79b, meta 0x6065865), peers [0,2] op hist [0,0,0,0,0,1,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:53.005361+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8e64400 session 0x5637d5e4bc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 173031424 unmapped: 73662464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637dc37e000 session 0x5637d63ea8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:54.008175+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5972843 data_alloc: 234881024 data_used: 14584301
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 heartbeat osd_stat(store_statfs(0x4d08b5000/0x0/0x4ffc00000, data 0x29086703/0x29297000, compress 0x0/0x0/0x0, omap 0x4a79b, meta 0x6065865), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 169017344 unmapped: 77676544 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:55.008324+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 173334528 unmapped: 73359360 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:56.008462+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.391058922s of 10.019022942s, submitted: 98
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 165298176 unmapped: 81395712 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:57.008630+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 165404672 unmapped: 81289216 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:58.008778+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 169664512 unmapped: 77029376 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:59.008875+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 heartbeat osd_stat(store_statfs(0x4cacb5000/0x0/0x4ffc00000, data 0x2ec86703/0x2ee97000, compress 0x0/0x0/0x0, omap 0x4a79b, meta 0x6065865), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6465743 data_alloc: 234881024 data_used: 14596589
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 169672704 unmapped: 77021184 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:00.009296+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d87ab400 session 0x5637d651f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637d8b4a800 session 0x5637d5566e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 ms_handle_reset con 0x5637dbd20c00 session 0x5637d6344000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 167944192 unmapped: 78749696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 heartbeat osd_stat(store_statfs(0x4ca8b4000/0x0/0x4ffc00000, data 0x2f086726/0x2f298000, compress 0x0/0x0/0x0, omap 0x4a9a5, meta 0x606565b), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:01.009436+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 168001536 unmapped: 78692352 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4bc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 378 ms_handle_reset con 0x5637d8b4bc00 session 0x5637d63dfa40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:02.009664+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 70320128 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:03.009788+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 70320128 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 378 heartbeat osd_stat(store_statfs(0x4c9466000/0x0/0x4ffc00000, data 0x304c2e24/0x306d5000, compress 0x0/0x0/0x0, omap 0x4a9a5, meta 0x606565b), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 378 handle_osd_map epochs [378,379], i have 379, src has [1,379]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8e64400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 ms_handle_reset con 0x5637d8e64400 session 0x5637d5e4ba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:04.009968+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 ms_handle_reset con 0x5637dc37e000 session 0x5637d636f180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6653431 data_alloc: 234881024 data_used: 23187949
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 70295552 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:05.010116+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176398336 unmapped: 70295552 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4a800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 ms_handle_reset con 0x5637d8b4a800 session 0x5637d63df6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:06.010235+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4bc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.816649437s of 10.127748489s, submitted: 121
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 ms_handle_reset con 0x5637d8b4bc00 session 0x5637d63ea8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176611328 unmapped: 70082560 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:07.010350+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 ms_handle_reset con 0x5637d8b49800 session 0x5637d57368c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176611328 unmapped: 70082560 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:08.010557+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 heartbeat osd_stat(store_statfs(0x4c9473000/0x0/0x4ffc00000, data 0x304c4522/0x306d7000, compress 0x0/0x0/0x0, omap 0x4b275, meta 0x6064d8b), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 379 handle_osd_map epochs [380,380], i have 380, src has [1,380]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176635904 unmapped: 70057984 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:09.011002+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6654669 data_alloc: 234881024 data_used: 23712237
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 380 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d5e4aa80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 380 ms_handle_reset con 0x5637dbd20000 session 0x5637d57ada40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176553984 unmapped: 70139904 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 380 ms_handle_reset con 0x5637dbd20000 session 0x5637d5862540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 380 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d5e4ba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:10.011267+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 70123520 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:11.011393+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 70123520 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 380 ms_handle_reset con 0x5637d8b49c00 session 0x5637d63eb340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 380 handle_osd_map epochs [380,381], i have 380, src has [1,381]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:12.011524+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 381 ms_handle_reset con 0x5637d5514000 session 0x5637d6333dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637dbfc0c00 session 0x5637d55bac40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637d8434400 session 0x5637d651f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177627136 unmapped: 69066752 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:13.011635+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177627136 unmapped: 69066752 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:14.011768+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 heartbeat osd_stat(store_statfs(0x4c8865000/0x0/0x4ffc00000, data 0x310ca80d/0x312e3000, compress 0x0/0x0/0x0, omap 0x4af84, meta 0x606507c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6741503 data_alloc: 234881024 data_used: 23735037
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 68984832 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:15.011905+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181886976 unmapped: 64806912 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:16.012122+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.240189552s of 10.010190964s, submitted: 140
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181075968 unmapped: 65617920 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:17.018485+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183156736 unmapped: 63537152 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:18.018629+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183156736 unmapped: 63537152 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:19.018796+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6834293 data_alloc: 234881024 data_used: 25264893
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183156736 unmapped: 63537152 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:20.019018+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 heartbeat osd_stat(store_statfs(0x4c7ce5000/0x0/0x4ffc00000, data 0x31c4080d/0x31e59000, compress 0x0/0x0/0x0, omap 0x4af84, meta 0x606507c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183189504 unmapped: 63504384 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:21.019215+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183189504 unmapped: 63504384 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:22.019408+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183189504 unmapped: 63504384 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:23.019575+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637d5514000 session 0x5637d856afc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d78d3a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183189504 unmapped: 63504384 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637d8b49c00 session 0x5637d636f180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637dbd20000 session 0x5637d89bc380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:24.019717+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6834293 data_alloc: 234881024 data_used: 25264893
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 185376768 unmapped: 61317120 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637dbd20000 session 0x5637d7e8f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d651e540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637d8434400 session 0x5637d78d3c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637d8b49c00 session 0x5637d7e8e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:25.019839+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 ms_handle_reset con 0x5637dbd21000 session 0x5637d5567180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 heartbeat osd_stat(store_statfs(0x4c7ce4000/0x0/0x4ffc00000, data 0x31c4081d/0x31e5a000, compress 0x0/0x0/0x0, omap 0x4af84, meta 0x606507c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181698560 unmapped: 64995328 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:26.019993+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181698560 unmapped: 64995328 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:27.020131+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.946816444s of 11.103519440s, submitted: 44
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 383 ms_handle_reset con 0x5637dbd21000 session 0x5637d63dea80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 64978944 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:28.020319+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 64978944 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:29.020449+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6871997 data_alloc: 234881024 data_used: 25269005
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181714944 unmapped: 64978944 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:30.020641+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 383 heartbeat osd_stat(store_statfs(0x4c7758000/0x0/0x4ffc00000, data 0x321d73b9/0x323f2000, compress 0x0/0x0/0x0, omap 0x4b1e7, meta 0x6064e19), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181723136 unmapped: 64970752 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:31.020789+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 383 handle_osd_map epochs [384,384], i have 383, src has [1,384]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 384 ms_handle_reset con 0x5637d8434400 session 0x5637d7cfe1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 384 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d525d340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181755904 unmapped: 64937984 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7719400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ad000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:32.020933+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 385 ms_handle_reset con 0x5637d8b49c00 session 0x5637d592c380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 385 heartbeat osd_stat(store_statfs(0x4c7755000/0x0/0x4ffc00000, data 0x321d8f55/0x323f5000, compress 0x0/0x0/0x0, omap 0x4ad21, meta 0x60652df), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 182419456 unmapped: 64274432 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:33.021073+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 385 ms_handle_reset con 0x5637d57d0400 session 0x5637d89bdc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 385 heartbeat osd_stat(store_statfs(0x4c7750000/0x0/0x4ffc00000, data 0x321daaf1/0x323f8000, compress 0x0/0x0/0x0, omap 0x4ad21, meta 0x60652df), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187736064 unmapped: 58957824 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:34.021204+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6914470 data_alloc: 251658240 data_used: 31109901
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187777024 unmapped: 58916864 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:35.021339+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187777024 unmapped: 58916864 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:36.021496+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187777024 unmapped: 58916864 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:37.021648+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 385 heartbeat osd_stat(store_statfs(0x4c7750000/0x0/0x4ffc00000, data 0x321daaf1/0x323f8000, compress 0x0/0x0/0x0, omap 0x4ad21, meta 0x60652df), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.644440651s of 10.518565178s, submitted: 13
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187809792 unmapped: 58884096 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:38.021802+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 386 ms_handle_reset con 0x5637d57d0400 session 0x5637d6344540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187826176 unmapped: 58867712 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:39.021933+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6917036 data_alloc: 251658240 data_used: 31118093
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187826176 unmapped: 58867712 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:40.022139+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 386 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d78d3340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 386 handle_osd_map epochs [386,387], i have 386, src has [1,387]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 387 ms_handle_reset con 0x5637d8434400 session 0x5637d55ba8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 58851328 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:41.022294+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 387 ms_handle_reset con 0x5637d8b49c00 session 0x5637d5e4ae00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 187842560 unmapped: 58851328 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:42.022457+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 387 heartbeat osd_stat(store_statfs(0x4c774c000/0x0/0x4ffc00000, data 0x321de2e1/0x323fe000, compress 0x0/0x0/0x0, omap 0x4af84, meta 0x606507c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 188006400 unmapped: 58687488 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:43.022600+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 387 handle_osd_map epochs [387,388], i have 388, src has [1,388]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 188080128 unmapped: 58613760 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:44.022741+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6923480 data_alloc: 251658240 data_used: 31386381
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 388 ms_handle_reset con 0x5637dbd21000 session 0x5637d8553880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 188096512 unmapped: 58597376 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:45.022859+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 189964288 unmapped: 56729600 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:46.022977+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 189964288 unmapped: 56729600 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:47.023121+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 388 heartbeat osd_stat(store_statfs(0x4c741d000/0x0/0x4ffc00000, data 0x328a5f09/0x3272d000, compress 0x0/0x0/0x0, omap 0x4b304, meta 0x6064cfc), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.128515720s of 10.040530205s, submitted: 74
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191438848 unmapped: 55255040 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:48.023259+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 388 ms_handle_reset con 0x5637d5514000 session 0x5637d8552700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 388 ms_handle_reset con 0x5637dc146000 session 0x5637d8d0da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 388 handle_osd_map epochs [388,389], i have 388, src has [1,389]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 388 handle_osd_map epochs [389,389], i have 389, src has [1,389]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191463424 unmapped: 55230464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:49.023414+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 heartbeat osd_stat(store_statfs(0x4c71f0000/0x0/0x4ffc00000, data 0x32ad19a4/0x3295a000, compress 0x0/0x0/0x0, omap 0x4b304, meta 0x6064cfc), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 6993122 data_alloc: 251658240 data_used: 31374093
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 ms_handle_reset con 0x5637dbd20400 session 0x5637d5f65340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 ms_handle_reset con 0x5637dbfc0000 session 0x5637d63de540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191512576 unmapped: 55181312 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:50.023604+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 ms_handle_reset con 0x5637d8515400 session 0x5637d5736a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 ms_handle_reset con 0x5637d87ab400 session 0x5637d6097180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 heartbeat osd_stat(store_statfs(0x4c71c2000/0x0/0x4ffc00000, data 0x32b019a4/0x3298a000, compress 0x0/0x0/0x0, omap 0x4b5ed, meta 0x6064a13), peers [0,2] op hist [0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 ms_handle_reset con 0x5637d5514000 session 0x5637d8d0da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 ms_handle_reset con 0x5637d8515400 session 0x5637d5f65500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179421184 unmapped: 67272704 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:51.023740+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179781632 unmapped: 66912256 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:52.023858+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179781632 unmapped: 66912256 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:53.024108+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179830784 unmapped: 66863104 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:54.024315+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 390 ms_handle_reset con 0x5637dbd20400 session 0x5637d58b4380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6660253 data_alloc: 234881024 data_used: 14406264
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 390 heartbeat osd_stat(store_statfs(0x4c95c1000/0x0/0x4ffc00000, data 0x3070337e/0x30589000, compress 0x0/0x0/0x0, omap 0x4adba, meta 0x6065246), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,5])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179838976 unmapped: 66854912 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:55.024537+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179855360 unmapped: 66838528 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:56.024663+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 390 ms_handle_reset con 0x5637dbfc0000 session 0x5637d57ad500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179855360 unmapped: 66838528 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:57.024827+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179863552 unmapped: 66830336 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:58.025023+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 6.628746986s of 10.202033997s, submitted: 170
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 390 heartbeat osd_stat(store_statfs(0x4c9f82000/0x0/0x4ffc00000, data 0x2fd4437e/0x2fbca000, compress 0x0/0x0/0x0, omap 0x4ae3e, meta 0x60651c2), peers [0,2] op hist [0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 390 ms_handle_reset con 0x5637d8515400 session 0x5637d6332380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179863552 unmapped: 66830336 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 390 handle_osd_map epochs [390,391], i have 390, src has [1,391]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 390 handle_osd_map epochs [391,391], i have 391, src has [1,391]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:59.025209+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6633644 data_alloc: 234881024 data_used: 14406264
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 391 ms_handle_reset con 0x5637d5514000 session 0x5637d8063500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 391 ms_handle_reset con 0x5637d87ab400 session 0x5637d7e8e540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 391 heartbeat osd_stat(store_statfs(0x4c9f7c000/0x0/0x4ffc00000, data 0x2fd45f7c/0x2fbce000, compress 0x0/0x0/0x0, omap 0x4b1d6, meta 0x6064e2a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179863552 unmapped: 66830336 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:00.025415+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 391 ms_handle_reset con 0x5637dbd20400 session 0x5637d8553180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 391 ms_handle_reset con 0x5637dc146000 session 0x5637d78d3500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 391 ms_handle_reset con 0x5637d57d0400 session 0x5637d592a000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 391 handle_osd_map epochs [391,392], i have 391, src has [1,392]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 392 ms_handle_reset con 0x5637dc146000 session 0x5637d651f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179896320 unmapped: 66797568 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:01.025544+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 66781184 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 392 handle_osd_map epochs [392,393], i have 392, src has [1,393]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:02.025716+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 ms_handle_reset con 0x5637d5514000 session 0x5637d8320c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 heartbeat osd_stat(store_statfs(0x4c9f7c000/0x0/0x4ffc00000, data 0x2fd47b0a/0x2fbd0000, compress 0x0/0x0/0x0, omap 0x4b71f, meta 0x60648e1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 66764800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:03.025864+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 66764800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:04.026070+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6638233 data_alloc: 234881024 data_used: 14406861
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 66764800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:05.026149+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 66764800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:06.026306+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 heartbeat osd_stat(store_statfs(0x4c9f77000/0x0/0x4ffc00000, data 0x2fd496c2/0x2fbd3000, compress 0x0/0x0/0x0, omap 0x4b7d3, meta 0x606482d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179929088 unmapped: 66764800 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:07.026425+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 heartbeat osd_stat(store_statfs(0x4c9f77000/0x0/0x4ffc00000, data 0x2fd496c2/0x2fbd3000, compress 0x0/0x0/0x0, omap 0x4b7d3, meta 0x606482d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 ms_handle_reset con 0x5637d8515400 session 0x5637d8d0c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 ms_handle_reset con 0x5637d87ab400 session 0x5637d5862540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 ms_handle_reset con 0x5637d5514000 session 0x5637d5228c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 ms_handle_reset con 0x5637d57d0400 session 0x5637d7cfec40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 ms_handle_reset con 0x5637dc146000 session 0x5637d55bbc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 66625536 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:08.026569+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 ms_handle_reset con 0x5637dbd20400 session 0x5637d648d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.272955894s of 10.072507858s, submitted: 48
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:09.026767+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180895744 unmapped: 65798144 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d7e8e700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d8251180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d5514000 session 0x5637d5262380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d57d0400 session 0x5637d5e4bc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637dbd20400 session 0x5637d5229880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6664075 data_alloc: 234881024 data_used: 14406861
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:10.027064+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179208192 unmapped: 67485696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:11.027217+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179208192 unmapped: 67485696 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637dbd20000 session 0x5637d5e4b880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d7719400 session 0x5637d5566fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d87ad000 session 0x5637d5791180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 heartbeat osd_stat(store_statfs(0x4c9dc0000/0x0/0x4ffc00000, data 0x2fefd1b3/0x2fd8a000, compress 0x0/0x0/0x0, omap 0x4b636, meta 0x60649ca), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d5514000 session 0x5637d8618540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:12.027372+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174522368 unmapped: 72171520 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 heartbeat osd_stat(store_statfs(0x4ca358000/0x0/0x4ffc00000, data 0x2f9681a3/0x2f7f4000, compress 0x0/0x0/0x0, omap 0x4b170, meta 0x6064e90), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 heartbeat osd_stat(store_statfs(0x4ca358000/0x0/0x4ffc00000, data 0x2f9681a3/0x2f7f4000, compress 0x0/0x0/0x0, omap 0x5bbaa, meta 0x6054456), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:13.027519+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174522368 unmapped: 72171520 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:14.027653+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174522368 unmapped: 72171520 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d57d0400 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6591563 data_alloc: 218103808 data_used: 8562125
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:15.027805+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174522368 unmapped: 72171520 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637dbd20400 session 0x5637d6361c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 heartbeat osd_stat(store_statfs(0x4ca358000/0x0/0x4ffc00000, data 0x2f9681a3/0x2f7f4000, compress 0x0/0x0/0x0, omap 0x5bc2e, meta 0x60543d2), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637dbd20400 session 0x5637d8619a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637d5514000 session 0x5637d6360c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:16.027945+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 71819264 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d0400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7719400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:17.028097+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 71819264 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ad000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:18.028249+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 71819264 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.060918808s of 10.050376892s, submitted: 89
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:19.028386+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 71819264 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 ms_handle_reset con 0x5637dbd20000 session 0x5637d8d0c380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6598921 data_alloc: 218103808 data_used: 9124317
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:20.028545+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 71819264 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 heartbeat osd_stat(store_statfs(0x4ca32e000/0x0/0x4ffc00000, data 0x2f9921a3/0x2f81e000, compress 0x0/0x0/0x0, omap 0x5bc2e, meta 0x60543d2), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:21.028716+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 174874624 unmapped: 71819264 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 394 handle_osd_map epochs [394,395], i have 395, src has [1,395]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 395 ms_handle_reset con 0x5637dc146000 session 0x5637d6332540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 395 heartbeat osd_stat(store_statfs(0x4ca32e000/0x0/0x4ffc00000, data 0x2f9921a3/0x2f81e000, compress 0x0/0x0/0x0, omap 0x5bc2e, meta 0x60543d2), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:22.028907+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 175243264 unmapped: 71450624 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:23.031164+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 175243264 unmapped: 71450624 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:24.031301+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 70590464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 396 heartbeat osd_stat(store_statfs(0x4c9af1000/0x0/0x4ffc00000, data 0x3065e8db/0x30057000, compress 0x0/0x0/0x0, omap 0x5c659, meta 0x60539a7), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6704958 data_alloc: 218103808 data_used: 9128413
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:25.031461+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 70590464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 396 heartbeat osd_stat(store_statfs(0x4c9af3000/0x0/0x4ffc00000, data 0x3065e8db/0x30057000, compress 0x0/0x0/0x0, omap 0x5c659, meta 0x60539a7), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:26.031631+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 70590464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:27.031751+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 70590464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:28.031935+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 70590464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:29.032126+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 70590464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.065967560s of 11.257816315s, submitted: 71
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6707114 data_alloc: 218103808 data_used: 9187805
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:30.032371+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176103424 unmapped: 70590464 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 396 heartbeat osd_stat(store_statfs(0x4c9af3000/0x0/0x4ffc00000, data 0x3065e8db/0x30057000, compress 0x0/0x0/0x0, omap 0x5c659, meta 0x60539a7), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 396 ms_handle_reset con 0x5637d8434400 session 0x5637d63de540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:31.032517+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176521216 unmapped: 70172672 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:32.032641+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 396 ms_handle_reset con 0x5637d87ad000 session 0x5637d5f656c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177586176 unmapped: 69107712 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:33.032764+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 396 handle_osd_map epochs [396,397], i have 397, src has [1,397]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 397 ms_handle_reset con 0x5637d8434400 session 0x5637d5f641c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177774592 unmapped: 68919296 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:34.201208+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 68886528 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6775899 data_alloc: 218103808 data_used: 9305565
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 397 ms_handle_reset con 0x5637dbd20000 session 0x5637d55bba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:35.201405+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 68886528 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 397 ms_handle_reset con 0x5637dbd20400 session 0x5637d5229dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:36.201517+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176889856 unmapped: 69804032 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 397 heartbeat osd_stat(store_statfs(0x4c924e000/0x0/0x4ffc00000, data 0x30f044cb/0x308fe000, compress 0x0/0x0/0x0, omap 0x5c761, meta 0x605389f), peers [0,2] op hist [0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:37.201669+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176930816 unmapped: 69763072 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:38.201846+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176930816 unmapped: 69763072 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 397 ms_handle_reset con 0x5637dc146000 session 0x5637d525c000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 397 ms_handle_reset con 0x5637dc146000 session 0x5637d8552a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:39.202073+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176930816 unmapped: 69763072 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 397 handle_osd_map epochs [397,398], i have 397, src has [1,398]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.798519135s of 10.318253517s, submitted: 117
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6892821 data_alloc: 218103808 data_used: 9309661
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:40.202233+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177020928 unmapped: 69672960 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:41.202410+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177029120 unmapped: 69664768 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 399 ms_handle_reset con 0x5637d8b49c00 session 0x5637d5262380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 399 heartbeat osd_stat(store_statfs(0x4c8ce1000/0x0/0x4ffc00000, data 0x30fd2af4/0x30e64000, compress 0x0/0x0/0x0, omap 0x5ccf6, meta 0x605330a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:42.202540+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177037312 unmapped: 69656576 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 399 ms_handle_reset con 0x5637d8434400 session 0x5637d57361c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:43.202710+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177037312 unmapped: 69656576 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:44.202860+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177037312 unmapped: 69656576 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6759915 data_alloc: 218103808 data_used: 9309933
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ad000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 399 handle_osd_map epochs [400,400], i have 399, src has [1,400]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:45.202979+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 ms_handle_reset con 0x5637d87ad000 session 0x5637d5f436c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177061888 unmapped: 69632000 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:46.203121+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177061888 unmapped: 69632000 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:47.203269+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177061888 unmapped: 69632000 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 heartbeat osd_stat(store_statfs(0x4c8ce1000/0x0/0x4ffc00000, data 0x30fd570e/0x30e69000, compress 0x0/0x0/0x0, omap 0x5cd4c, meta 0x60532b4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:48.203484+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177061888 unmapped: 69632000 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 ms_handle_reset con 0x5637dbd20400 session 0x5637d651e700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 ms_handle_reset con 0x5637dbd20000 session 0x5637d57ad340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:49.203643+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177061888 unmapped: 69632000 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ad000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.091318130s of 10.015619278s, submitted: 67
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6769532 data_alloc: 218103808 data_used: 9314029
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:50.203831+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 214908928 unmapped: 31784960 heap: 246693888 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:51.203984+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 219103232 unmapped: 31793152 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 heartbeat osd_stat(store_statfs(0x4c6ce1000/0x0/0x4ffc00000, data 0x32fd5780/0x32e6b000, compress 0x0/0x0/0x0, omap 0x5c627, meta 0x60539d9), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1,9])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 400 handle_osd_map epochs [401,401], i have 401, src has [1,401]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:52.204295+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 189816832 unmapped: 61079552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b49c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:53.206225+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178298880 unmapped: 72597504 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc146000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:54.206423+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180625408 unmapped: 70270976 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7059736 data_alloc: 218103808 data_used: 9314301
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 401 heartbeat osd_stat(store_statfs(0x4c58de000/0x0/0x4ffc00000, data 0x343d71ff/0x3426e000, compress 0x0/0x0/0x0, omap 0x5c6e9, meta 0x6053917), peers [0,2] op hist [0,0,0,0,0,1,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:55.206622+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180649984 unmapped: 70246400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:56.206824+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180690944 unmapped: 70205440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 401 handle_osd_map epochs [401,402], i have 402, src has [1,402]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:57.206940+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176545792 unmapped: 74350592 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:58.207089+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180797440 unmapped: 70098944 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 402 ms_handle_reset con 0x5637dc146000 session 0x5637d8010700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 402 heartbeat osd_stat(store_statfs(0x4c30db000/0x0/0x4ffc00000, data 0x36bd8d9b/0x36a71000, compress 0x0/0x0/0x0, omap 0x5cd86, meta 0x605327a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:59.207221+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd1f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 185139200 unmapped: 65757184 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79aa400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 402 handle_osd_map epochs [402,403], i have 402, src has [1,403]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.405449390s of 10.063061714s, submitted: 52
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7432519 data_alloc: 218103808 data_used: 9314301
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:00.207511+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 403 ms_handle_reset con 0x5637d79aa400 session 0x5637d8619180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 185344000 unmapped: 65552384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87aec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:01.207613+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181379072 unmapped: 69517312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 403 ms_handle_reset con 0x5637d87aec00 session 0x5637d5862540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 403 handle_osd_map epochs [403,404], i have 403, src has [1,404]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:02.207731+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190095360 unmapped: 60801024 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 403 handle_osd_map epochs [404,404], i have 404, src has [1,404]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:03.207851+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 404 heartbeat osd_stat(store_statfs(0x4bd0d3000/0x0/0x4ffc00000, data 0x3cbdc527/0x3ca77000, compress 0x0/0x0/0x0, omap 0x5cf7e, meta 0x6053082), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,2,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 185966592 unmapped: 64929792 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:04.208007+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 404 ms_handle_reset con 0x5637d69ca000 session 0x5637d63dec40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7820000 data_alloc: 218103808 data_used: 9314301
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:05.208109+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176693248 unmapped: 74203136 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 404 ms_handle_reset con 0x5637d69ca000 session 0x5637d6097180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79aa400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 404 heartbeat osd_stat(store_statfs(0x4bbcd5000/0x0/0x4ffc00000, data 0x3dfdc527/0x3de77000, compress 0x0/0x0/0x0, omap 0x5d406, meta 0x6052bfa), peers [0,2] op hist [0,0,0,0,0,2])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:06.208228+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 185466880 unmapped: 65429504 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 405 ms_handle_reset con 0x5637d79aa400 session 0x5637d5863a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:07.208386+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 185819136 unmapped: 65077248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87aec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 405 ms_handle_reset con 0x5637d87aec00 session 0x5637d7e8f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:08.208526+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 186032128 unmapped: 64864256 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637dbd20000 session 0x5637d5736c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:09.208812+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177815552 unmapped: 73080832 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d87ad000 session 0x5637d651e540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 heartbeat osd_stat(store_statfs(0x4b44cd000/0x0/0x4ffc00000, data 0x457dfd27/0x4567d000, compress 0x0/0x0/0x0, omap 0x5d869, meta 0x6052797), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d8434400 session 0x5637d8d0cfc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8590364 data_alloc: 218103808 data_used: 9314399
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d69ca000 session 0x5637d6332fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79aa400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 2.550561190s of 10.188864708s, submitted: 147
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d79aa400 session 0x5637d8d0c8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:10.208994+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87aec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177856512 unmapped: 73039872 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:11.209113+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d8b49c00 session 0x5637d79c9dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177905664 unmapped: 72990720 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d57d0400 session 0x5637d63eae00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d7719400 session 0x5637d8011dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:12.209222+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d87aec00 session 0x5637d5737880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176570368 unmapped: 74326016 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79aa400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d79aa400 session 0x5637d8251180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 ms_handle_reset con 0x5637d69ca000 session 0x5637d5863dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 heartbeat osd_stat(store_statfs(0x4c9756000/0x0/0x4ffc00000, data 0x3055cbe1/0x303f5000, compress 0x0/0x0/0x0, omap 0x5cdb3, meta 0x605324d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:13.209376+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176594944 unmapped: 74301440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 406 handle_osd_map epochs [406,407], i have 407, src has [1,407]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 407 ms_handle_reset con 0x5637d8434400 session 0x5637d8321a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:14.209494+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176373760 unmapped: 74522624 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6813916 data_alloc: 218103808 data_used: 9140009
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 407 handle_osd_map epochs [408,408], i have 407, src has [1,408]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 408 ms_handle_reset con 0x5637d8434400 session 0x5637d8321c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:15.209595+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 408 ms_handle_reset con 0x5637d69ca000 session 0x5637d3bf5340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179576832 unmapped: 71319552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 408 heartbeat osd_stat(store_statfs(0x4c95d8000/0x0/0x4ffc00000, data 0x3056529c/0x30400000, compress 0x0/0x0/0x0, omap 0x5c837, meta 0x60537c9), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:16.209721+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7719400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 70778880 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:17.209860+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178618368 unmapped: 72278016 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79aa400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 409 ms_handle_reset con 0x5637d79aa400 session 0x5637d57ad880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 409 heartbeat osd_stat(store_statfs(0x4c88c7000/0x0/0x4ffc00000, data 0x313eb23a/0x31285000, compress 0x0/0x0/0x0, omap 0x5c837, meta 0x60537c9), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:18.210029+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 409 handle_osd_map epochs [410,410], i have 409, src has [1,410]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 410 ms_handle_reset con 0x5637d7719400 session 0x5637d7e8ec40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87aec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178642944 unmapped: 72253440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 410 ms_handle_reset con 0x5637d87aec00 session 0x5637d651f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87aec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:19.210194+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 410 handle_osd_map epochs [410,411], i have 410, src has [1,411]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178257920 unmapped: 72638464 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 411 ms_handle_reset con 0x5637d87aec00 session 0x5637d6333dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6823307 data_alloc: 218103808 data_used: 8280750
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 411 handle_osd_map epochs [411,412], i have 411, src has [1,412]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.106662750s of 10.086552620s, submitted: 321
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:20.210338+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7719400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79aa400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 412 ms_handle_reset con 0x5637d79aa400 session 0x5637d79c9dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 412 ms_handle_reset con 0x5637d7719400 session 0x5637d78d6e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 412 ms_handle_reset con 0x5637d8434400 session 0x5637d63de380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178282496 unmapped: 72613888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 412 handle_osd_map epochs [413,413], i have 412, src has [1,413]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 413 ms_handle_reset con 0x5637d69ca000 session 0x5637d5228c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:21.210468+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 413 ms_handle_reset con 0x5637d69ca000 session 0x5637d5791880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176865280 unmapped: 74031104 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 413 ms_handle_reset con 0x5637d88e7800 session 0x5637d8d0cfc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 413 ms_handle_reset con 0x5637d87ab400 session 0x5637d651f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:22.210603+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177946624 unmapped: 72949760 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 413 ms_handle_reset con 0x5637d8514000 session 0x5637d55bb180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 413 heartbeat osd_stat(store_statfs(0x4de8c0000/0x0/0x4ffc00000, data 0x1b051e5a/0x1b28c000, compress 0x0/0x0/0x0, omap 0x5ba1e, meta 0x60545e2), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 413 handle_osd_map epochs [414,414], i have 413, src has [1,414]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 ms_handle_reset con 0x5637dbd1f400 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 ms_handle_reset con 0x5637dc37e000 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:23.210744+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 ms_handle_reset con 0x5637d79ac000 session 0x5637d5e4b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 ms_handle_reset con 0x5637d69ca000 session 0x5637d5567500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 ms_handle_reset con 0x5637d8514000 session 0x5637d6332000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177586176 unmapped: 73310208 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:24.210916+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 ms_handle_reset con 0x5637d87ab400 session 0x5637d8d0d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 ms_handle_reset con 0x5637d87ab400 session 0x5637d63ea1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177610752 unmapped: 73285632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3214964 data_alloc: 218103808 data_used: 8280734
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:25.211069+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 414 handle_osd_map epochs [415,415], i have 414, src has [1,415]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177618944 unmapped: 73277440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 415 ms_handle_reset con 0x5637d69ca000 session 0x5637d58b5340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:26.211185+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 415 heartbeat osd_stat(store_statfs(0x4f58bb000/0x0/0x4ffc00000, data 0x4055523/0x4291000, compress 0x0/0x0/0x0, omap 0x5b205, meta 0x6054dfb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:27.211352+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 416 ms_handle_reset con 0x5637d79ac000 session 0x5637d8063500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:28.211483+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 416 ms_handle_reset con 0x5637dc37e000 session 0x5637d89bc380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 416 handle_osd_map epochs [417,417], i have 416, src has [1,417]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 417 ms_handle_reset con 0x5637d8514000 session 0x5637d525d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:29.211646+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 417 heartbeat osd_stat(store_statfs(0x4f58af000/0x0/0x4ffc00000, data 0x4058db7/0x4299000, compress 0x0/0x0/0x0, omap 0x5b5ec, meta 0x6054a14), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3229091 data_alloc: 218103808 data_used: 8281022
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 417 handle_osd_map epochs [417,418], i have 417, src has [1,418]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.504879951s of 10.005697250s, submitted: 365
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:30.211825+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 ms_handle_reset con 0x5637d69ca000 session 0x5637d5567180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 ms_handle_reset con 0x5637d8514000 session 0x5637d5790000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 ms_handle_reset con 0x5637d79ac000 session 0x5637d78d2000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:31.211976+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 ms_handle_reset con 0x5637d87ab400 session 0x5637d78d2000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 ms_handle_reset con 0x5637dc37e000 session 0x5637d8063500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:32.212128+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 heartbeat osd_stat(store_statfs(0x4f58ae000/0x0/0x4ffc00000, data 0x405a896/0x429c000, compress 0x0/0x0/0x0, omap 0x5bb98, meta 0x6054468), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 ms_handle_reset con 0x5637d69ca000 session 0x5637d592c1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:33.212284+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 ms_handle_reset con 0x5637d79ac000 session 0x5637d5228c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 ms_handle_reset con 0x5637d8514000 session 0x5637d8321c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 ms_handle_reset con 0x5637d87ab400 session 0x5637d8d0d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:34.212388+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177577984 unmapped: 73318400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236659 data_alloc: 218103808 data_used: 8281136
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 ms_handle_reset con 0x5637d88e7800 session 0x5637d525d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:35.212508+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177586176 unmapped: 73310208 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:36.212689+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 ms_handle_reset con 0x5637d69ca000 session 0x5637d5790fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176168960 unmapped: 74727424 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 ms_handle_reset con 0x5637d88e7800 session 0x5637d8d0c8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 ms_handle_reset con 0x5637d79ac000 session 0x5637d8618540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:37.212873+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176168960 unmapped: 74727424 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:38.213336+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 heartbeat osd_stat(store_statfs(0x4f58b0000/0x0/0x4ffc00000, data 0x405c3b2/0x429c000, compress 0x0/0x0/0x0, omap 0x5c003, meta 0x6053ffd), peers [0,2] op hist [0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 419 handle_osd_map epochs [420,420], i have 420, src has [1,420]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176177152 unmapped: 74719232 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d8514000 session 0x5637d8251180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d87ab400 session 0x5637d7e8e700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:39.213477+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d51fb000 session 0x5637d5791180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d87ab400 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d69ca000 session 0x5637d525c000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176218112 unmapped: 74678272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3236719 data_alloc: 218103808 data_used: 8281635
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:40.213679+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.630165100s of 10.658385277s, submitted: 76
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d79ac000 session 0x5637d6332380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176226304 unmapped: 74670080 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d88e7800 session 0x5637d5862700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d8514000 session 0x5637d63df880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:41.214106+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d88e7800 session 0x5637d842c380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 ms_handle_reset con 0x5637d51fb000 session 0x5637d79c8fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 74661888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:42.214242+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 74661888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 heartbeat osd_stat(store_statfs(0x4f58ad000/0x0/0x4ffc00000, data 0x405dfa2/0x429f000, compress 0x0/0x0/0x0, omap 0x5c3e1, meta 0x6053c1f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:43.215202+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 74661888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:44.215892+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 74661888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3235426 data_alloc: 218103808 data_used: 8281635
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:45.216375+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 420 handle_osd_map epochs [420,421], i have 421, src has [1,421]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 74661888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f58ad000/0x0/0x4ffc00000, data 0x405dfa2/0x429f000, compress 0x0/0x0/0x0, omap 0x5c3e1, meta 0x6053c1f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:46.216575+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176234496 unmapped: 74661888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 421 ms_handle_reset con 0x5637d79ac000 session 0x5637d85521c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:47.216807+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 74653696 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:48.216951+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 74653696 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:49.217167+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 421 heartbeat osd_stat(store_statfs(0x4f58a9000/0x0/0x4ffc00000, data 0x405fa31/0x42a3000, compress 0x0/0x0/0x0, omap 0x5c213, meta 0x6053ded), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 74653696 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3241486 data_alloc: 218103808 data_used: 8281635
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:50.217320+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 74653696 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:51.217550+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 176242688 unmapped: 74653696 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.194580078s of 11.235797882s, submitted: 32
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8612400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 421 ms_handle_reset con 0x5637d8612400 session 0x5637d7cfe1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:52.217733+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 421 handle_osd_map epochs [421,422], i have 421, src has [1,422]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 422 ms_handle_reset con 0x5637d51fb000 session 0x5637d8619500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177315840 unmapped: 73580544 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:53.217942+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 423 ms_handle_reset con 0x5637d79ac000 session 0x5637d8a58a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 423 ms_handle_reset con 0x5637d88e7800 session 0x5637d8d0c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 423 ms_handle_reset con 0x5637d87ab400 session 0x5637d5e4b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177348608 unmapped: 73547776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:54.218085+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 423 handle_osd_map epochs [424,424], i have 423, src has [1,424]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 424 ms_handle_reset con 0x5637d8514000 session 0x5637d6360c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 424 ms_handle_reset con 0x5637d79ac000 session 0x5637d5263dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 424 heartbeat osd_stat(store_statfs(0x4f589c000/0x0/0x4ffc00000, data 0x4063af3/0x42ae000, compress 0x0/0x0/0x0, omap 0x5c576, meta 0x6053a8a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177364992 unmapped: 73531392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3260070 data_alloc: 218103808 data_used: 8281749
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:55.218228+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 424 handle_osd_map epochs [425,425], i have 424, src has [1,425]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 425 ms_handle_reset con 0x5637d87ab400 session 0x5637d57acc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177364992 unmapped: 73531392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:56.218402+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 425 handle_osd_map epochs [425,426], i have 425, src has [1,426]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 ms_handle_reset con 0x5637d88e7800 session 0x5637d57ac700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 ms_handle_reset con 0x5637d51fb000 session 0x5637d57376c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177750016 unmapped: 73146368 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d1000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 ms_handle_reset con 0x5637d57d1000 session 0x5637d8062000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:57.218606+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 heartbeat osd_stat(store_statfs(0x4f5492000/0x0/0x4ffc00000, data 0x4468e29/0x46b8000, compress 0x0/0x0/0x0, omap 0x5cb6d, meta 0x6053493), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 ms_handle_reset con 0x5637d79ac000 session 0x5637d592b880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87ab400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 ms_handle_reset con 0x5637d51fb000 session 0x5637d55ba540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 ms_handle_reset con 0x5637d88e7800 session 0x5637d525d880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 ms_handle_reset con 0x5637d5814800 session 0x5637d86181c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177758208 unmapped: 73138176 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:58.218868+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 426 handle_osd_map epochs [427,427], i have 426, src has [1,427]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 427 ms_handle_reset con 0x5637d87ab400 session 0x5637d842c380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177758208 unmapped: 73138176 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:59.219021+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 427 ms_handle_reset con 0x5637d51fb000 session 0x5637d57ac700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 427 heartbeat osd_stat(store_statfs(0x4f548c000/0x0/0x4ffc00000, data 0x446aa19/0x46bb000, compress 0x0/0x0/0x0, omap 0x5c67e, meta 0x6053982), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 427 ms_handle_reset con 0x5637d5814800 session 0x5637d55bb180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177774592 unmapped: 73121792 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3295426 data_alloc: 218103808 data_used: 8676151
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:00.219262+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8517400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 428 ms_handle_reset con 0x5637d8517400 session 0x5637d7e8e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 428 ms_handle_reset con 0x5637d88e7800 session 0x5637d6332380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 73089024 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:01.219447+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 428 ms_handle_reset con 0x5637d79ac000 session 0x5637d58b4fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177807360 unmapped: 73089024 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 428 handle_osd_map epochs [429,429], i have 428, src has [1,429]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.828639030s of 10.092538834s, submitted: 120
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:02.219602+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177725440 unmapped: 73170944 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:03.219786+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 429 heartbeat osd_stat(store_statfs(0x4f508c000/0x0/0x4ffc00000, data 0x486df1b/0x4abe000, compress 0x0/0x0/0x0, omap 0x5c702, meta 0x60538fe), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 430 ms_handle_reset con 0x5637d79ac000 session 0x5637d6361180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 430 ms_handle_reset con 0x5637d51fb000 session 0x5637d8552e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177741824 unmapped: 73154560 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 430 ms_handle_reset con 0x5637d5814800 session 0x5637d57901c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:04.220014+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8517400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 430 ms_handle_reset con 0x5637d8517400 session 0x5637d5228a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177741824 unmapped: 73154560 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3324225 data_alloc: 218103808 data_used: 8681763
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:05.220145+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 430 handle_osd_map epochs [431,431], i have 430, src has [1,431]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177684480 unmapped: 73211904 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:06.220349+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80bb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 431 ms_handle_reset con 0x5637d80bb400 session 0x5637d79c9dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 432 ms_handle_reset con 0x5637d51fb000 session 0x5637d63df880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 432 ms_handle_reset con 0x5637d5814800 session 0x5637d63de380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 432 ms_handle_reset con 0x5637d79ac000 session 0x5637d856a1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 177709056 unmapped: 73187328 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80bb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:07.220533+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 432 ms_handle_reset con 0x5637d80bb400 session 0x5637d6332a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8517400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 432 ms_handle_reset con 0x5637d8517400 session 0x5637d5737340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 ms_handle_reset con 0x5637d88e7800 session 0x5637d78d3a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 ms_handle_reset con 0x5637d51fb000 session 0x5637d5791c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 ms_handle_reset con 0x5637d5814800 session 0x5637d842d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178765824 unmapped: 72130560 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:08.220807+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178782208 unmapped: 72114176 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 ms_handle_reset con 0x5637d79ac000 session 0x5637d7e8e8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80bb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:09.220932+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 heartbeat osd_stat(store_statfs(0x4f5079000/0x0/0x4ffc00000, data 0x4874fa3/0x4acc000, compress 0x0/0x0/0x0, omap 0x6c9c2, meta 0x604363e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 ms_handle_reset con 0x5637d80bb400 session 0x5637d5e4a380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80bb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 ms_handle_reset con 0x5637d80bb400 session 0x5637d8552540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178790400 unmapped: 72105984 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3335882 data_alloc: 218103808 data_used: 8681763
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:10.221135+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 434 ms_handle_reset con 0x5637d51fb000 session 0x5637d8553a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 434 ms_handle_reset con 0x5637d5814800 session 0x5637d5737dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 434 heartbeat osd_stat(store_statfs(0x4f507c000/0x0/0x4ffc00000, data 0x48769e8/0x4acd000, compress 0x0/0x0/0x0, omap 0x6cfb7, meta 0x6043049), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 178798592 unmapped: 72097792 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:11.221338+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 434 handle_osd_map epochs [434,435], i have 434, src has [1,435]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 ms_handle_reset con 0x5637d88e7800 session 0x5637d592ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 ms_handle_reset con 0x5637d79ac000 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 ms_handle_reset con 0x5637d886f800 session 0x5637d6360fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 ms_handle_reset con 0x5637d79ac000 session 0x5637d8d0c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179871744 unmapped: 71024640 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:12.221517+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 ms_handle_reset con 0x5637d51fb000 session 0x5637d5566fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179879936 unmapped: 71016448 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.688666344s of 10.987311363s, submitted: 156
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 ms_handle_reset con 0x5637d5814800 session 0x5637d525ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:13.221758+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179879936 unmapped: 71016448 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80bb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 ms_handle_reset con 0x5637d80bb400 session 0x5637d856b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:14.222122+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2401.6 total, 600.0 interval
                                           Cumulative writes: 27K writes, 107K keys, 27K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s
                                           Cumulative WAL: 27K writes, 9948 syncs, 2.76 writes per sync, written: 0.06 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 13K writes, 57K keys, 13K commit groups, 1.0 writes per commit group, ingest: 28.01 MB, 0.05 MB/s
                                           Interval WAL: 13K writes, 5800 syncs, 2.33 writes per sync, written: 0.03 GB, 0.05 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179888128 unmapped: 71008256 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:15.222245+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3343507 data_alloc: 218103808 data_used: 8682035
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 heartbeat osd_stat(store_statfs(0x4f507c000/0x0/0x4ffc00000, data 0x48785f4/0x4ad0000, compress 0x0/0x0/0x0, omap 0x6c4a6, meta 0x6043b5a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 435 handle_osd_map epochs [435,436], i have 436, src has [1,436]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179904512 unmapped: 70991872 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:16.222364+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 436 ms_handle_reset con 0x5637d5814800 session 0x5637d592ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 436 heartbeat osd_stat(store_statfs(0x4f5077000/0x0/0x4ffc00000, data 0x487a0c7/0x4ad3000, compress 0x0/0x0/0x0, omap 0x6c56f, meta 0x6043a91), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 70983680 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:17.222507+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 436 handle_osd_map epochs [436,437], i have 436, src has [1,437]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 70983680 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 437 ms_handle_reset con 0x5637d886f800 session 0x5637d6097880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:18.222682+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 70983680 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:19.222864+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 437 handle_osd_map epochs [438,438], i have 437, src has [1,438]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 438 ms_handle_reset con 0x5637d79ac000 session 0x5637d842d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 438 ms_handle_reset con 0x5637d88e7800 session 0x5637d79c9dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 438 ms_handle_reset con 0x5637d51fb000 session 0x5637d55bb6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 438 ms_handle_reset con 0x5637dc37f400 session 0x5637d8552e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 70983680 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:20.223059+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3360131 data_alloc: 218103808 data_used: 8682465
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179912704 unmapped: 70983680 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:21.223222+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d51fb000 session 0x5637d6360c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d5814800 session 0x5637d7e8e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179937280 unmapped: 70959104 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 heartbeat osd_stat(store_statfs(0x4f506a000/0x0/0x4ffc00000, data 0x487f87e/0x4ae0000, compress 0x0/0x0/0x0, omap 0x6c67b, meta 0x6043985), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:22.223338+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d79ac000 session 0x5637d63ea380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d88e7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d88e7800 session 0x5637d89bda40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d886f800 session 0x5637d78d3a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179986432 unmapped: 70909952 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:23.223468+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.311353683s of 10.673677444s, submitted: 98
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d51fb000 session 0x5637d63de700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179986432 unmapped: 70909952 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:24.223626+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d69ca000 session 0x5637d5229880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 ms_handle_reset con 0x5637d79ac000 session 0x5637d85536c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 440 handle_osd_map epochs [440,441], i have 441, src has [1,441]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 441 ms_handle_reset con 0x5637d5814800 session 0x5637d636e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180002816 unmapped: 70893568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:25.223772+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3371366 data_alloc: 218103808 data_used: 9088051
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 441 ms_handle_reset con 0x5637d79ac000 session 0x5637d3bf5340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 ms_handle_reset con 0x5637d69ca000 session 0x5637d57addc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180002816 unmapped: 70893568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 ms_handle_reset con 0x5637d886f800 session 0x5637d78d7180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:26.223915+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 ms_handle_reset con 0x5637d51fb000 session 0x5637d6361c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 ms_handle_reset con 0x5637d8384400 session 0x5637d89bd180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 ms_handle_reset con 0x5637dc37f400 session 0x5637d6345180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 heartbeat osd_stat(store_statfs(0x4f5060000/0x0/0x4ffc00000, data 0x4884d04/0x4aec000, compress 0x0/0x0/0x0, omap 0x6c430, meta 0x6043bd0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180011008 unmapped: 70885376 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:27.224148+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180019200 unmapped: 70877184 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:28.224302+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 handle_osd_map epochs [444,444], i have 442, src has [1,444]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 442 handle_osd_map epochs [443,444], i have 442, src has [1,444]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180043776 unmapped: 70852608 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d8384400 session 0x5637d842c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:29.224435+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79ac000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d51fb000 session 0x5637d57adc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d69ca000 session 0x5637d63de380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d79ac000 session 0x5637d58b4fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d51fb000 session 0x5637d651f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180060160 unmapped: 70836224 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:30.224659+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d69ca000 session 0x5637d5736fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3339382 data_alloc: 218103808 data_used: 8718952
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d8384400 session 0x5637d6332a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637dc37f400 session 0x5637d5228a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 ms_handle_reset con 0x5637d886f800 session 0x5637d5790000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 444 handle_osd_map epochs [444,445], i have 444, src has [1,445]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179879936 unmapped: 71016448 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 445 ms_handle_reset con 0x5637d886f800 session 0x5637d651e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:31.224829+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 445 ms_handle_reset con 0x5637d51fb000 session 0x5637d85536c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 445 heartbeat osd_stat(store_statfs(0x4f585b000/0x0/0x4ffc00000, data 0x4088417/0x42f1000, compress 0x0/0x0/0x0, omap 0x6c6ce, meta 0x6043932), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 445 ms_handle_reset con 0x5637d69ca000 session 0x5637d63de700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 445 ms_handle_reset con 0x5637d8384400 session 0x5637d55bb6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 445 handle_osd_map epochs [445,446], i have 445, src has [1,446]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179904512 unmapped: 70991872 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:32.225023+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 446 ms_handle_reset con 0x5637dc37f400 session 0x5637d592ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 446 ms_handle_reset con 0x5637dc37f400 session 0x5637d78d3a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179904512 unmapped: 70991872 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:33.225270+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 447 ms_handle_reset con 0x5637d51fb000 session 0x5637d58b4fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 447 ms_handle_reset con 0x5637d69ca000 session 0x5637d6345500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 447 ms_handle_reset con 0x5637d8384400 session 0x5637d5566c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.956144333s of 10.217906952s, submitted: 136
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87acc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 447 ms_handle_reset con 0x5637d87acc00 session 0x5637d6345180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179920896 unmapped: 70975488 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:34.225386+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 447 heartbeat osd_stat(store_statfs(0x4f5855000/0x0/0x4ffc00000, data 0x408cdc6/0x42f5000, compress 0x0/0x0/0x0, omap 0x6ccc3, meta 0x604333d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179920896 unmapped: 70975488 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:35.225541+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3346947 data_alloc: 218103808 data_used: 8718823
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 447 handle_osd_map epochs [448,448], i have 447, src has [1,448]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 448 ms_handle_reset con 0x5637d51fb000 session 0x5637d63ea380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 448 ms_handle_reset con 0x5637d886f800 session 0x5637d5263dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179937280 unmapped: 70959104 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:36.225708+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 448 ms_handle_reset con 0x5637d8384400 session 0x5637d5737880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 448 ms_handle_reset con 0x5637dbfc9400 session 0x5637d856bdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 449 ms_handle_reset con 0x5637d69ca000 session 0x5637d636e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 449 ms_handle_reset con 0x5637d51fb000 session 0x5637d6332a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 70918144 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:37.225844+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 449 handle_osd_map epochs [449,450], i have 449, src has [1,450]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 450 ms_handle_reset con 0x5637d851a400 session 0x5637d55676c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 450 ms_handle_reset con 0x5637dc37f400 session 0x5637d5228000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 70918144 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:38.225991+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 450 ms_handle_reset con 0x5637d69ca000 session 0x5637d592ce00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 450 ms_handle_reset con 0x5637d8384400 session 0x5637d648d180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 70918144 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:39.226216+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179978240 unmapped: 70918144 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:40.226488+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 450 ms_handle_reset con 0x5637d8384400 session 0x5637d5567c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3354786 data_alloc: 218103808 data_used: 8718709
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 450 heartbeat osd_stat(store_statfs(0x4f5850000/0x0/0x4ffc00000, data 0x409226b/0x42fc000, compress 0x0/0x0/0x0, omap 0x6cf61, meta 0x604309f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 179994624 unmapped: 70901760 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:41.226654+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 450 handle_osd_map epochs [451,451], i have 450, src has [1,451]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 451 ms_handle_reset con 0x5637d51fb000 session 0x5637d78d6e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 452 ms_handle_reset con 0x5637d69ca000 session 0x5637d651f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 70828032 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:42.227269+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180068352 unmapped: 70828032 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 452 ms_handle_reset con 0x5637dc37f400 session 0x5637d592d340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 452 heartbeat osd_stat(store_statfs(0x4f5845000/0x0/0x4ffc00000, data 0x4095998/0x4303000, compress 0x0/0x0/0x0, omap 0x6d136, meta 0x6042eca), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 452 ms_handle_reset con 0x5637d851a400 session 0x5637d89bda40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:43.227404+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180076544 unmapped: 70819840 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:44.227542+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.107464790s of 10.333559036s, submitted: 147
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 452 handle_osd_map epochs [453,453], i have 452, src has [1,453]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 453 ms_handle_reset con 0x5637d51fb000 session 0x5637d5f65340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 453 ms_handle_reset con 0x5637d69ca000 session 0x5637d6332c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 70778880 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:45.227688+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3370349 data_alloc: 218103808 data_used: 8718807
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 453 heartbeat osd_stat(store_statfs(0x4f5848000/0x0/0x4ffc00000, data 0x40959a8/0x4304000, compress 0x0/0x0/0x0, omap 0x6c8f6, meta 0x604370a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 453 ms_handle_reset con 0x5637dc37f400 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 453 ms_handle_reset con 0x5637d8384400 session 0x5637d6360fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180117504 unmapped: 70778880 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 454 ms_handle_reset con 0x5637d886f800 session 0x5637d842c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:46.227865+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 454 ms_handle_reset con 0x5637d851a400 session 0x5637d5566700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 454 ms_handle_reset con 0x5637d886f800 session 0x5637d79c9340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 454 ms_handle_reset con 0x5637d51fb000 session 0x5637d5f656c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180158464 unmapped: 70737920 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:47.228092+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 455 ms_handle_reset con 0x5637d69ca000 session 0x5637d78d3a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180174848 unmapped: 70721536 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 455 ms_handle_reset con 0x5637d8384400 session 0x5637d6344000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:48.228228+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 455 ms_handle_reset con 0x5637d8384400 session 0x5637d8d0c540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 455 ms_handle_reset con 0x5637d51fb000 session 0x5637d8062a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 455 heartbeat osd_stat(store_statfs(0x4f583b000/0x0/0x4ffc00000, data 0x40996ca/0x430d000, compress 0x0/0x0/0x0, omap 0x6c625, meta 0x60439db), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 456 ms_handle_reset con 0x5637d69ca000 session 0x5637d85521c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180215808 unmapped: 70680576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:49.228387+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 456 ms_handle_reset con 0x5637d851a400 session 0x5637d55661c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 456 ms_handle_reset con 0x5637d886f800 session 0x5637d592b340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 456 ms_handle_reset con 0x5637d51fb000 session 0x5637d76bfdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 456 handle_osd_map epochs [456,457], i have 456, src has [1,457]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 457 ms_handle_reset con 0x5637d69ca000 session 0x5637d6332c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180240384 unmapped: 70656000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:50.228548+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 457 ms_handle_reset con 0x5637d886f800 session 0x5637d5567c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3380998 data_alloc: 218103808 data_used: 8719939
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 457 ms_handle_reset con 0x5637d8384400 session 0x5637d5863880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 457 ms_handle_reset con 0x5637dc37f400 session 0x5637d651f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 ms_handle_reset con 0x5637dc37f400 session 0x5637d648ce00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180273152 unmapped: 70623232 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:51.228645+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 ms_handle_reset con 0x5637d851a400 session 0x5637d55bb6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180289536 unmapped: 70606848 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:52.228825+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 ms_handle_reset con 0x5637d51fb000 session 0x5637d6332380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 ms_handle_reset con 0x5637d69ca000 session 0x5637d78d2e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180289536 unmapped: 70606848 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:53.228977+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180289536 unmapped: 70606848 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:54.229096+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 heartbeat osd_stat(store_statfs(0x4f5837000/0x0/0x4ffc00000, data 0x40a00a4/0x4313000, compress 0x0/0x0/0x0, omap 0x6bb14, meta 0x60444ec), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.527160645s of 10.267199516s, submitted: 203
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 ms_handle_reset con 0x5637d8384400 session 0x5637d89bca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 ms_handle_reset con 0x5637d8384400 session 0x5637d5567340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180305920 unmapped: 70590464 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:55.229232+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3383514 data_alloc: 218103808 data_used: 8719923
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180314112 unmapped: 70582272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:56.229351+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 459 heartbeat osd_stat(store_statfs(0x4f5838000/0x0/0x4ffc00000, data 0x40a0106/0x4314000, compress 0x0/0x0/0x0, omap 0x6bc20, meta 0x60443e0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 459 handle_osd_map epochs [459,460], i have 459, src has [1,460]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 460 ms_handle_reset con 0x5637d851a400 session 0x5637d8552fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 460 ms_handle_reset con 0x5637d51fb000 session 0x5637d5228380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 460 heartbeat osd_stat(store_statfs(0x4f5833000/0x0/0x4ffc00000, data 0x40a1ba1/0x4317000, compress 0x0/0x0/0x0, omap 0x6bca6, meta 0x604435a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180330496 unmapped: 70565888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:57.229470+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 461 ms_handle_reset con 0x5637dc37f400 session 0x5637d5f42700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180346880 unmapped: 70549504 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:58.229586+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d886f800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 462 ms_handle_reset con 0x5637d886f800 session 0x5637d82501c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180355072 unmapped: 70541312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:59.229695+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 462 ms_handle_reset con 0x5637d69ca000 session 0x5637d52281c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 462 ms_handle_reset con 0x5637d8384400 session 0x5637d648c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 462 ms_handle_reset con 0x5637d51fb000 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 462 heartbeat osd_stat(store_statfs(0x4f5829000/0x0/0x4ffc00000, data 0x40a6f9d/0x4323000, compress 0x0/0x0/0x0, omap 0x6c18f, meta 0x6043e71), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180355072 unmapped: 70541312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:00.229856+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3399534 data_alloc: 218103808 data_used: 8720021
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:01.229995+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180363264 unmapped: 70533120 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 462 handle_osd_map epochs [463,463], i have 462, src has [1,463]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:02.230125+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180379648 unmapped: 70516736 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 463 ms_handle_reset con 0x5637d851a400 session 0x5637d8321a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:03.230250+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180387840 unmapped: 70508544 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:04.230387+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180387840 unmapped: 70508544 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 463 ms_handle_reset con 0x5637dbfc9400 session 0x5637d5228a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 463 handle_osd_map epochs [463,464], i have 464, src has [1,464]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.740383148s of 10.061109543s, submitted: 46
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:05.230642+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180404224 unmapped: 70492160 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f5821000/0x0/0x4ffc00000, data 0x40aa60c/0x4329000, compress 0x0/0x0/0x0, omap 0x6c1d2, meta 0x6043e2e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406042 data_alloc: 218103808 data_used: 8720976
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 464 ms_handle_reset con 0x5637dc37f400 session 0x5637d5567180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:06.230766+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180412416 unmapped: 70483968 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 464 ms_handle_reset con 0x5637dbfc9400 session 0x5637d842d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 464 ms_handle_reset con 0x5637d69ca000 session 0x5637d5790fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:07.230926+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180412416 unmapped: 70483968 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f5824000/0x0/0x4ffc00000, data 0x40aa5aa/0x4328000, compress 0x0/0x0/0x0, omap 0x6c1d2, meta 0x6043e2e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 464 heartbeat osd_stat(store_statfs(0x4f5825000/0x0/0x4ffc00000, data 0x40aa548/0x4327000, compress 0x0/0x0/0x0, omap 0x6c1d2, meta 0x6043e2e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 464 handle_osd_map epochs [464,465], i have 464, src has [1,465]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 465 ms_handle_reset con 0x5637d851a400 session 0x5637d5f64380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 465 ms_handle_reset con 0x5637d8384400 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:08.231068+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180436992 unmapped: 70459392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 465 ms_handle_reset con 0x5637d5eca800 session 0x5637d57901c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 465 ms_handle_reset con 0x5637d51fb000 session 0x5637d5791180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:09.231216+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180436992 unmapped: 70459392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:10.231439+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180436992 unmapped: 70459392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3406130 data_alloc: 218103808 data_used: 8720796
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:11.231649+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180436992 unmapped: 70459392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 465 heartbeat osd_stat(store_statfs(0x4f5823000/0x0/0x4ffc00000, data 0x40ac0f2/0x4329000, compress 0x0/0x0/0x0, omap 0x6c741, meta 0x60438bf), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:12.231817+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180445184 unmapped: 70451200 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 466 ms_handle_reset con 0x5637d69ca000 session 0x5637d7e8f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:13.231968+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 70434816 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:14.232386+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 70434816 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.827837944s of 10.016807556s, submitted: 77
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 467 ms_handle_reset con 0x5637d851a400 session 0x5637d78d7180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:15.232729+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180502528 unmapped: 70393856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3415843 data_alloc: 218103808 data_used: 8720894
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:16.232927+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180518912 unmapped: 70377472 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 467 heartbeat osd_stat(store_statfs(0x4f5819000/0x0/0x4ffc00000, data 0x40af809/0x4331000, compress 0x0/0x0/0x0, omap 0x6c8d3, meta 0x604372d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 467 ms_handle_reset con 0x5637dbfc9400 session 0x5637d89bc380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 467 ms_handle_reset con 0x5637dbfc9400 session 0x5637d6332380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:17.233113+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180527104 unmapped: 70369280 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 ms_handle_reset con 0x5637d51fb000 session 0x5637d525c000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:18.233254+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180576256 unmapped: 70320128 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 ms_handle_reset con 0x5637d5eca800 session 0x5637d5228380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:19.233390+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 ms_handle_reset con 0x5637d69ca000 session 0x5637d5567180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:20.233610+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417261 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 ms_handle_reset con 0x5637d851a400 session 0x5637d8250540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f5819000/0x0/0x4ffc00000, data 0x40b1278/0x4333000, compress 0x0/0x0/0x0, omap 0x6cb71, meta 0x604348f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:21.233734+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:22.233904+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:23.234052+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:24.234238+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f5819000/0x0/0x4ffc00000, data 0x40b1278/0x4333000, compress 0x0/0x0/0x0, omap 0x6cb71, meta 0x604348f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:25.234423+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3417261 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:26.234545+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:27.234665+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.408284187s of 13.097046852s, submitted: 115
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 ms_handle_reset con 0x5637d51fb000 session 0x5637d842ca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:28.234808+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 heartbeat osd_stat(store_statfs(0x4f5819000/0x0/0x4ffc00000, data 0x40b1278/0x4333000, compress 0x0/0x0/0x0, omap 0x6cbf7, meta 0x6043409), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:29.235023+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180600832 unmapped: 70295552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 469 ms_handle_reset con 0x5637d5eca800 session 0x5637d592c1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:30.235323+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180609024 unmapped: 70287360 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420755 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:31.235460+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180617216 unmapped: 70279168 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 469 heartbeat osd_stat(store_statfs(0x4f5814000/0x0/0x4ffc00000, data 0x40b2e14/0x4336000, compress 0x0/0x0/0x0, omap 0x6d05a, meta 0x6042fa6), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:32.235571+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180617216 unmapped: 70279168 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 469 ms_handle_reset con 0x5637d69ca000 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 469 ms_handle_reset con 0x5637dbfc9400 session 0x5637d58b5340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:33.235809+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 470 ms_handle_reset con 0x5637dc37f400 session 0x5637d5f65500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180641792 unmapped: 70254592 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 470 ms_handle_reset con 0x5637dc37f400 session 0x5637d82501c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:34.236010+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180641792 unmapped: 70254592 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 470 ms_handle_reset con 0x5637d51fb000 session 0x5637d8062a80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:35.236174+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180641792 unmapped: 70254592 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3423529 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:36.236349+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180641792 unmapped: 70254592 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 470 heartbeat osd_stat(store_statfs(0x4f5811000/0x0/0x4ffc00000, data 0x40b4a04/0x4339000, compress 0x0/0x0/0x0, omap 0x6d166, meta 0x6042e9a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 471 ms_handle_reset con 0x5637d5eca800 session 0x5637d856b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:37.236503+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 471 heartbeat osd_stat(store_statfs(0x4f580c000/0x0/0x4ffc00000, data 0x40b65bc/0x433c000, compress 0x0/0x0/0x0, omap 0x6d166, meta 0x6042e9a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180682752 unmapped: 70213632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 472 ms_handle_reset con 0x5637d69ca000 session 0x5637d6096700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:38.236721+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180682752 unmapped: 70213632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 472 heartbeat osd_stat(store_statfs(0x4f5809000/0x0/0x4ffc00000, data 0x40b803b/0x433f000, compress 0x0/0x0/0x0, omap 0x6d60c, meta 0x60429f4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:39.236868+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180690944 unmapped: 70205440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 472 handle_osd_map epochs [473,473], i have 472, src has [1,473]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.005830765s of 12.057798386s, submitted: 34
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:40.237090+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431851 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 ms_handle_reset con 0x5637dbfc9400 session 0x5637d592ba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:41.237283+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:42.237461+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:43.237665+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 ms_handle_reset con 0x5637dbfc9400 session 0x5637d89bda40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:44.237866+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f5808000/0x0/0x4ffc00000, data 0x40b9c2b/0x4342000, compress 0x0/0x0/0x0, omap 0x6d60c, meta 0x60429f4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:45.238132+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3431851 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f5808000/0x0/0x4ffc00000, data 0x40b9c2b/0x4342000, compress 0x0/0x0/0x0, omap 0x6d692, meta 0x604296e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:46.238301+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 ms_handle_reset con 0x5637d51fb000 session 0x5637d651f500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 heartbeat osd_stat(store_statfs(0x4f5808000/0x0/0x4ffc00000, data 0x40b9c2b/0x4342000, compress 0x0/0x0/0x0, omap 0x6d692, meta 0x604296e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:47.238441+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:48.238587+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180715520 unmapped: 70180864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 473 handle_osd_map epochs [473,474], i have 474, src has [1,474]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:49.238784+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180740096 unmapped: 70156288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f5805000/0x0/0x4ffc00000, data 0x40bb6aa/0x4345000, compress 0x0/0x0/0x0, omap 0x6d718, meta 0x60428e8), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:50.239008+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.367196083s of 10.483336449s, submitted: 15
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180772864 unmapped: 70123520 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3434625 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:51.239186+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180781056 unmapped: 70115328 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 474 ms_handle_reset con 0x5637d5eca800 session 0x5637d8062000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:52.239368+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180781056 unmapped: 70115328 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 474 heartbeat osd_stat(store_statfs(0x4f5807000/0x0/0x4ffc00000, data 0x40bb6aa/0x4345000, compress 0x0/0x0/0x0, omap 0x6d9f9, meta 0x6042607), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:53.239498+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180781056 unmapped: 70115328 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 475 ms_handle_reset con 0x5637d69ca000 session 0x5637d636e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:54.239683+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180797440 unmapped: 70098944 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 475 heartbeat osd_stat(store_statfs(0x4f5801000/0x0/0x4ffc00000, data 0x40bd2a8/0x4349000, compress 0x0/0x0/0x0, omap 0x6dac2, meta 0x604253e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 475 ms_handle_reset con 0x5637dc37f400 session 0x5637d5863500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 475 heartbeat osd_stat(store_statfs(0x4f5801000/0x0/0x4ffc00000, data 0x40bd2a8/0x4349000, compress 0x0/0x0/0x0, omap 0x6dac2, meta 0x604253e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:55.239888+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180797440 unmapped: 70098944 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 476 ms_handle_reset con 0x5637dc37f400 session 0x5637d5791a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3443192 data_alloc: 218103808 data_used: 8721561
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:56.240142+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 476 ms_handle_reset con 0x5637d51fb000 session 0x5637d5566700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180436992 unmapped: 70459392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 476 ms_handle_reset con 0x5637d5eca800 session 0x5637d8d0ce00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:57.240353+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180453376 unmapped: 70443008 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 476 heartbeat osd_stat(store_statfs(0x4f5801000/0x0/0x4ffc00000, data 0x40bee36/0x434b000, compress 0x0/0x0/0x0, omap 0x6e41e, meta 0x6041be2), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:58.240558+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180453376 unmapped: 70443008 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:59.240756+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180453376 unmapped: 70443008 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 476 ms_handle_reset con 0x5637d69ca000 session 0x5637d63de700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:00.240960+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180453376 unmapped: 70443008 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3442248 data_alloc: 218103808 data_used: 8722174
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc9400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.632107735s of 10.919322014s, submitted: 77
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:01.241090+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180453376 unmapped: 70443008 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 477 ms_handle_reset con 0x5637dbfc9400 session 0x5637d79c8c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:02.241243+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 180461568 unmapped: 70434816 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 477 heartbeat osd_stat(store_statfs(0x4f57fb000/0x0/0x4ffc00000, data 0x40c0a50/0x434f000, compress 0x0/0x0/0x0, omap 0x6e4a4, meta 0x6041b5c), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 477 ms_handle_reset con 0x5637d51fb000 session 0x5637d6345500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 477 handle_osd_map epochs [477,478], i have 478, src has [1,478]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 478 ms_handle_reset con 0x5637d5eca800 session 0x5637d592a000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:03.241384+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181182464 unmapped: 69713920 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:04.241559+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 478 ms_handle_reset con 0x5637d69ca000 session 0x5637d5791a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181182464 unmapped: 69713920 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 478 ms_handle_reset con 0x5637dc37f400 session 0x5637d636e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 478 heartbeat osd_stat(store_statfs(0x4f57f8000/0x0/0x4ffc00000, data 0x40c2678/0x4352000, compress 0x0/0x0/0x0, omap 0x6e52a, meta 0x6041ad6), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:05.241709+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 69705728 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447903 data_alloc: 218103808 data_used: 8722759
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8384c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 478 ms_handle_reset con 0x5637d8384c00 session 0x5637d63ea380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:06.241870+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 69705728 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:07.242006+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 69705728 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 479 ms_handle_reset con 0x5637d51fb000 session 0x5637d7e8f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:08.242134+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 69705728 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 479 handle_osd_map epochs [480,480], i have 479, src has [1,480]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 480 ms_handle_reset con 0x5637d5eca800 session 0x5637d525c000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:09.242310+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 69705728 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 480 ms_handle_reset con 0x5637d69ca000 session 0x5637d58b5340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:10.242537+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 480 heartbeat osd_stat(store_statfs(0x4f57f3000/0x0/0x4ffc00000, data 0x40c5c4d/0x4357000, compress 0x0/0x0/0x0, omap 0x6eb1f, meta 0x60414e1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 480 handle_osd_map epochs [481,481], i have 480, src has [1,481]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 480 handle_osd_map epochs [481,481], i have 481, src has [1,481]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 69681152 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3457671 data_alloc: 218103808 data_used: 8722759
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:11.242706+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181215232 unmapped: 69681152 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 ms_handle_reset con 0x5637dc37f400 session 0x5637d78d36c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fbc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.916933060s of 11.016059875s, submitted: 74
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:12.242827+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 ms_handle_reset con 0x5637d51fbc00 session 0x5637d856b500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181223424 unmapped: 69672960 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 heartbeat osd_stat(store_statfs(0x4f57ee000/0x0/0x4ffc00000, data 0x40c783d/0x435a000, compress 0x0/0x0/0x0, omap 0x6eb1f, meta 0x60414e1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 ms_handle_reset con 0x5637d51fb000 session 0x5637d8320e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:13.242963+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181149696 unmapped: 69746688 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:14.243162+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181149696 unmapped: 69746688 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 ms_handle_reset con 0x5637d5eca800 session 0x5637d89bd880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 ms_handle_reset con 0x5637d69ca000 session 0x5637d57addc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:15.243372+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181157888 unmapped: 69738496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456615 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:16.243527+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 ms_handle_reset con 0x5637dc37f400 session 0x5637d592ba40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181157888 unmapped: 69738496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:17.243702+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181157888 unmapped: 69738496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 ms_handle_reset con 0x5637d7c73000 session 0x5637d842ca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:18.243855+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57f1000/0x0/0x4ffc00000, data 0x40c789f/0x435b000, compress 0x0/0x0/0x0, omap 0x6ed37, meta 0x60412c9), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181174272 unmapped: 69722112 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637d7c73000 session 0x5637d89bc380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:19.243982+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 69705728 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637d51fb000 session 0x5637d8062000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:20.244192+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181190656 unmapped: 69705728 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463569 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ed000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6ec35, meta 0x60413cb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637d5eca800 session 0x5637d842ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:21.244326+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637d69ca000 session 0x5637d5791180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181231616 unmapped: 69664768 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6ec35, meta 0x60413cb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:22.244490+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f09b, meta 0x6040f65), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.293224335s of 10.222789764s, submitted: 63
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637dc37f400 session 0x5637d5566c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181231616 unmapped: 69664768 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:23.244631+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181231616 unmapped: 69664768 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637dc37f400 session 0x5637d8a59500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637d51fb000 session 0x5637d76bfdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:24.244811+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:25.244956+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463237 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:26.245078+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:27.245207+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:28.245342+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:29.245550+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:30.245729+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463237 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:31.245896+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:32.246104+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:33.246420+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:34.246577+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:35.246713+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463237 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:36.246922+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:37.247113+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:38.247301+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:39.247447+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:40.247656+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463237 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:41.247830+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:42.247997+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:43.248119+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181239808 unmapped: 69656576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:44.248276+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181248000 unmapped: 69648384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:45.248478+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181248000 unmapped: 69648384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463237 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:46.248628+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181248000 unmapped: 69648384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:47.248754+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181248000 unmapped: 69648384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:48.248886+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181256192 unmapped: 69640192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:49.249016+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181256192 unmapped: 69640192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:50.249200+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181256192 unmapped: 69640192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463237 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:51.249455+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181264384 unmapped: 69632000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:52.249602+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ef000/0x0/0x4ffc00000, data 0x40c92bc/0x435d000, compress 0x0/0x0/0x0, omap 0x6f1a7, meta 0x6040e59), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 69623808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:53.249782+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 69623808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:54.249918+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 69623808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:55.250124+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 69623808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3463237 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:56.250311+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 69623808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 34.463130951s of 34.496303558s, submitted: 19
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 ms_handle_reset con 0x5637d5eca800 session 0x5637d8250700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:57.250451+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 69623808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:58.250595+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 heartbeat osd_stat(store_statfs(0x4f57ee000/0x0/0x4ffc00000, data 0x40c931e/0x435e000, compress 0x0/0x0/0x0, omap 0x6f488, meta 0x6040b78), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181272576 unmapped: 69623808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:59.250736+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637d69ca000 session 0x5637d5736fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181297152 unmapped: 69599232 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8385400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:00.250894+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637d8385400 session 0x5637d82501c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637d7c73000 session 0x5637d63de700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 181313536 unmapped: 69582848 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3472429 data_alloc: 218103808 data_used: 8723372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:01.251082+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637d51fb000 session 0x5637d525d500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637d5eca800 session 0x5637d636fdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 68517888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:02.251228+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 182378496 unmapped: 68517888 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc37f400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637d69ca000 session 0x5637d8552fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637dc37f400 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 heartbeat osd_stat(store_statfs(0x4f57ea000/0x0/0x4ffc00000, data 0x40caec9/0x4362000, compress 0x0/0x0/0x0, omap 0x6f38c, meta 0x6040c74), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 ms_handle_reset con 0x5637d69ca000 session 0x5637d8d0c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:03.251418+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 483 handle_osd_map epochs [483,484], i have 484, src has [1,484]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 ms_handle_reset con 0x5637d5eca800 session 0x5637d8618540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 ms_handle_reset con 0x5637d7c73000 session 0x5637d6360fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f57ea000/0x0/0x4ffc00000, data 0x40caec9/0x4362000, compress 0x0/0x0/0x0, omap 0x6f3cf, meta 0x6040c31), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 ms_handle_reset con 0x5637d51fb000 session 0x5637d5863500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183025664 unmapped: 67870720 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 ms_handle_reset con 0x5637d8515000 session 0x5637d76be1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 ms_handle_reset con 0x5637d51fb000 session 0x5637d525c000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:04.251588+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183033856 unmapped: 67862528 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:05.251750+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183033856 unmapped: 67862528 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3470819 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:06.251926+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183033856 unmapped: 67862528 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:07.252100+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183033856 unmapped: 67862528 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:08.252235+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 heartbeat osd_stat(store_statfs(0x4f57e7000/0x0/0x4ffc00000, data 0x40cca48/0x4363000, compress 0x0/0x0/0x0, omap 0x6f002, meta 0x6040ffe), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.975301743s of 11.530840874s, submitted: 113
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183050240 unmapped: 67846144 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:09.252369+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183050240 unmapped: 67846144 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:10.252536+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 67837952 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474653 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:11.252710+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 67837952 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:12.252846+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d5eca800 session 0x5637d8552c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183058432 unmapped: 67837952 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:13.252982+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f57e6000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6f571, meta 0x6040a8f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 67829760 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:14.253125+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 67829760 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:15.253282+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183066624 unmapped: 67829760 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f57e6000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6f571, meta 0x6040a8f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474430 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:16.253472+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:17.253674+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:18.254083+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:19.254235+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:20.254545+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474430 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:21.254720+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f57e6000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6f571, meta 0x6040a8f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:22.254890+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f57e6000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6f571, meta 0x6040a8f), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:23.255109+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183074816 unmapped: 67821568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.555403709s of 15.575470924s, submitted: 19
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d69ca000 session 0x5637d78d7180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7c73000 session 0x5637d5863c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:24.255243+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 200171520 unmapped: 50724864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:25.255379+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191791104 unmapped: 59105280 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704458 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:26.255567+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2d9f000/0x0/0x4ffc00000, data 0x6b154c7/0x6dad000, compress 0x0/0x0/0x0, omap 0x6f3d9, meta 0x6040c27), peers [0,2] op hist [0,0,0,0,0,0,1,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d851a800 session 0x5637d6345500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 67469312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d851a800 session 0x5637d85528c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:27.255693+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 67469312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:28.255830+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 67469312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:29.256019+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 67469312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:30.256303+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183427072 unmapped: 67469312 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744782 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:31.257129+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f259f000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6f3d9, meta 0x6040c27), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183435264 unmapped: 67461120 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:32.258296+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183435264 unmapped: 67461120 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:33.258502+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183435264 unmapped: 67461120 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:34.258625+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183435264 unmapped: 67461120 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:35.258875+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183435264 unmapped: 67461120 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3744782 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:36.259078+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f259f000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6f3d9, meta 0x6040c27), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 67452928 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:37.259314+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f259f000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6f3d9, meta 0x6040c27), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183443456 unmapped: 67452928 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:38.259478+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183451648 unmapped: 67444736 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:39.259630+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.512770653s of 15.711600304s, submitted: 32
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183459840 unmapped: 67436544 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d51fb000 session 0x5637d5791880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:40.259788+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183468032 unmapped: 67428352 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3746469 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:41.259901+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f259f000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6eb13, meta 0x60414ed), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183476224 unmapped: 67420160 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:42.260067+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183476224 unmapped: 67420160 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:43.260172+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183476224 unmapped: 67420160 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:44.260288+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:45.260402+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764005 data_alloc: 234881024 data_used: 11257340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:46.260550+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:47.260703+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f259f000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6eb13, meta 0x60414ed), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:48.260860+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:49.261017+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:50.261228+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3764005 data_alloc: 234881024 data_used: 11257340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:51.261355+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f259f000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6eb13, meta 0x60414ed), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:52.261487+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:53.261592+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:54.261724+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 183500800 unmapped: 67395584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:55.261831+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.951075554s of 16.027555466s, submitted: 6
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195903488 unmapped: 54992896 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3793605 data_alloc: 234881024 data_used: 12284924
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:56.261987+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201023488 unmapped: 49872896 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:57.262683+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201506816 unmapped: 49389568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f0eaf000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6eb13, meta 0x71e14ed), peers [0,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:58.262837+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f0e7f000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6eb13, meta 0x71e14ed), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 193683456 unmapped: 57212928 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:59.262971+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 193683456 unmapped: 57212928 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:00.263198+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 193740800 unmapped: 57155584 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3781573 data_alloc: 234881024 data_used: 13108732
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:01.263325+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13cf000/0x0/0x4ffc00000, data 0x73154c7/0x75ad000, compress 0x0/0x0/0x0, omap 0x6eb13, meta 0x71e14ed), peers [0,2] op hist [0,0,0,0,0,1,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 56090624 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:02.263454+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 193757184 unmapped: 57139200 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:03.263607+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 194805760 unmapped: 56090624 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:04.263747+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 194281472 unmapped: 56614912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:05.263917+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195846144 unmapped: 55050240 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 4.046601295s of 10.278201103s, submitted: 159
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3931141 data_alloc: 234881024 data_used: 13379068
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:06.264086+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d5eca800 session 0x5637d636fdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d69ca000 session 0x5637d5263340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa00000/0x0/0x4ffc00000, data 0x8ce44c7/0x8f7c000, compress 0x0/0x0/0x0, omap 0x6eb13, meta 0x71e14ed), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195854336 unmapped: 55042048 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7c73000 session 0x5637d8251c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:07.264262+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:08.264430+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:09.264637+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:10.264861+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3926942 data_alloc: 234881024 data_used: 13379068
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:11.265107+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:12.265240+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa30000/0x0/0x4ffc00000, data 0x8ce44c7/0x8f7c000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:13.265369+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:14.265486+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:15.265663+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3926942 data_alloc: 234881024 data_used: 13379068
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:16.265800+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:17.265993+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa30000/0x0/0x4ffc00000, data 0x8ce44c7/0x8f7c000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:18.266174+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:19.266308+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195592192 unmapped: 55304192 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:20.266463+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195600384 unmapped: 55296000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3926942 data_alloc: 234881024 data_used: 13379068
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:21.266583+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195600384 unmapped: 55296000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:22.266766+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195600384 unmapped: 55296000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:23.266907+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa30000/0x0/0x4ffc00000, data 0x8ce44c7/0x8f7c000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195600384 unmapped: 55296000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:24.267157+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195600384 unmapped: 55296000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:25.267379+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa30000/0x0/0x4ffc00000, data 0x8ce44c7/0x8f7c000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195600384 unmapped: 55296000 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3926942 data_alloc: 234881024 data_used: 13379068
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:26.267518+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195608576 unmapped: 55287808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c73000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7c73000 session 0x5637dbdf41c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:27.267646+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195608576 unmapped: 55287808 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:28.267861+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa30000/0x0/0x4ffc00000, data 0x8ce44c7/0x8f7c000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d51fb000 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195616768 unmapped: 55279616 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:29.268076+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d5eca800 session 0x5637d63eb880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 23.270851135s of 23.310892105s, submitted: 24
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d69ca000 session 0x5637d76bee00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd1f000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:30.268274+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:31.268778+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3933057 data_alloc: 234881024 data_used: 13379068
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:32.268950+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:33.269174+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:34.269516+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:35.269691+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:36.269933+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3934593 data_alloc: 234881024 data_used: 13496316
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:37.270105+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:38.270164+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:39.270289+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:40.270426+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:41.270569+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3934593 data_alloc: 234881024 data_used: 13496316
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:42.270704+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195780608 unmapped: 55115776 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:43.270836+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.049685478s of 14.067821503s, submitted: 8
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195796992 unmapped: 55099392 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:44.270989+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 55001088 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:45.271405+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 55001088 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:46.271542+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3956385 data_alloc: 234881024 data_used: 15347708
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 55001088 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:47.271718+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 55001088 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:48.271847+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 195895296 unmapped: 55001088 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:49.272108+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:50.272336+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:51.272516+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3956817 data_alloc: 234881024 data_used: 15344636
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:52.272767+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:53.272948+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:54.273097+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:55.273263+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.849946976s of 11.866361618s, submitted: 7
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:56.273429+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3956241 data_alloc: 234881024 data_used: 15344636
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:57.273636+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196009984 unmapped: 54886400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:58.273762+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa0b000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196042752 unmapped: 54853632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:59.273981+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196042752 unmapped: 54853632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:00.274256+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 54820864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:01.274444+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3956337 data_alloc: 234881024 data_used: 15315964
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 54820864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:02.274654+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa03000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 54820864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:03.274839+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 54820864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:04.275020+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa03000/0x0/0x4ffc00000, data 0x8d084d7/0x8fa1000, compress 0x0/0x0/0x0, omap 0x6e6b0, meta 0x71e1950), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196075520 unmapped: 54820864 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:05.275244+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196083712 unmapped: 54812672 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:06.275437+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3956337 data_alloc: 234881024 data_used: 15315964
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196083712 unmapped: 54812672 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:07.275616+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.331003189s of 12.350900650s, submitted: 9
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d851a800 session 0x5637d6096700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637dbd1f000 session 0x5637d5790380
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d851a800 session 0x5637d6332c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196476928 unmapped: 54419456 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:08.275789+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efa30000/0x0/0x4ffc00000, data 0x8ce44c7/0x8f7c000, compress 0x0/0x0/0x0, omap 0x6e24d, meta 0x71e1db3), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196476928 unmapped: 54419456 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:09.275934+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196476928 unmapped: 54419456 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:10.276121+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196476928 unmapped: 54419456 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:11.276366+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3952544 data_alloc: 234881024 data_used: 16456700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 196476928 unmapped: 54419456 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:12.276536+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d51fb000 session 0x5637d5736fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5eca800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 192012288 unmapped: 58884096 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:13.276664+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d5eca800 session 0x5637d6332000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4646000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6e2d3, meta 0x71e1d2d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:14.276836+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets getting new tickets!
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:15.277225+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _finish_auth 0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:15.278414+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:16.277441+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504380 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4646000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6e2d3, meta 0x71e1d2d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:17.277677+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:18.277832+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:19.278015+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:20.278334+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4646000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6e2d3, meta 0x71e1d2d), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:21.278533+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191823872 unmapped: 59072512 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3504380 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.857537270s of 14.021730423s, submitted: 46
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d69ca000 session 0x5637d5f42700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d69ca000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d69ca000 session 0x5637d6333500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:22.278710+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 59359232 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:23.278884+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191537152 unmapped: 59359232 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc ms_handle_reset ms_handle_reset con 0x5637d88e6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3514601685
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3514601685,v1:192.168.122.100:6801/3514601685]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: get_auth_request con 0x5637d8515000 auth_method 0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:24.279246+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:25.279412+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d5770c00 session 0x5637d5790e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b4b800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4646000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6e316, meta 0x71e1cea), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d89fc400 session 0x5637d55bb500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d58cc000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d6511400 session 0x5637d5862000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57e7400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:26.279610+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3506428 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:27.279782+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4646000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6e316, meta 0x71e1cea), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:28.279970+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8515800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:29.280140+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d8515800 session 0x5637d5e4b340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:30.280348+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:31.280510+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508180 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:32.280661+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4645000/0x0/0x4ffc00000, data 0x40ce4d7/0x4367000, compress 0x0/0x0/0x0, omap 0x6e39c, meta 0x71e1c64), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:33.280903+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:34.281245+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:35.281513+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4645000/0x0/0x4ffc00000, data 0x40ce4d7/0x4367000, compress 0x0/0x0/0x0, omap 0x6e39c, meta 0x71e1c64), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:36.281656+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508180 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4645000/0x0/0x4ffc00000, data 0x40ce4d7/0x4367000, compress 0x0/0x0/0x0, omap 0x6e39c, meta 0x71e1c64), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:37.281799+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:38.281987+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:39.282183+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:40.282408+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:41.282584+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4645000/0x0/0x4ffc00000, data 0x40ce4d7/0x4367000, compress 0x0/0x0/0x0, omap 0x6e39c, meta 0x71e1c64), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508180 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:42.282770+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:43.282946+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f4645000/0x0/0x4ffc00000, data 0x40ce4d7/0x4367000, compress 0x0/0x0/0x0, omap 0x6e39c, meta 0x71e1c64), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57cc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d57cc800 session 0x5637d7e8f340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7cb7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.458963394s of 22.483478546s, submitted: 7
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7cb7800 session 0x5637d8553500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:44.283086+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191488000 unmapped: 59408384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7cb6000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:45.283231+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7cb6000 session 0x5637d57ac8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57cc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190734336 unmapped: 60162048 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d57cc800 session 0x5637d592ddc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:46.283444+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13fd000/0x0/0x4ffc00000, data 0x7315539/0x75af000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782262 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:47.283612+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13fd000/0x0/0x4ffc00000, data 0x7315539/0x75af000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:48.283736+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:49.283851+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:50.284025+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13fd000/0x0/0x4ffc00000, data 0x7315539/0x75af000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:51.284227+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3782262 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:52.284415+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:53.284593+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:54.284767+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637dbfc0800 session 0x5637d6360c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13fd000/0x0/0x4ffc00000, data 0x7315539/0x75af000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13fd000/0x0/0x4ffc00000, data 0x7315539/0x75af000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:55.284947+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190742528 unmapped: 60153856 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79acc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d79acc00 session 0x5637d63de700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:56.285107+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d80b6400 session 0x5637d79c8c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8434400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.628093719s of 12.246969223s, submitted: 63
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 190873600 unmapped: 60022784 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d8434400 session 0x5637d5790000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3786535 data_alloc: 218103808 data_used: 8199084
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13d8000/0x0/0x4ffc00000, data 0x733955c/0x75d4000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:57.285285+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191176704 unmapped: 59719680 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57cc800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d79acc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:58.285503+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191193088 unmapped: 59703296 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:59.285689+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191193088 unmapped: 59703296 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:00.285955+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191193088 unmapped: 59703296 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:01.286119+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810987 data_alloc: 234881024 data_used: 12360204
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13d8000/0x0/0x4ffc00000, data 0x733955c/0x75d4000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:02.286249+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:03.286424+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:04.286546+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:05.286701+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f13d8000/0x0/0x4ffc00000, data 0x733955c/0x75d4000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x71e1fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:06.286867+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3810987 data_alloc: 234881024 data_used: 12360204
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:07.287230+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d89fd400 session 0x5637d52628c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7cba000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:08.288249+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:09.288558+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:10.288999+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 191397888 unmapped: 59498496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.352595329s of 14.367179871s, submitted: 6
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:11.289127+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4efd78000/0x0/0x4ffc00000, data 0x733955c/0x75d4000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 209584128 unmapped: 41312256 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3850767 data_alloc: 234881024 data_used: 13822062
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:12.289397+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204275712 unmapped: 46620672 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4ef32d000/0x0/0x4ffc00000, data 0x824455c/0x84df000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:13.289689+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204382208 unmapped: 46514176 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:14.289893+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eeb96000/0x0/0x4ffc00000, data 0x89db55c/0x8c76000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204382208 unmapped: 46514176 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:15.290434+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204382208 unmapped: 46514176 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:16.290791+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204390400 unmapped: 46505984 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3964231 data_alloc: 234881024 data_used: 14809612
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:17.291332+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204390400 unmapped: 46505984 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:18.291517+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204390400 unmapped: 46505984 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:19.291726+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204390400 unmapped: 46505984 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eeb96000/0x0/0x4ffc00000, data 0x89db55c/0x8c76000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:20.292020+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204390400 unmapped: 46505984 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:21.292253+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204390400 unmapped: 46505984 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3964231 data_alloc: 234881024 data_used: 14809612
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.855288506s of 11.385409355s, submitted: 203
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:22.292458+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d57cc800 session 0x5637d651e000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d79acc00 session 0x5637d5790e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204398592 unmapped: 46497792 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd21400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637dbd21400 session 0x5637d79c9340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:23.292643+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:24.292811+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:25.293073+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b7539/0x8c51000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:26.293257+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957411 data_alloc: 234881024 data_used: 14737932
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:27.293437+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:28.293568+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b7539/0x8c51000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:29.293737+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b7539/0x8c51000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:30.293918+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:31.294105+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957411 data_alloc: 234881024 data_used: 14737932
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:32.294254+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:33.294522+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:34.294691+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:35.294831+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b7539/0x8c51000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:36.295027+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3957411 data_alloc: 234881024 data_used: 14737932
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:37.295228+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:38.295401+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:39.295562+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:40.295768+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.107410431s of 18.151552200s, submitted: 22
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d51fb400 session 0x5637d57addc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204406784 unmapped: 46489600 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7ac00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:41.295920+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3958656 data_alloc: 234881024 data_used: 14741993
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:42.296132+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:43.296286+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:44.296423+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:45.296587+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:46.296805+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3959808 data_alloc: 234881024 data_used: 14844905
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:47.296971+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:48.297155+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:49.297375+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:50.297566+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:51.297728+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6e045, meta 0x8381fbb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3959808 data_alloc: 234881024 data_used: 14844905
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:52.297859+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:53.297974+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.175804138s of 13.185276031s, submitted: 5
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204414976 unmapped: 46481408 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:54.298089+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:55.298209+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:56.298352+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3972372 data_alloc: 234881024 data_used: 16192489
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:57.298507+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:58.298648+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:59.298816+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:00.298985+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:01.299132+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3972372 data_alloc: 234881024 data_used: 16192489
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:02.299295+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:03.299450+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:04.299684+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:05.299852+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:06.300019+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3972372 data_alloc: 234881024 data_used: 16192489
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:07.300227+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:08.300419+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204578816 unmapped: 46317568 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:09.301417+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.802107811s of 15.814850807s, submitted: 6
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebb2000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:10.304101+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebb2000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:11.306469+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3972340 data_alloc: 234881024 data_used: 16159721
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:12.306908+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:13.308192+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:14.309760+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:15.311370+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebb2000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:16.311577+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3972340 data_alloc: 234881024 data_used: 16159721
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:17.311768+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:18.312740+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:19.313596+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:20.313799+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 204660736 unmapped: 46235648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:21.314525+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebb2000/0x0/0x4ffc00000, data 0x89b755c/0x8c52000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.516518593s of 12.523574829s, submitted: 7
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7c7ac00 session 0x5637d76be8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205529088 unmapped: 45367296 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3980252 data_alloc: 234881024 data_used: 18359273
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d51fb400 session 0x5637d6333dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:22.314754+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 45350912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:23.315843+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 45350912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:24.316129+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 45350912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b7539/0x8c51000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:25.316348+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 45350912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:26.316527+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 45350912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3979704 data_alloc: 234881024 data_used: 18355177
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:27.316722+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 45350912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:28.316895+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4eebba000/0x0/0x4ffc00000, data 0x89b7539/0x8c51000, compress 0x0/0x0/0x0, omap 0x6dbe2, meta 0x838241e), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205545472 unmapped: 45350912 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:29.317075+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80bb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d80bb800 session 0x5637d651f880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7b800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7c7b800 session 0x5637d6344000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:30.317249+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:31.317489+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:32.317670+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:33.317857+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:34.318212+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:35.318417+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:36.319234+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:37.319471+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203866112 unmapped: 47030272 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:38.319639+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:39.319805+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:40.320022+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:41.320300+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:42.320440+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:43.320631+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:44.320805+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:45.320987+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:46.321200+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:47.321324+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203218944 unmapped: 47677440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:48.321498+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:49.321627+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:50.321760+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:51.321894+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:52.322084+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:53.322176+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:54.322314+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:55.322474+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:56.322601+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:57.322775+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:58.322899+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:59.323056+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:00.323209+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:01.323325+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:02.323483+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:03.323635+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:04.323738+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:05.323884+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:06.324113+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:07.324233+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:08.324351+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:09.324512+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:10.324771+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:11.324968+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:12.325139+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:13.325319+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:14.325506+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:15.325663+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:16.325848+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:17.325997+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:18.326127+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:19.326356+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203227136 unmapped: 47669248 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:20.326549+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203235328 unmapped: 47661056 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:21.326689+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8514800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d8514800 session 0x5637d5737a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8613400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d8613400 session 0x5637d57acc40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d51fb400 session 0x5637d79c9a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 203235328 unmapped: 47661056 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3538712 data_alloc: 218103808 data_used: 8207206
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7c7b800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d7c7b800 session 0x5637d8251c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:22.326820+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80bb800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 60.492084503s of 60.603988647s, submitted: 67
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d80bb800 session 0x5637d592a000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f62000/0x0/0x4ffc00000, data 0x40ce4c7/0x4366000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 200966144 unmapped: 49930240 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:23.326976+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 200966144 unmapped: 49930240 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:24.327157+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 200966144 unmapped: 49930240 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:25.327340+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d5814800 session 0x5637d76bee00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 200966144 unmapped: 49930240 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:26.327514+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f34000/0x0/0x4ffc00000, data 0x46404c7/0x48d8000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dc147000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637dc147000 session 0x5637d592da40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f34000/0x0/0x4ffc00000, data 0x46404c7/0x48d8000, compress 0x0/0x0/0x0, omap 0x6d805, meta 0x83827fb), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 200966144 unmapped: 49930240 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3579248 data_alloc: 218103808 data_used: 8211204
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:27.327768+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d51fb400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d51fb400 session 0x5637d5862700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5814800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d5814800 session 0x5637d85521c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:28.327918+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201056256 unmapped: 49840128 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f08000/0x0/0x4ffc00000, data 0x466a4fa/0x4904000, compress 0x0/0x0/0x0, omap 0x6da60, meta 0x83825a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f08000/0x0/0x4ffc00000, data 0x466a4fa/0x4904000, compress 0x0/0x0/0x0, omap 0x6da60, meta 0x83825a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:29.328078+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201056256 unmapped: 49840128 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:30.328268+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:31.328396+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:32.328550+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614090 data_alloc: 234881024 data_used: 12698372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:33.328669+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:34.328793+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f08000/0x0/0x4ffc00000, data 0x466a4fa/0x4904000, compress 0x0/0x0/0x0, omap 0x6da60, meta 0x83825a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:35.328926+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:36.329098+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:37.329263+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3614090 data_alloc: 234881024 data_used: 12698372
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:38.329394+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2f08000/0x0/0x4ffc00000, data 0x466a4fa/0x4904000, compress 0x0/0x0/0x0, omap 0x6da60, meta 0x83825a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:39.329616+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 201637888 unmapped: 49258496 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.463516235s of 17.626920700s, submitted: 28
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:40.329763+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205488128 unmapped: 45408256 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:41.329896+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f293e000/0x0/0x4ffc00000, data 0x4c2b4fa/0x4ec5000, compress 0x0/0x0/0x0, omap 0x6da60, meta 0x83825a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:42.330091+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662488 data_alloc: 234881024 data_used: 13193988
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:43.330239+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:44.330359+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:45.330531+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f293e000/0x0/0x4ffc00000, data 0x4c2b4fa/0x4ec5000, compress 0x0/0x0/0x0, omap 0x6da60, meta 0x83825a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:46.330875+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:47.331011+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3662488 data_alloc: 234881024 data_used: 13193988
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f293e000/0x0/0x4ffc00000, data 0x4c2b4fa/0x4ec5000, compress 0x0/0x0/0x0, omap 0x6da60, meta 0x83825a0), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:48.331135+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205709312 unmapped: 45187072 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851b000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 ms_handle_reset con 0x5637d851b000 session 0x5637d89bc1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:49.331309+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205725696 unmapped: 45170688 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2946000/0x0/0x4ffc00000, data 0x4c2b50a/0x4ec6000, compress 0x0/0x0/0x0, omap 0x6dae6, meta 0x838251a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:50.331503+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205725696 unmapped: 45170688 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f2946000/0x0/0x4ffc00000, data 0x4c2b50a/0x4ec6000, compress 0x0/0x0/0x0, omap 0x6dae6, meta 0x838251a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:51.331645+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205725696 unmapped: 45170688 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5771000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:52.331757+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 45039616 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3660064 data_alloc: 234881024 data_used: 13193988
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:53.331876+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 45039616 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.316289902s of 13.863844872s, submitted: 73
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:54.332012+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205856768 unmapped: 45039616 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 heartbeat osd_stat(store_statfs(0x4f292b000/0x0/0x4ffc00000, data 0x4c4650a/0x4ee1000, compress 0x0/0x0/0x0, omap 0x6dae6, meta 0x838251a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b7800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:55.332139+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205987840 unmapped: 44908544 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 486 ms_handle_reset con 0x5637d80b7800 session 0x5637d78d7180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:56.332306+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205684736 unmapped: 45211648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 486 heartbeat osd_stat(store_statfs(0x4f292b000/0x0/0x4ffc00000, data 0x4c4650a/0x4ee1000, compress 0x0/0x0/0x0, omap 0x6dae6, meta 0x838251a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:57.332452+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205684736 unmapped: 45211648 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666286 data_alloc: 234881024 data_used: 13198100
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7cb6800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 486 handle_osd_map epochs [486,487], i have 486, src has [1,487]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 487 ms_handle_reset con 0x5637d7cb6800 session 0x5637d8d0c700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:58.332587+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 487 heartbeat osd_stat(store_statfs(0x4f2921000/0x0/0x4ffc00000, data 0x4c49c42/0x4ee7000, compress 0x0/0x0/0x0, omap 0x6dae6, meta 0x838251a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:59.332751+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:00.332956+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d1800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d57d1800 session 0x5637d636e1c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:01.333091+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:02.333221+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d5771000 session 0x5637d5567500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3673430 data_alloc: 234881024 data_used: 13198100
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:03.333388+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f291f000/0x0/0x4ffc00000, data 0x4c4b840/0x4eeb000, compress 0x0/0x0/0x0, omap 0x6dae6, meta 0x838251a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637da271c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d87af800 session 0x5637d8250700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637da271c00 session 0x5637d8320700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:04.333529+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f291f000/0x0/0x4ffc00000, data 0x4c4b840/0x4eeb000, compress 0x0/0x0/0x0, omap 0x6dae6, meta 0x838251a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:05.333721+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:06.333865+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205815808 unmapped: 45080576 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbd20400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:07.333985+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 205824000 unmapped: 45072384 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674318 data_alloc: 234881024 data_used: 13198100
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:08.334105+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.361006737s of 14.408386230s, submitted: 10
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206651392 unmapped: 44244992 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f2899000/0x0/0x4ffc00000, data 0x4cd2850/0x4f73000, compress 0x0/0x0/0x0, omap 0x6db6c, meta 0x8382494), peers [0,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:09.334236+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207740928 unmapped: 43155456 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:10.334398+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207749120 unmapped: 43147264 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637dbd20400 session 0x5637d5737dc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:11.334542+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207749120 unmapped: 43147264 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288c000/0x0/0x4ffc00000, data 0x4cde850/0x4f7f000, compress 0x0/0x0/0x0, omap 0x6db6c, meta 0x8382494), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5771000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d57d1800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d57d1800 session 0x5637d525ca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d5771000 session 0x5637d8d0d6c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:12.334684+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207200256 unmapped: 43696128 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687988 data_alloc: 234881024 data_used: 13300500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:13.334797+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207200256 unmapped: 43696128 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:14.334906+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288b000/0x0/0x4ffc00000, data 0x4cde8c2/0x4f81000, compress 0x0/0x0/0x0, omap 0x6df16, meta 0x83820ea), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207200256 unmapped: 43696128 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8b48000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:15.335068+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 43679744 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:16.335214+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 43679744 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:17.335376+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 43679744 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687938 data_alloc: 234881024 data_used: 13300500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288b000/0x0/0x4ffc00000, data 0x4cde8c2/0x4f81000, compress 0x0/0x0/0x0, omap 0x6df16, meta 0x83820ea), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d8b48000 session 0x5637d8062000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:18.335527+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 43679744 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:19.335736+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207216640 unmapped: 43679744 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851a800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d87af000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.759981155s of 11.507591248s, submitted: 34
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d87af000 session 0x5637d63328c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d851a800 session 0x5637d5566fc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:20.335994+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288a000/0x0/0x4ffc00000, data 0x4cde924/0x4f82000, compress 0x0/0x0/0x0, omap 0x6df9c, meta 0x8382064), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:21.336211+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288a000/0x0/0x4ffc00000, data 0x4cde924/0x4f82000, compress 0x0/0x0/0x0, omap 0x6df9c, meta 0x8382064), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:22.336495+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288a000/0x0/0x4ffc00000, data 0x4cde924/0x4f82000, compress 0x0/0x0/0x0, omap 0x6df9c, meta 0x8382064), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3689412 data_alloc: 234881024 data_used: 13304694
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:23.336676+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5771000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:24.336812+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:25.336900+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:26.337139+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d5771000 session 0x5637d8619a40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:27.337269+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207224832 unmapped: 43671552 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3691427 data_alloc: 234881024 data_used: 13292406
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80ba400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d80ba400 session 0x5637d8618c40
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637dbfc0c00 session 0x5637d8321880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:28.337397+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288a000/0x0/0x4ffc00000, data 0x4cde924/0x4f82000, compress 0x0/0x0/0x0, omap 0x6df9c, meta 0x8382064), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 43663360 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f288a000/0x0/0x4ffc00000, data 0x4cde924/0x4f82000, compress 0x0/0x0/0x0, omap 0x6df9c, meta 0x8382064), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:29.337536+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8510000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207233024 unmapped: 43663360 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d8510000 session 0x5637d8320700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8519400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d8519400 session 0x5637d8062000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:30.337713+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207249408 unmapped: 43646976 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5771000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d5771000 session 0x5637d8619500
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80ba400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.051649094s of 11.139719009s, submitted: 56
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d80ba400 session 0x5637d55676c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:31.337836+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 43638784 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d8510000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 ms_handle_reset con 0x5637d8510000 session 0x5637d8321180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637dbfc0c00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:32.337998+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 heartbeat osd_stat(store_statfs(0x4f2914000/0x0/0x4ffc00000, data 0x4c4b840/0x4eeb000, compress 0x0/0x0/0x0, omap 0x6e40f, meta 0x8381bf1), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 43638784 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3679603 data_alloc: 234881024 data_used: 13288114
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 488 handle_osd_map epochs [488,489], i have 489, src has [1,489]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 489 ms_handle_reset con 0x5637dbfc0c00 session 0x5637d525d880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d851b000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 489 ms_handle_reset con 0x5637d851b000 session 0x5637d57ac8c0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:33.338110+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5771000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207257600 unmapped: 43638784 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 489 handle_osd_map epochs [490,490], i have 489, src has [1,490]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 490 ms_handle_reset con 0x5637d5771000 session 0x5637d57ad880
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:34.338225+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d975ec00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 490 ms_handle_reset con 0x5637d975ec00 session 0x5637d5862700
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d80b6400
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 43630592 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 490 handle_osd_map epochs [490,491], i have 490, src has [1,491]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 ms_handle_reset con 0x5637d80b6400 session 0x5637d89bd340
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:35.338334+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207265792 unmapped: 43630592 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d7cb9800
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 ms_handle_reset con 0x5637d7cb9800 session 0x5637d8320e00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d576fc00
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 ms_handle_reset con 0x5637d576fc00 session 0x5637d55ba540
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:36.338475+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 43622400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:37.338656+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 ms_handle_reset con 0x5637dbd20800 session 0x5637d55bb180
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 ms_handle_reset con 0x5637dbfc0800 session 0x5637d636fdc0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 207273984 unmapped: 43622400 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3685693 data_alloc: 234881024 data_used: 13189696
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: handle_auth_request added challenge on 0x5637d5771000
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 ms_handle_reset con 0x5637d5771000 session 0x5637d525ca80
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 heartbeat osd_stat(store_statfs(0x4f3468000/0x0/0x4ffc00000, data 0x4102b9e/0x43a4000, compress 0x0/0x0/0x0, omap 0x6e8f8, meta 0x8381708), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:38.338801+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 44605440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 491 handle_osd_map epochs [491,492], i have 491, src has [1,492]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:39.338950+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:40.339118+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:41.339292+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:42.339458+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3581934 data_alloc: 218103808 data_used: 8219200
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 492 heartbeat osd_stat(store_statfs(0x4f348e000/0x0/0x4ffc00000, data 0x40da606/0x437b000, compress 0x0/0x0/0x0, omap 0x6eb96, meta 0x838146a), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:43.339611+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.745372772s of 12.902443886s, submitted: 101
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:44.339788+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:45.339930+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:46.340108+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:47.340221+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:48.340345+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:49.340518+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:50.340694+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:51.340823+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:52.340951+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _renew_subs
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:53.341112+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:54.341294+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:55.341434+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:56.341588+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:57.341690+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:58.341819+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:59.341949+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:00.342096+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:01.342225+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:02.342353+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:03.342481+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:04.342648+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:05.342840+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:06.343017+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:07.343222+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:08.343394+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:09.343578+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:10.343761+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:11.343915+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:12.344092+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:13.344242+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:14.344372+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3001.6 total, 600.0 interval
                                           Cumulative writes: 31K writes, 120K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 31K writes, 11K syncs, 2.69 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 3926 writes, 12K keys, 3926 commit groups, 1.0 writes per commit group, ingest: 16.91 MB, 0.03 MB/s
                                           Interval WAL: 3926 writes, 1702 syncs, 2.31 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:15.344479+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:16.344571+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:17.344702+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:18.344839+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:19.344957+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:20.345138+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:21.345265+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:22.345377+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:23.345511+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:24.345645+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:25.345767+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:26.345895+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:27.346148+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:28.346267+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:29.346380+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:30.346522+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:31.346658+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:32.346791+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:33.347147+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:34.347262+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:35.347405+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:36.347534+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:37.347681+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:38.347813+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:39.348217+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:40.348374+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:41.348531+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:42.348695+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:43.348846+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:44.349012+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:45.349170+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:46.349319+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:47.349471+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:48.349608+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:49.349762+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:50.349927+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:51.350105+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:52.350215+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:53.350346+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:54.350458+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:55.350575+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:56.350688+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:57.350795+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:58.383270+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:59.383397+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:00.383646+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:01.383746+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:02.383910+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:03.384094+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:04.384240+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:05.384366+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:06.384472+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:07.384599+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:08.384747+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:09.384846+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:10.385004+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:11.385170+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:12.385297+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3584644 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:13.385414+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:14.385555+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:15.385714+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348c000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206282752 unmapped: 44613632 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 92.124504089s of 92.157058716s, submitted: 13
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:16.385893+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348e000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 44605440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:17.386022+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583924 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206290944 unmapped: 44605440 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:18.386191+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206323712 unmapped: 44572672 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:19.386333+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:20.386481+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:21.386604+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348e000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:22.386726+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348e000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583924 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:23.386856+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:24.386974+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348e000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:25.387109+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:26.387329+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:27.387513+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583924 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:28.387668+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206340096 unmapped: 44556288 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: osd.1 493 heartbeat osd_stat(store_statfs(0x4f348e000/0x0/0x4ffc00000, data 0x40dc085/0x437e000, compress 0x0/0x0/0x0, omap 0x6ec1c, meta 0x83813e4), peers [0,2] op hist [])
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:29.387804+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206479360 unmapped: 44417024 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'config diff' '{prefix=config diff}'
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'config show' '{prefix=config show}'
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:30.387962+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206266368 unmapped: 44630016 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:31.388139+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206528512 unmapped: 44367872 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: tick
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_tickets
Dec 13 04:36:03 compute-0 ceph-osd[86683]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:32.388277+0000)
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:03 compute-0 ceph-osd[86683]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:03 compute-0 ceph-osd[86683]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583924 data_alloc: 218103808 data_used: 8223261
Dec 13 04:36:03 compute-0 ceph-osd[86683]: prioritycache tune_memory target: 4294967296 mapped: 206528512 unmapped: 44367872 heap: 250896384 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:03 compute-0 ceph-osd[86683]: do_command 'log dump' '{prefix=log dump}'
Dec 13 04:36:03 compute-0 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec 13 04:36:03 compute-0 rsyslogd[1004]: imjournal from <np0005557965:ceph-osd>: begin to drop messages due to rate-limiting
Dec 13 04:36:03 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Dec 13 04:36:03 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1892744062' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 04:36:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1836628913' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Dec 13 04:36:04 compute-0 ceph-mon[75071]: from='client.19284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:04 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1892744062' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Dec 13 04:36:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Dec 13 04:36:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/10231837' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 04:36:04 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:36:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Dec 13 04:36:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842210269' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 04:36:04 compute-0 nova_compute[243704]: 2025-12-13 04:36:04.494 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Dec 13 04:36:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/375812606' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 04:36:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Dec 13 04:36:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1404471346' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 04:36:04 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Dec 13 04:36:04 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4113286460' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/10231837' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: pgmap v2054: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 29 op/s
Dec 13 04:36:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2842210269' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/375812606' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1404471346' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4113286460' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Dec 13 04:36:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1725655931' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Dec 13 04:36:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680724823' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 04:36:05 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Dec 13 04:36:05 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/147329113' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 04:36:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1725655931' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Dec 13 04:36:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1680724823' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Dec 13 04:36:06 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/147329113' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Dec 13 04:36:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Dec 13 04:36:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4026654356' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 04:36:06 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Dec 13 04:36:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3508399269' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 04:36:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Dec 13 04:36:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4162105748' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 04:36:06 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Dec 13 04:36:06 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2109343024' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 04:36:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4026654356' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Dec 13 04:36:07 compute-0 ceph-mon[75071]: pgmap v2055: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3508399269' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Dec 13 04:36:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/4162105748' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Dec 13 04:36:07 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2109343024' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Dec 13 04:36:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Dec 13 04:36:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3483980473' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 04:36:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Dec 13 04:36:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2753953249' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 04:36:07 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:07 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Dec 13 04:36:07 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/535808673' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:49.126719+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135659520 unmapped: 25886720 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:50.126878+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135659520 unmapped: 25886720 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1863780 data_alloc: 251658240 data_used: 30648058
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 heartbeat osd_stat(store_statfs(0x4f7f4d000/0x0/0x4ffc00000, data 0x3e0574e/0x3f3f000, compress 0x0/0x0/0x0, omap 0x3006f, meta 0x3d3ff91), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:51.127170+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 25853952 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 heartbeat osd_stat(store_statfs(0x4f7f4d000/0x0/0x4ffc00000, data 0x3e0574e/0x3f3f000, compress 0x0/0x0/0x0, omap 0x3006f, meta 0x3d3ff91), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:52.127374+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 25853952 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:53.127587+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 25853952 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 heartbeat osd_stat(store_statfs(0x4f7f4d000/0x0/0x4ffc00000, data 0x3e0574e/0x3f3f000, compress 0x0/0x0/0x0, omap 0x3006f, meta 0x3d3ff91), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 ms_handle_reset con 0x55a1f304f000 session 0x55a1f4320c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:54.127758+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.338025093s of 12.683547020s, submitted: 71
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 ms_handle_reset con 0x55a1f4fa5800 session 0x55a1f43ba8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f41d01c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135692288 unmapped: 25853952 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 ms_handle_reset con 0x55a1f4fa5800 session 0x55a1f3e308c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 ms_handle_reset con 0x55a1f304e800 session 0x55a1f43bac40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304ec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 ms_handle_reset con 0x55a1f304ec00 session 0x55a1f4221180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:55.127904+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135979008 unmapped: 25567232 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1872033 data_alloc: 251658240 data_used: 30648156
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 handle_osd_map epochs [207,207], i have 206, src has [1,207]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 206 handle_osd_map epochs [206,207], i have 207, src has [1,207]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:56.128111+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135995392 unmapped: 25550848 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7f05000/0x0/0x4ffc00000, data 0x3e472a2/0x3f85000, compress 0x0/0x0/0x0, omap 0x3053d, meta 0x3d3fac3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f304f000 session 0x55a1f2735c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:57.128296+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f304f000 session 0x55a1f24ff6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f304e800 session 0x55a1f4361340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 24977408 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:58.128401+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 24977408 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304ec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f304ec00 session 0x55a1f4321a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:11:59.128575+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7683000/0x0/0x4ffc00000, data 0x46ca304/0x4809000, compress 0x0/0x0/0x0, omap 0x305bd, meta 0x3d3fa43), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 24977408 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f4fa5800 session 0x55a1f267e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:00.128726+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 24977408 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f267f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f20adc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1938783 data_alloc: 251658240 data_used: 30652252
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa4000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:01.128972+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4619800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f4619800 session 0x55a1f4221dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 24961024 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435800 session 0x55a1f3e301c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:02.129091+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 137510912 unmapped: 24035328 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7682000/0x0/0x4ffc00000, data 0x46ca314/0x480a000, compress 0x0/0x0/0x0, omap 0x30982, meta 0x3d3f67e), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:03.129242+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435000 session 0x55a1f43936c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32e1000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 145637376 unmapped: 15908864 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f32e1000 session 0x55a1f20ac000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:04.129363+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4619800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 15491072 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:05.129505+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 15491072 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1993826 data_alloc: 251658240 data_used: 39089500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:06.129637+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 15491072 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:07.129761+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7657000/0x0/0x4ffc00000, data 0x46f4337/0x4835000, compress 0x0/0x0/0x0, omap 0x30a04, meta 0x3d3f5fc), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 15491072 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:08.129948+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 15491072 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.645045280s of 14.847893715s, submitted: 119
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:09.130099+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 15491072 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:10.130245+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 146055168 unmapped: 15491072 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1993826 data_alloc: 251658240 data_used: 39089500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7657000/0x0/0x4ffc00000, data 0x46f4337/0x4835000, compress 0x0/0x0/0x0, omap 0x309a5, meta 0x3d3f65b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:11.130364+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155049984 unmapped: 6496256 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:12.130468+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435000 session 0x55a1f43216c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 154402816 unmapped: 7143424 heap: 161546240 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7657000/0x0/0x4ffc00000, data 0x46f4337/0x4835000, compress 0x0/0x0/0x0, omap 0x30a27, meta 0x3d3f5d9), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:13.130612+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158277632 unmapped: 6471680 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:14.130823+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159252480 unmapped: 5496832 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:15.130962+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159285248 unmapped: 5464064 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7305000/0x0/0x4ffc00000, data 0x4a46337/0x4b87000, compress 0x0/0x0/0x0, omap 0x30a27, meta 0x3d3f5d9), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2077565 data_alloc: 268435456 data_used: 48050524
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435800 session 0x55a1f4012e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:16.131092+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159318016 unmapped: 5431296 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f41e1a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:17.131304+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cd400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cd400 session 0x55a1f19c6540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f3bc8540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 5627904 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:18.131450+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159121408 unmapped: 5627904 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:19.131563+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156262400 unmapped: 8486912 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7297000/0x0/0x4ffc00000, data 0x4ab4337/0x4bf5000, compress 0x0/0x0/0x0, omap 0x30aa9, meta 0x3d3f557), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:20.131679+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156303360 unmapped: 8445952 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2078653 data_alloc: 268435456 data_used: 48050524
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f7297000/0x0/0x4ffc00000, data 0x4ab4337/0x4bf5000, compress 0x0/0x0/0x0, omap 0x30aa9, meta 0x3d3f557), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:21.131814+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156368896 unmapped: 8380416 heap: 164749312 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.903845787s of 12.949433327s, submitted: 22
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:22.132140+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 6946816 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f41e16c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:23.132284+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158859264 unmapped: 6946816 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:24.132416+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160194560 unmapped: 5611520 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f3bc8e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:25.132590+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f6647000/0x0/0x4ffc00000, data 0x56fb337/0x583c000, compress 0x0/0x0/0x0, omap 0x30bad, meta 0x3d3f453), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160243712 unmapped: 5562368 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cd400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2164737 data_alloc: 268435456 data_used: 48272748
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:26.132742+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160260096 unmapped: 5545984 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:27.132881+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435000 session 0x55a1f196da40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 161226752 unmapped: 4579328 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:28.133015+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cd400 session 0x55a1f19c7500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 161243136 unmapped: 4562944 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435800 session 0x55a1f3e31180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435800 session 0x55a1f39d3180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:29.133147+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 161415168 unmapped: 4390912 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:30.133297+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f664f000/0x0/0x4ffc00000, data 0x56fb337/0x583c000, compress 0x0/0x0/0x0, omap 0x30a5e, meta 0x3d3f5a2), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f196d880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 161431552 unmapped: 4374528 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2170096 data_alloc: 268435456 data_used: 50099564
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f4fa4000 session 0x55a1f42216c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f4fa5400 session 0x55a1f41cc1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:31.133400+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f41e08c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f69aa000/0x0/0x4ffc00000, data 0x535c2c5/0x549b000, compress 0x0/0x0/0x0, omap 0x30f73, meta 0x3d3f08d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160145408 unmapped: 5660672 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.843641281s of 10.154753685s, submitted: 198
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:32.133524+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f2735500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 heartbeat osd_stat(store_statfs(0x4f69aa000/0x0/0x4ffc00000, data 0x535c2c5/0x549b000, compress 0x0/0x0/0x0, omap 0x30f73, meta 0x3d3f08d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 5636096 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:33.133655+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 5636096 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f186e8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa4000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:34.134874+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160169984 unmapped: 5636096 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f4fa5400 session 0x55a1f186f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f5435800 session 0x55a1f2735180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cd400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:35.135057+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cd400 session 0x55a1f4360540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cd400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 ms_handle_reset con 0x55a1f45cd400 session 0x55a1f267e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160309248 unmapped: 5496832 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2140517 data_alloc: 268435456 data_used: 49723226
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:36.135185+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 handle_osd_map epochs [208,208], i have 207, src has [1,208]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 207 handle_osd_map epochs [207,208], i have 208, src has [1,208]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 208 heartbeat osd_stat(store_statfs(0x4f69ea000/0x0/0x4ffc00000, data 0x5362de1/0x54a0000, compress 0x0/0x0/0x0, omap 0x3165b, meta 0x3d3e9a5), peers [1,2] op hist [1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 208 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f4221a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153231360 unmapped: 12574720 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 208 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f3e30e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 208 ms_handle_reset con 0x55a1f4fa5400 session 0x55a1f271f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:37.135321+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 208 handle_osd_map epochs [208,209], i have 208, src has [1,209]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f5435800 session 0x55a1f4321180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153280512 unmapped: 12525568 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f4fa4000 session 0x55a1f4361880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f267f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:38.135467+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f43a6700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153313280 unmapped: 12492800 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f4619800 session 0x55a1f267fdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f304e000 session 0x55a1f3e30000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f267e8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f304e400 session 0x55a1f43ba700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:39.135599+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 heartbeat osd_stat(store_statfs(0x4f7dd0000/0x0/0x4ffc00000, data 0x3f7e98b/0x40bc000, compress 0x0/0x0/0x0, omap 0x32430, meta 0x3d3dbd0), peers [1,2] op hist [1,4])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f3b02e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f26fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 42926080 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4619800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 ms_handle_reset con 0x55a1f4619800 session 0x55a1f43bac40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:40.135720+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 42926080 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1549071 data_alloc: 218103808 data_used: 4954164
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 heartbeat osd_stat(store_statfs(0x4fa81c000/0x0/0x4ffc00000, data 0x1198955/0x12d5000, compress 0x0/0x0/0x0, omap 0x3288b, meta 0x3d3d775), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:41.146249+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122880000 unmapped: 42926080 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:42.146399+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 209 handle_osd_map epochs [210,210], i have 209, src has [1,210]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.521913528s of 10.031078339s, submitted: 241
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f304e400 session 0x55a1f4220e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f41cca80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122888192 unmapped: 42917888 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f2604380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:43.146630+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f4392540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabb4000/0x0/0x4ffc00000, data 0x119a39a/0x12d6000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa4000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f4fa4000 session 0x55a1f3e31180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:44.146874+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:45.147055+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546650 data_alloc: 218103808 data_used: 4705580
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:46.147326+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:47.147474+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:48.147620+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:49.147779+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:50.147911+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546650 data_alloc: 218103808 data_used: 4705580
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:51.148087+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:52.148905+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:53.149081+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:54.149341+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:55.149570+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546650 data_alloc: 218103808 data_used: 4705580
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:56.149720+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:57.150195+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:58.150399+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:12:59.150541+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:00.150634+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546650 data_alloc: 218103808 data_used: 4705580
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:01.150856+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:02.151095+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:03.151310+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f304e400 session 0x55a1f271ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:04.151534+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf5000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:05.153187+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1546650 data_alloc: 218103808 data_used: 4705580
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:06.153332+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:07.153667+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.790245056s of 25.026144028s, submitted: 56
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f3e30fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:08.154155+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:09.154398+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:10.154604+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122937344 unmapped: 42868736 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545866 data_alloc: 218103808 data_used: 4709577
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:11.154837+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f4392000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:12.155006+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:13.155264+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:14.155447+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:15.155681+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545866 data_alloc: 218103808 data_used: 4709577
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:16.155821+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:17.156034+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:18.156270+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:19.156491+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:20.156675+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1545866 data_alloc: 218103808 data_used: 4709577
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:21.156904+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f41d1a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:22.157100+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:23.157302+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fabf8000/0x0/0x4ffc00000, data 0x115a327/0x1294000, compress 0x0/0x0/0x0, omap 0x32ba9, meta 0x3d3d457), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cd400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.114767075s of 16.255472183s, submitted: 10
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45cd400 session 0x55a1f2735dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f304e400 session 0x55a1f27356c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:24.157436+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:25.157593+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122945536 unmapped: 42860544 heap: 165806080 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1549278 data_alloc: 218103808 data_used: 4709577
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:26.158121+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f2735880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f39d3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f4fa5400 session 0x55a1f4321500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122863616 unmapped: 47144960 heap: 170008576 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f5435000 session 0x55a1f43608c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:27.158541+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f5435000 session 0x55a1f4321c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f4360fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122814464 unmapped: 47194112 heap: 170008576 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:28.158733+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 heartbeat osd_stat(store_statfs(0x4fa3a0000/0x0/0x4ffc00000, data 0x19af44e/0x1aec000, compress 0x0/0x0/0x0, omap 0x32c75, meta 0x3d3d38b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122814464 unmapped: 47194112 heap: 170008576 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:29.158955+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f304e400 session 0x55a1efddfc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f43bb180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 210 handle_osd_map epochs [211,211], i have 210, src has [1,211]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cd800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 211 ms_handle_reset con 0x55a1f4fa5400 session 0x55a1f2734e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122372096 unmapped: 57614336 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:30.159160+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 211 handle_osd_map epochs [212,212], i have 211, src has [1,212]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 ms_handle_reset con 0x55a1f45cd800 session 0x55a1f186f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f4392a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122380288 unmapped: 57606144 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1692123 data_alloc: 218103808 data_used: 4713673
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:31.159371+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122380288 unmapped: 57606144 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:32.159619+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 ms_handle_reset con 0x55a1f304e400 session 0x55a1f4360380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122380288 unmapped: 57606144 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:33.159801+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122380288 unmapped: 57606144 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:34.159982+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 heartbeat osd_stat(store_statfs(0x4f97b2000/0x0/0x4ffc00000, data 0x2596bf8/0x26d8000, compress 0x0/0x0/0x0, omap 0x3365a, meta 0x3d3c9a6), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 122380288 unmapped: 57606144 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:35.160174+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f43ba700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f20ad180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.766864777s of 12.102074623s, submitted: 129
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f271e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 120922112 unmapped: 59064320 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1768176 data_alloc: 218103808 data_used: 4713689
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:36.160336+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 212 handle_osd_map epochs [213,213], i have 212, src has [1,213]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f3b02700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842000 session 0x55a1f24fe380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 120946688 unmapped: 59039744 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f5435000 session 0x55a1f43ba000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:37.160498+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 120954880 unmapped: 59031552 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:38.160789+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 heartbeat osd_stat(store_statfs(0x4f8afc000/0x0/0x4ffc00000, data 0x324b794/0x338e000, compress 0x0/0x0/0x0, omap 0x33794, meta 0x3d3c86c), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842400 session 0x55a1f3e316c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 120954880 unmapped: 59031552 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:39.160952+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842400 session 0x55a1f3e31dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 120954880 unmapped: 59031552 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:40.161132+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 heartbeat osd_stat(store_statfs(0x4f8afc000/0x0/0x4ffc00000, data 0x324b794/0x338e000, compress 0x0/0x0/0x0, omap 0x33794, meta 0x3d3c86c), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842000 session 0x55a1f4361c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f196ce00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121274368 unmapped: 58712064 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1775832 data_alloc: 218103808 data_used: 4713689
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:41.161279+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 58695680 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:42.161489+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 58695680 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:43.161728+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 58695680 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:44.161968+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 heartbeat osd_stat(store_statfs(0x4f8ad9000/0x0/0x4ffc00000, data 0x326f7b7/0x33b3000, compress 0x0/0x0/0x0, omap 0x33b3c, meta 0x3d3c4c4), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 58695680 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:45.162203+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 58695680 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1776092 data_alloc: 218103808 data_used: 4715737
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:46.162381+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 58695680 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.481413841s of 11.616690636s, submitted: 51
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842800 session 0x55a1f4320700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:47.162563+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121290752 unmapped: 58695680 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f43bb340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f5435000 session 0x55a1f41d1880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:48.162721+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 heartbeat osd_stat(store_statfs(0x4f8ad8000/0x0/0x4ffc00000, data 0x326f819/0x33b4000, compress 0x0/0x0/0x0, omap 0x33c1d, meta 0x3d3c3e3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f20ac540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 58687488 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:49.162985+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842000 session 0x55a1f3bc81c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121298944 unmapped: 58687488 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:50.163145+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842400 session 0x55a1f3e30700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842800 session 0x55a1f39d2000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842800 session 0x55a1f39d3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 121462784 unmapped: 58523648 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842000 session 0x55a1f27348c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:51.164984+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1781691 data_alloc: 218103808 data_used: 4717750
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4220540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 123412480 unmapped: 56573952 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f4221880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:52.165118+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 heartbeat osd_stat(store_statfs(0x4f8376000/0x0/0x4ffc00000, data 0x39d17a3/0x3b15000, compress 0x0/0x0/0x0, omap 0x33ede, meta 0x3d3c122), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128933888 unmapped: 51052544 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:53.165386+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128958464 unmapped: 51027968 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:54.165536+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842c00 session 0x55a1f19c7dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128958464 unmapped: 51027968 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:55.165658+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128958464 unmapped: 51027968 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842000 session 0x55a1f196c540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:56.165766+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1911155 data_alloc: 234881024 data_used: 16819398
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842400 session 0x55a1f186ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128966656 unmapped: 51019776 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:57.165888+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128696320 unmapped: 51290112 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:58.165989+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 heartbeat osd_stat(store_statfs(0x4f8376000/0x0/0x4ffc00000, data 0x39d17c6/0x3b16000, compress 0x0/0x0/0x0, omap 0x34193, meta 0x3d3be6d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.077171326s of 11.268367767s, submitted: 99
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f43ba1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1843000 session 0x55a1f20ac700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1843400 session 0x55a1f41cc700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1843400 session 0x55a1f43a7180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133939200 unmapped: 46047232 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:13:59.166172+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842800 session 0x55a1f45f3180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133947392 unmapped: 46039040 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:00.166294+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4309a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 213 handle_osd_map epochs [214,214], i have 213, src has [1,214]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842000 session 0x55a1f43a6e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:01.166431+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133300224 unmapped: 46686208 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1920990 data_alloc: 234881024 data_used: 16823395
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:02.166557+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 46678016 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:03.166701+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133308416 unmapped: 46678016 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:04.166803+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136765440 unmapped: 43220992 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 heartbeat osd_stat(store_statfs(0x4f81ff000/0x0/0x4ffc00000, data 0x3b43403/0x3c8a000, compress 0x0/0x0/0x0, omap 0x347fd, meta 0x3d3b803), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:05.166941+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136937472 unmapped: 43048960 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:06.167155+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 137043968 unmapped: 42942464 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2030460 data_alloc: 234881024 data_used: 18011200
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:07.167306+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 137043968 unmapped: 42942464 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.1 total, 600.0 interval
                                           Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
                                           Cumulative WAL: 12K writes, 4008 syncs, 3.17 writes per sync, written: 0.03 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6775 writes, 19K keys, 6775 commit groups, 1.0 writes per commit group, ingest: 14.65 MB, 0.02 MB/s
                                           Interval WAL: 6775 writes, 2979 syncs, 2.27 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:08.167425+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 137043968 unmapped: 42942464 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 heartbeat osd_stat(store_statfs(0x4f7b77000/0x0/0x4ffc00000, data 0x46b5403/0x4315000, compress 0x0/0x0/0x0, omap 0x34cfe, meta 0x3d3b302), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.741771698s of 10.090176582s, submitted: 167
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f45cc000 session 0x55a1f43ba380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1843000 session 0x55a1f196d6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842000 session 0x55a1f4393340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4308a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:09.167558+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136830976 unmapped: 43155456 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:10.167697+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136830976 unmapped: 43155456 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 heartbeat osd_stat(store_statfs(0x4f77a5000/0x0/0x4ffc00000, data 0x4a86465/0x46e7000, compress 0x0/0x0/0x0, omap 0x34d80, meta 0x3d3b280), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:11.167873+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136830976 unmapped: 43155456 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2056493 data_alloc: 234881024 data_used: 18015394
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:12.168023+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136830976 unmapped: 43155456 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f43a6fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f5435000 session 0x55a1f41d0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842000 session 0x55a1f4308e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 heartbeat osd_stat(store_statfs(0x4f77a5000/0x0/0x4ffc00000, data 0x4a86465/0x46e7000, compress 0x0/0x0/0x0, omap 0x34d80, meta 0x3d3b280), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:13.168196+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 43950080 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4220a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4230c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f4308fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:14.168361+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 43950080 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842800 session 0x55a1f4321a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842000 session 0x55a1f4266a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4267a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f1843000 session 0x55a1f41d1340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f45f3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:15.168526+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136044544 unmapped: 43941888 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32de000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 heartbeat osd_stat(store_statfs(0x4f77c7000/0x0/0x4ffc00000, data 0x4a62498/0x46c5000, compress 0x0/0x0/0x0, omap 0x34d8f, meta 0x3d3b271), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 214 handle_osd_map epochs [215,215], i have 214, src has [1,215]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 215 ms_handle_reset con 0x55a1f32de000 session 0x55a1f43a6540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:16.168671+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 215 ms_handle_reset con 0x55a1f1842000 session 0x55a1f26fefc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 126492672 unmapped: 53493760 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1829213 data_alloc: 218103808 data_used: 4725906
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 215 handle_osd_map epochs [216,216], i have 215, src has [1,216]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4361880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:17.168811+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1886000 session 0x55a1f43a7c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f40136c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 126730240 unmapped: 53256192 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4267340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1843400 session 0x55a1f4230e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 heartbeat osd_stat(store_statfs(0x4f9eb1000/0x0/0x4ffc00000, data 0x1e07ceb/0x1f50000, compress 0x0/0x0/0x0, omap 0x35b40, meta 0x3d3a4c0), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1843000 session 0x55a1f3b02000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1843800 session 0x55a1f45f3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1843c00 session 0x55a1f43bb6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:18.168959+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 129073152 unmapped: 50913280 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.654362679s of 10.066776276s, submitted: 252
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 ms_handle_reset con 0x55a1f1842000 session 0x55a1f422efc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:19.169153+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 50929664 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:20.169289+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 129056768 unmapped: 50929664 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 heartbeat osd_stat(store_statfs(0x4fa311000/0x0/0x4ffc00000, data 0x1a36c46/0x1b7b000, compress 0x0/0x0/0x0, omap 0x35b55, meta 0x3d3a4ab), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 216 handle_osd_map epochs [217,217], i have 216, src has [1,217]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:21.169407+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 217 ms_handle_reset con 0x55a1f1842000 session 0x55a1f43a6380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 129064960 unmapped: 50921472 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1742128 data_alloc: 234881024 data_used: 13788375
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 217 ms_handle_reset con 0x55a1f1843000 session 0x55a1f42301c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:22.169529+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128999424 unmapped: 50987008 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:23.169715+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128999424 unmapped: 50987008 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 217 ms_handle_reset con 0x55a1f1843400 session 0x55a1f42201c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 217 handle_osd_map epochs [218,218], i have 217, src has [1,218]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 ms_handle_reset con 0x55a1f1843800 session 0x55a1f4220700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:24.169862+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128999424 unmapped: 50987008 heap: 179986432 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 ms_handle_reset con 0x55a1f1843c00 session 0x55a1f422fa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 heartbeat osd_stat(store_statfs(0x4fa30a000/0x0/0x4ffc00000, data 0x1a3a28b/0x1b80000, compress 0x0/0x0/0x0, omap 0x35fc3, meta 0x3d3a03d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 ms_handle_reset con 0x55a1f1842000 session 0x55a1f39d2fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:25.170064+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 64561152 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 heartbeat osd_stat(store_statfs(0x4fa3dc000/0x0/0x4ffc00000, data 0x196825d/0x1aaf000, compress 0x0/0x0/0x0, omap 0x35fc3, meta 0x3d3a03d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 ms_handle_reset con 0x55a1f32df400 session 0x55a1f26ff500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:26.170188+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 123822080 unmapped: 64561152 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1812916 data_alloc: 218103808 data_used: 4742049
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:27.170308+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 123846656 unmapped: 64536576 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 ms_handle_reset con 0x55a1f32df400 session 0x55a1f4231500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:28.171163+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 123912192 unmapped: 64471040 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.557429314s of 10.005439758s, submitted: 131
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:29.171334+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 124174336 unmapped: 64208896 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 ms_handle_reset con 0x55a1f1886000 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:30.171521+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 124231680 unmapped: 64151552 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 218 handle_osd_map epochs [219,219], i have 218, src has [1,219]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:31.171692+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133865472 unmapped: 54517760 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2420832 data_alloc: 218103808 data_used: 4746660
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 heartbeat osd_stat(store_statfs(0x4f23d8000/0x0/0x4ffc00000, data 0x9969cea/0x9ab2000, compress 0x0/0x0/0x0, omap 0x3655d, meta 0x3d39aa3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:32.171853+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f186f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 125566976 unmapped: 62816256 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1887c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 ms_handle_reset con 0x55a1f1887c00 session 0x55a1f4309500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 ms_handle_reset con 0x55a1f1842000 session 0x55a1f4012a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 ms_handle_reset con 0x55a1f1886000 session 0x55a1f4013c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:33.172071+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 143278080 unmapped: 45105152 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:34.172216+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 126590976 unmapped: 61792256 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 heartbeat osd_stat(store_statfs(0x4eeb96000/0x0/0x4ffc00000, data 0xd1adcfa/0xd2f6000, compress 0x0/0x0/0x0, omap 0x3686b, meta 0x3d39795), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 ms_handle_reset con 0x55a1f32df400 session 0x55a1f196c1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:35.172454+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135045120 unmapped: 53338112 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:36.172599+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 127705088 unmapped: 60678144 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2707738 data_alloc: 218103808 data_used: 4746660
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:37.172757+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128868352 unmapped: 59514880 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 ms_handle_reset con 0x55a1f1886400 session 0x55a1f45f2540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 heartbeat osd_stat(store_statfs(0x4ed395000/0x0/0x4ffc00000, data 0xe9add0a/0xeaf7000, compress 0x0/0x0/0x0, omap 0x3686b, meta 0x3d39795), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:38.172937+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 129105920 unmapped: 59277312 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.518174171s of 10.257237434s, submitted: 149
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:39.173136+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 handle_osd_map epochs [220,220], i have 219, src has [1,220]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 219 handle_osd_map epochs [219,220], i have 220, src has [1,220]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f65d6000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45cd800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 129122304 unmapped: 59260928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 220 ms_handle_reset con 0x55a1f45cd800 session 0x55a1f271f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:40.173315+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136658944 unmapped: 51724288 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 220 handle_osd_map epochs [220,221], i have 220, src has [1,221]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 221 ms_handle_reset con 0x55a1f65d6000 session 0x55a1f4360540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 221 ms_handle_reset con 0x55a1f1842000 session 0x55a1f43a6000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 221 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f4221a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:41.173560+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 60039168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011782 data_alloc: 218103808 data_used: 4746676
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:42.173769+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128344064 unmapped: 60039168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 221 handle_osd_map epochs [221,222], i have 221, src has [1,222]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 222 ms_handle_reset con 0x55a1f1886000 session 0x55a1f45f2380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:43.174020+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 222 ms_handle_reset con 0x55a1f1886400 session 0x55a1f43a6c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 222 heartbeat osd_stat(store_statfs(0x4eab89000/0x0/0x4ffc00000, data 0x111b3094/0x11301000, compress 0x0/0x0/0x0, omap 0x370eb, meta 0x3d38f15), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128376832 unmapped: 60006400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:44.174255+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 222 ms_handle_reset con 0x55a1f1842000 session 0x55a1f41d1180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 136929280 unmapped: 51453952 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:45.174412+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128540672 unmapped: 59842560 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 222 handle_osd_map epochs [223,223], i have 222, src has [1,223]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 222 handle_osd_map epochs [222,223], i have 223, src has [1,223]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:46.174579+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 223 ms_handle_reset con 0x55a1f1843000 session 0x55a1f422f880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 59826176 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3143600 data_alloc: 218103808 data_used: 4747261
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 223 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f196c8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:47.174765+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128557056 unmapped: 59826176 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 223 handle_osd_map epochs [224,224], i have 223, src has [1,224]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f65d6000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 224 ms_handle_reset con 0x55a1f65d6000 session 0x55a1f42676c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:48.174909+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128581632 unmapped: 59801600 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 224 ms_handle_reset con 0x55a1f304e400 session 0x55a1f26fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 224 handle_osd_map epochs [224,225], i have 224, src has [1,225]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.978695869s of 10.116275787s, submitted: 97
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 225 ms_handle_reset con 0x55a1f32df400 session 0x55a1f4308000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 225 ms_handle_reset con 0x55a1f1886000 session 0x55a1f41d0fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:49.175051+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 225 heartbeat osd_stat(store_statfs(0x4e937f000/0x0/0x4ffc00000, data 0x129b6876/0x12b09000, compress 0x0/0x0/0x0, omap 0x37a5f, meta 0x3d385a1), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 225 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f4230a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128483328 unmapped: 59899904 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 225 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f4308700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f65d6000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 225 ms_handle_reset con 0x55a1f65d6000 session 0x55a1f3e31880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:50.175226+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 128532480 unmapped: 59850752 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 225 handle_osd_map epochs [226,226], i have 225, src has [1,226]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 226 ms_handle_reset con 0x55a1f1886000 session 0x55a1f41e16c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:51.175437+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 57933824 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1828832 data_alloc: 234881024 data_used: 11996783
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 226 heartbeat osd_stat(store_statfs(0x4f8351000/0x0/0x4ffc00000, data 0x19e4113/0x1b36000, compress 0x0/0x0/0x0, omap 0x38f88, meta 0x3d37078), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:52.175712+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130449408 unmapped: 57933824 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:53.175951+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 57917440 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 226 ms_handle_reset con 0x55a1f32df400 session 0x55a1f2734e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:54.176221+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130465792 unmapped: 57917440 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 226 handle_osd_map epochs [226,227], i have 226, src has [1,227]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 227 ms_handle_reset con 0x55a1f413e000 session 0x55a1f2734540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:55.176464+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 57892864 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 227 heartbeat osd_stat(store_statfs(0x4fa34f000/0x0/0x4ffc00000, data 0x19e5d59/0x1b3b000, compress 0x0/0x0/0x0, omap 0x3be34, meta 0x3d341cc), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 227 handle_osd_map epochs [228,228], i have 227, src has [1,228]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 227 handle_osd_map epochs [227,228], i have 228, src has [1,228]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 228 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f2735880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 228 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f26fe700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:56.176715+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 57892864 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1839437 data_alloc: 234881024 data_used: 12000829
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:57.176996+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130490368 unmapped: 57892864 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 228 handle_osd_map epochs [229,229], i have 228, src has [1,229]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 229 ms_handle_reset con 0x55a1f1886000 session 0x55a1f186f880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:58.177186+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130514944 unmapped: 57868288 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 229 handle_osd_map epochs [230,230], i have 229, src has [1,230]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 230 ms_handle_reset con 0x55a1f32df400 session 0x55a1f4309340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:14:59.177321+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130531328 unmapped: 57851904 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:00.177495+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 230 heartbeat osd_stat(store_statfs(0x4fa344000/0x0/0x4ffc00000, data 0x19eaf80/0x1b44000, compress 0x0/0x0/0x0, omap 0x3c61d, meta 0x3d339e3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 230 handle_osd_map epochs [231,231], i have 230, src has [1,231]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 230 handle_osd_map epochs [231,231], i have 231, src has [1,231]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.040998459s of 11.336905479s, submitted: 166
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130539520 unmapped: 57843712 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:01.177632+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 231 ms_handle_reset con 0x55a1f413e000 session 0x55a1f267e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 130596864 unmapped: 57786368 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1850383 data_alloc: 234881024 data_used: 12065487
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:02.177785+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 231 handle_osd_map epochs [232,232], i have 231, src has [1,232]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4fa5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 232 ms_handle_reset con 0x55a1f413e400 session 0x55a1f4266e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 132849664 unmapped: 55533568 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:03.177943+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 232 handle_osd_map epochs [233,233], i have 232, src has [1,233]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 233 ms_handle_reset con 0x55a1f4fa5c00 session 0x55a1f41cc380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 134447104 unmapped: 53936128 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 233 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f196ce00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:04.178118+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 134447104 unmapped: 53936128 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 233 heartbeat osd_stat(store_statfs(0x4f9dd8000/0x0/0x4ffc00000, data 0x1f48318/0x20a7000, compress 0x0/0x0/0x0, omap 0x3ce12, meta 0x3d331ee), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:05.178276+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 233 handle_osd_map epochs [234,234], i have 233, src has [1,234]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 134496256 unmapped: 53886976 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 234 ms_handle_reset con 0x55a1f1886000 session 0x55a1f267e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:06.178456+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 234 handle_osd_map epochs [235,235], i have 234, src has [1,235]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 234 handle_osd_map epochs [234,235], i have 235, src has [1,235]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 54509568 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1912417 data_alloc: 234881024 data_used: 12252879
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:07.178619+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 133873664 unmapped: 54509568 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:08.178762+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 235 ms_handle_reset con 0x55a1f187a400 session 0x55a1f20addc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 235 ms_handle_reset con 0x55a1f32df400 session 0x55a1f20ac000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 134152192 unmapped: 54231040 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 235 handle_osd_map epochs [236,236], i have 235, src has [1,236]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 236 heartbeat osd_stat(store_statfs(0x4f9ddb000/0x0/0x4ffc00000, data 0x1f4ba50/0x20ad000, compress 0x0/0x0/0x0, omap 0x3d2a8, meta 0x3d32d58), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:09.179022+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 236 handle_osd_map epochs [237,237], i have 236, src has [1,237]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 237 ms_handle_reset con 0x55a1f413e400 session 0x55a1f45f2e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 53182464 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 237 heartbeat osd_stat(store_statfs(0x4f9dbb000/0x0/0x4ffc00000, data 0x1f6d5de/0x20cf000, compress 0x0/0x0/0x0, omap 0x3d613, meta 0x3d329ed), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:10.179196+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135331840 unmapped: 53051392 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 237 handle_osd_map epochs [238,238], i have 237, src has [1,238]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.965320587s of 10.272033691s, submitted: 109
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 238 ms_handle_reset con 0x55a1f413e800 session 0x55a1f4393a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 238 ms_handle_reset con 0x55a1f1886000 session 0x55a1f4230fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 238 ms_handle_reset con 0x55a1f32df400 session 0x55a1f271f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 238 ms_handle_reset con 0x55a1f413e400 session 0x55a1f3bc9880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 238 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f2734c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413ec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:11.179330+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 238 handle_osd_map epochs [239,239], i have 238, src has [1,239]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 239 ms_handle_reset con 0x55a1f413ec00 session 0x55a1f45f21c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 239 ms_handle_reset con 0x55a1f1886000 session 0x55a1f4320e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 239 ms_handle_reset con 0x55a1f32df400 session 0x55a1f267e8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 239 ms_handle_reset con 0x55a1f413e400 session 0x55a1f4012540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 239 ms_handle_reset con 0x55a1f413f000 session 0x55a1f4320380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135544832 unmapped: 52838400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1987393 data_alloc: 234881024 data_used: 12252879
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:12.179634+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 239 handle_osd_map epochs [239,240], i have 239, src has [1,240]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135176192 unmapped: 53207040 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 240 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 240 heartbeat osd_stat(store_statfs(0x4f94d9000/0x0/0x4ffc00000, data 0x2849859/0x29af000, compress 0x0/0x0/0x0, omap 0x3df5f, meta 0x3d320a1), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:13.179822+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135200768 unmapped: 53182464 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:14.180571+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 240 handle_osd_map epochs [241,241], i have 240, src has [1,241]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 53010432 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 241 ms_handle_reset con 0x55a1f32df400 session 0x55a1f39d3180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 241 heartbeat osd_stat(store_statfs(0x4f94d2000/0x0/0x4ffc00000, data 0x28553d7/0x29ba000, compress 0x0/0x0/0x0, omap 0x3e665, meta 0x3d3199b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:15.181121+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 241 ms_handle_reset con 0x55a1f413e400 session 0x55a1f4321500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135372800 unmapped: 53010432 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 241 handle_osd_map epochs [242,242], i have 241, src has [1,242]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 241 handle_osd_map epochs [241,242], i have 242, src has [1,242]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:16.181301+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 242 handle_osd_map epochs [243,243], i have 242, src has [1,243]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 243 ms_handle_reset con 0x55a1f413f000 session 0x55a1f3e31500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135413760 unmapped: 52969472 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1997012 data_alloc: 234881024 data_used: 12255587
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 243 ms_handle_reset con 0x55a1f413f400 session 0x55a1f4013340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 243 heartbeat osd_stat(store_statfs(0x4f94c6000/0x0/0x4ffc00000, data 0x285a7b5/0x29c2000, compress 0x0/0x0/0x0, omap 0x3ee80, meta 0x3d31180), peers [1,2] op hist [0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:17.181416+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 243 handle_osd_map epochs [244,244], i have 243, src has [1,244]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 244 ms_handle_reset con 0x55a1f1886000 session 0x55a1f39d2540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1886000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 244 ms_handle_reset con 0x55a1f32df400 session 0x55a1f41e0fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 135512064 unmapped: 52871168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:18.181544+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 139337728 unmapped: 49045504 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 244 heartbeat osd_stat(store_statfs(0x4f94c0000/0x0/0x4ffc00000, data 0x285c3b5/0x29c6000, compress 0x0/0x0/0x0, omap 0x3f039, meta 0x3d30fc7), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:19.181859+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 244 handle_osd_map epochs [244,245], i have 244, src has [1,245]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 244 handle_osd_map epochs [245,245], i have 245, src has [1,245]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 140886016 unmapped: 47497216 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 245 ms_handle_reset con 0x55a1f413e400 session 0x55a1f41e1180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 245 ms_handle_reset con 0x55a1f413f000 session 0x55a1f422e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:20.182094+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 140910592 unmapped: 47472640 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.499300003s of 10.378446579s, submitted: 354
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 245 handle_osd_map epochs [245,246], i have 245, src has [1,246]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:21.182275+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 246 ms_handle_reset con 0x55a1f413f400 session 0x55a1f41cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413fc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 246 ms_handle_reset con 0x55a1f413fc00 session 0x55a1f422ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 47333376 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2050816 data_alloc: 234881024 data_used: 20294881
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:22.182443+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 47333376 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:23.182619+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 246 heartbeat osd_stat(store_statfs(0x4f94ad000/0x0/0x4ffc00000, data 0x2862ae9/0x29cb000, compress 0x0/0x0/0x0, omap 0x3fd61, meta 0x3d4029f), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 47333376 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:24.182797+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 47333376 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:25.182999+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141049856 unmapped: 47333376 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 246 handle_osd_map epochs [247,247], i have 246, src has [1,247]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:26.183113+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141058048 unmapped: 47325184 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2052838 data_alloc: 234881024 data_used: 20294881
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413e400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 247 ms_handle_reset con 0x55a1f413e400 session 0x55a1f41e1a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:27.183244+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 247 handle_osd_map epochs [248,248], i have 247, src has [1,248]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 248 ms_handle_reset con 0x55a1f32df400 session 0x55a1f43bac40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141090816 unmapped: 47292416 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:28.183416+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 248 heartbeat osd_stat(store_statfs(0x4f831b000/0x0/0x4ffc00000, data 0x2864578/0x29cf000, compress 0x0/0x0/0x0, omap 0x3fc4f, meta 0x4ed03b1), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141107200 unmapped: 47276032 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 248 ms_handle_reset con 0x55a1f413f400 session 0x55a1f2605500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:29.183533+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 248 handle_osd_map epochs [249,249], i have 248, src has [1,249]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149381120 unmapped: 39002112 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413fc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4619800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 249 ms_handle_reset con 0x55a1f413fc00 session 0x55a1f422f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:30.183675+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 249 heartbeat osd_stat(store_statfs(0x4f79c9000/0x0/0x4ffc00000, data 0x3358f3b/0x330a000, compress 0x0/0x0/0x0, omap 0x4045f, meta 0x4ecfba1), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 249 handle_osd_map epochs [250,250], i have 249, src has [1,250]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f4619800 session 0x55a1f26fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149602304 unmapped: 38780928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f32df400 session 0x55a1f4231dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f413f000 session 0x55a1f196da40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:31.183832+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.337150574s of 10.676717758s, submitted: 204
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f5434800 session 0x55a1f2604fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149618688 unmapped: 38764544 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2160076 data_alloc: 234881024 data_used: 21385555
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f5435800 session 0x55a1f4308c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f5435000 session 0x55a1f26ffa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f32df400 session 0x55a1f26ff180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:32.183983+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 ms_handle_reset con 0x55a1f413f000 session 0x55a1f2605dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149618688 unmapped: 38764544 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:33.184196+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149626880 unmapped: 38756352 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 250 handle_osd_map epochs [250,251], i have 250, src has [1,251]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f5434800 session 0x55a1f4231340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f5435800 session 0x55a1f4267dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:34.184407+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 38739968 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:35.184651+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32e1000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 38739968 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f32e1000 session 0x55a1f343aa80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 heartbeat osd_stat(store_statfs(0x4f79d6000/0x0/0x4ffc00000, data 0x33616c7/0x3314000, compress 0x0/0x0/0x0, omap 0x410f7, meta 0x4ecef09), peers [1,2] op hist [0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f32df400 session 0x55a1f4321500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:36.184797+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149643264 unmapped: 38739968 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2168755 data_alloc: 234881024 data_used: 21386140
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f413f000 session 0x55a1f39d3180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f5434800 session 0x55a1f41e1a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:37.184966+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149667840 unmapped: 38715392 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:38.185105+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f5435800 session 0x55a1f41e0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f61c8c00 session 0x55a1f45f3c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 38682624 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:39.185250+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 heartbeat osd_stat(store_statfs(0x4f79d9000/0x0/0x4ffc00000, data 0x3361665/0x3313000, compress 0x0/0x0/0x0, omap 0x45ac4, meta 0x4eca53c), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f413f000 session 0x55a1f41d08c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f32df400 session 0x55a1f41cc540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 38682624 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f5434800 session 0x55a1f4221c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:40.185398+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f5435800 session 0x55a1f41ccfc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149700608 unmapped: 38682624 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4739800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4542400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f4739800 session 0x55a1f41cc8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f4542400 session 0x55a1f186f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f413f000 session 0x55a1f41e1500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 ms_handle_reset con 0x55a1f32df400 session 0x55a1f2734a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 251 handle_osd_map epochs [252,252], i have 251, src has [1,252]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:41.185524+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4739800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149733376 unmapped: 38649856 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2171064 data_alloc: 234881024 data_used: 21386652
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:42.185618+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 252 heartbeat osd_stat(store_statfs(0x4f79d4000/0x0/0x4ffc00000, data 0x33630e3/0x3316000, compress 0x0/0x0/0x0, omap 0x46121, meta 0x4ec9edf), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.357239723s of 10.613820076s, submitted: 140
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149782528 unmapped: 38600704 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 252 handle_osd_map epochs [252,253], i have 252, src has [1,253]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:43.185820+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 253 ms_handle_reset con 0x55a1f5435800 session 0x55a1f41cdc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 253 heartbeat osd_stat(store_statfs(0x4f79d5000/0x0/0x4ffc00000, data 0x3363081/0x3315000, compress 0x0/0x0/0x0, omap 0x46166, meta 0x4ec9e9a), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 253 ms_handle_reset con 0x55a1f4543400 session 0x55a1f19c76c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:44.185959+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:45.186092+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 253 heartbeat osd_stat(store_statfs(0x4f79d3000/0x0/0x4ffc00000, data 0x3364c61/0x3317000, compress 0x0/0x0/0x0, omap 0x461af, meta 0x4ec9e51), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:46.186249+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2172888 data_alloc: 234881024 data_used: 21540236
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 253 heartbeat osd_stat(store_statfs(0x4f79d3000/0x0/0x4ffc00000, data 0x3364c61/0x3317000, compress 0x0/0x0/0x0, omap 0x461af, meta 0x4ec9e51), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:47.186377+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 253 heartbeat osd_stat(store_statfs(0x4f79d3000/0x0/0x4ffc00000, data 0x3364c61/0x3317000, compress 0x0/0x0/0x0, omap 0x461af, meta 0x4ec9e51), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:48.186510+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 253 handle_osd_map epochs [254,254], i have 253, src has [1,254]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 254 ms_handle_reset con 0x55a1f4543400 session 0x55a1f271e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:49.186649+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 254 heartbeat osd_stat(store_statfs(0x4f79d0000/0x0/0x4ffc00000, data 0x3366819/0x331a000, compress 0x0/0x0/0x0, omap 0x462e0, meta 0x4ec9d20), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:50.186829+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 149839872 unmapped: 38543360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:51.186956+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 254 handle_osd_map epochs [255,255], i have 254, src has [1,255]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 255 ms_handle_reset con 0x55a1f32df400 session 0x55a1f3bc8fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 255 ms_handle_reset con 0x55a1f413f000 session 0x55a1f4392540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 150904832 unmapped: 37478400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2178436 data_alloc: 234881024 data_used: 21540236
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f79cd000/0x0/0x4ffc00000, data 0x3368298/0x331d000, compress 0x0/0x0/0x0, omap 0x46673, meta 0x4ec998d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:52.187091+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 255 heartbeat osd_stat(store_statfs(0x4f79cd000/0x0/0x4ffc00000, data 0x3368298/0x331d000, compress 0x0/0x0/0x0, omap 0x46673, meta 0x4ec998d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4542400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 150904832 unmapped: 37478400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:53.187297+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 255 handle_osd_map epochs [256,256], i have 255, src has [1,256]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.962975502s of 11.056819916s, submitted: 60
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 256 ms_handle_reset con 0x55a1f4542400 session 0x55a1f196c380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 150929408 unmapped: 37453824 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:54.187428+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 256 handle_osd_map epochs [257,257], i have 256, src has [1,257]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 ms_handle_reset con 0x55a1f5435800 session 0x55a1f2734540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 157679616 unmapped: 30703616 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:55.187565+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 157794304 unmapped: 30588928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 heartbeat osd_stat(store_statfs(0x4f7443000/0x0/0x4ffc00000, data 0x38eca50/0x38a5000, compress 0x0/0x0/0x0, omap 0x47165, meta 0x4ec8e9b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 ms_handle_reset con 0x55a1f5435800 session 0x55a1f3b03880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:56.187718+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 ms_handle_reset con 0x55a1f413f000 session 0x55a1f186f880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 ms_handle_reset con 0x55a1f32df400 session 0x55a1f26056c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4542400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 ms_handle_reset con 0x55a1f4543400 session 0x55a1f41cca80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 ms_handle_reset con 0x55a1f4542400 session 0x55a1f41cc000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153788416 unmapped: 34594816 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2235558 data_alloc: 234881024 data_used: 24085388
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4542400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:57.187954+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 257 handle_osd_map epochs [258,258], i have 257, src has [1,258]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 258 ms_handle_reset con 0x55a1f413f000 session 0x55a1f4012380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 258 ms_handle_reset con 0x55a1f4542400 session 0x55a1f422e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153862144 unmapped: 34521088 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:58.188323+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 258 handle_osd_map epochs [259,259], i have 258, src has [1,259]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 259 ms_handle_reset con 0x55a1f32df400 session 0x55a1f43a7dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 259 ms_handle_reset con 0x55a1f4543400 session 0x55a1f3e31500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5435800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 259 ms_handle_reset con 0x55a1f5435800 session 0x55a1f267e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 259 heartbeat osd_stat(store_statfs(0x4f7439000/0x0/0x4ffc00000, data 0x38f02d6/0x38b1000, compress 0x0/0x0/0x0, omap 0x4795d, meta 0x4ec86a3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153927680 unmapped: 34455552 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:15:59.188460+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 259 heartbeat osd_stat(store_statfs(0x4f7439000/0x0/0x4ffc00000, data 0x38f02d6/0x38b1000, compress 0x0/0x0/0x0, omap 0x4795d, meta 0x4ec86a3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 259 handle_osd_map epochs [259,260], i have 259, src has [1,260]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 260 ms_handle_reset con 0x55a1f32df400 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 260 ms_handle_reset con 0x55a1f413f000 session 0x55a1f24fefc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153952256 unmapped: 34430976 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4542400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:00.188625+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 260 ms_handle_reset con 0x55a1f4542400 session 0x55a1f267e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 154558464 unmapped: 33824768 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:01.188818+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 260 handle_osd_map epochs [262,262], i have 260, src has [1,262]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 260 handle_osd_map epochs [261,262], i have 260, src has [1,262]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 ms_handle_reset con 0x55a1f413f800 session 0x55a1f42316c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 ms_handle_reset con 0x55a1f1886000 session 0x55a1f422e8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2262022 data_alloc: 234881024 data_used: 24486928
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 154148864 unmapped: 34234368 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 heartbeat osd_stat(store_statfs(0x4f73c9000/0x0/0x4ffc00000, data 0x395fe60/0x391f000, compress 0x0/0x0/0x0, omap 0x48144, meta 0x4ec7ebc), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:02.188922+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 ms_handle_reset con 0x55a1f32df400 session 0x55a1f41d01c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 ms_handle_reset con 0x55a1f413f000 session 0x55a1f4013880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 heartbeat osd_stat(store_statfs(0x4f85c9000/0x0/0x4ffc00000, data 0x27604bf/0x2721000, compress 0x0/0x0/0x0, omap 0x484d2, meta 0x4ec7b2e), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 ms_handle_reset con 0x55a1f413f800 session 0x55a1f3b02540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 148209664 unmapped: 40173568 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 heartbeat osd_stat(store_statfs(0x4f85c9000/0x0/0x4ffc00000, data 0x27604bf/0x2721000, compress 0x0/0x0/0x0, omap 0x484d2, meta 0x4ec7b2e), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:03.189129+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 148209664 unmapped: 40173568 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:04.189338+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4542400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.884673119s of 11.254579544s, submitted: 166
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 heartbeat osd_stat(store_statfs(0x4f85c9000/0x0/0x4ffc00000, data 0x27604bf/0x2721000, compress 0x0/0x0/0x0, omap 0x484d2, meta 0x4ec7b2e), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 40591360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:05.189473+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 262 handle_osd_map epochs [262,263], i have 262, src has [1,263]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 263 ms_handle_reset con 0x55a1f4542400 session 0x55a1f43bac40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147791872 unmapped: 40591360 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:06.189778+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 263 handle_osd_map epochs [264,264], i have 263, src has [1,264]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2105878 data_alloc: 234881024 data_used: 15348752
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147808256 unmapped: 40574976 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 264 ms_handle_reset con 0x55a1f4543400 session 0x55a1f4230fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:07.189939+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 264 handle_osd_map epochs [265,265], i have 264, src has [1,265]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147816448 unmapped: 40566784 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 265 ms_handle_reset con 0x55a1f4543400 session 0x55a1f4320c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 265 ms_handle_reset con 0x55a1f32df400 session 0x55a1f41cc1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 265 ms_handle_reset con 0x55a1f413f000 session 0x55a1f43bb6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:08.190173+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147832832 unmapped: 40550400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:09.190509+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 265 ms_handle_reset con 0x55a1f413f800 session 0x55a1f43a6a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4542400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 40542208 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 265 handle_osd_map epochs [265,266], i have 265, src has [1,266]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 266 ms_handle_reset con 0x55a1f4542400 session 0x55a1f45f2c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:10.190958+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 266 ms_handle_reset con 0x55a1f32df400 session 0x55a1f2734fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 266 ms_handle_reset con 0x55a1f413f000 session 0x55a1f20addc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 266 heartbeat osd_stat(store_statfs(0x4f85bd000/0x0/0x4ffc00000, data 0x27672e6/0x272d000, compress 0x0/0x0/0x0, omap 0x494e6, meta 0x4ec6b1a), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147841024 unmapped: 40542208 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 266 ms_handle_reset con 0x55a1f413f800 session 0x55a1f26048c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:11.191100+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 266 heartbeat osd_stat(store_statfs(0x4f85bd000/0x0/0x4ffc00000, data 0x27672e6/0x272d000, compress 0x0/0x0/0x0, omap 0x494e6, meta 0x4ec6b1a), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 266 handle_osd_map epochs [267,267], i have 266, src has [1,267]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f4543400 session 0x55a1f19c6540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1821800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f1821800 session 0x55a1f26ffa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2115828 data_alloc: 234881024 data_used: 15349024
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147759104 unmapped: 40624128 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:12.191598+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147783680 unmapped: 40599552 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:13.191783+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147783680 unmapped: 40599552 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:14.191937+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f32df400 session 0x55a1f41cd500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f413f800 session 0x55a1f4013340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f413f000 session 0x55a1f20ad180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f4543400 session 0x55a1f43ba000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.595637321s of 10.014424324s, submitted: 174
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 heartbeat osd_stat(store_statfs(0x4f7d9b000/0x0/0x4ffc00000, data 0x2f86f53/0x2f4f000, compress 0x0/0x0/0x0, omap 0x49e58, meta 0x4ec61a8), peers [1,2] op hist [0,0,0,0,0,4,0,6])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147349504 unmapped: 41033728 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f386f400 session 0x55a1f422f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 ms_handle_reset con 0x55a1f386f400 session 0x55a1f26056c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:15.192101+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147349504 unmapped: 41033728 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:16.192227+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 267 handle_osd_map epochs [268,268], i have 267, src has [1,268]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2188141 data_alloc: 234881024 data_used: 15349024
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 268 ms_handle_reset con 0x55a1f4739800 session 0x55a1f422e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 268 ms_handle_reset con 0x55a1f5434800 session 0x55a1f3bc9180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147267584 unmapped: 41115648 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:17.192369+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 268 ms_handle_reset con 0x55a1f32df400 session 0x55a1f45f2c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 268 ms_handle_reset con 0x55a1f413f800 session 0x55a1f3e30c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 268 heartbeat osd_stat(store_statfs(0x4f7b15000/0x0/0x4ffc00000, data 0x320ba43/0x31d5000, compress 0x0/0x0/0x0, omap 0x4a211, meta 0x4ec5def), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147275776 unmapped: 41107456 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:18.192931+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 268 handle_osd_map epochs [268,269], i have 268, src has [1,269]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 269 ms_handle_reset con 0x55a1f32df400 session 0x55a1f19c6540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147849216 unmapped: 40534016 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 269 ms_handle_reset con 0x55a1f413f800 session 0x55a1f43a7500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:19.193135+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 269 handle_osd_map epochs [269,270], i have 269, src has [1,270]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4739800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 270 ms_handle_reset con 0x55a1f386f400 session 0x55a1f196c380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 270 ms_handle_reset con 0x55a1f4739800 session 0x55a1f41e0fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 270 ms_handle_reset con 0x55a1f413f000 session 0x55a1f45f3c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 270 ms_handle_reset con 0x55a1f5434800 session 0x55a1f271f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147890176 unmapped: 40493056 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:20.193322+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 270 ms_handle_reset con 0x55a1f32df400 session 0x55a1f41e0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 270 ms_handle_reset con 0x55a1f386f400 session 0x55a1f4230700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147906560 unmapped: 40476672 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:21.193571+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 270 handle_osd_map epochs [271,271], i have 270, src has [1,271]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 ms_handle_reset con 0x55a1f413f800 session 0x55a1f4013340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 ms_handle_reset con 0x55a1f1842000 session 0x55a1f41d0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4231180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2228400 data_alloc: 234881024 data_used: 13432014
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 148963328 unmapped: 39419904 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 ms_handle_reset con 0x55a1f413f800 session 0x55a1f43bb6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:22.193720+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 ms_handle_reset con 0x55a1f386f400 session 0x55a1f4266c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4739800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 ms_handle_reset con 0x55a1f5434800 session 0x55a1f42208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 heartbeat osd_stat(store_statfs(0x4f69ac000/0x0/0x4ffc00000, data 0x41b0dce/0x4340000, compress 0x0/0x0/0x0, omap 0x4ada0, meta 0x4ec5260), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 148226048 unmapped: 40157184 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:23.194091+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 ms_handle_reset con 0x55a1f4739800 session 0x55a1f43a6a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 271 handle_osd_map epochs [272,272], i have 271, src has [1,272]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 272 ms_handle_reset con 0x55a1f1843000 session 0x55a1f26fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 272 ms_handle_reset con 0x55a1f386f400 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 272 ms_handle_reset con 0x55a1f32df400 session 0x55a1f271e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 272 ms_handle_reset con 0x55a1f413f800 session 0x55a1f43bb180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141410304 unmapped: 46972928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:24.194246+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5434800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141410304 unmapped: 46972928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:25.194546+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 272 ms_handle_reset con 0x55a1f5434800 session 0x55a1f4361340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 272 handle_osd_map epochs [273,273], i have 272, src has [1,273]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.141729355s of 10.887506485s, submitted: 183
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 273 ms_handle_reset con 0x55a1f1843000 session 0x55a1f422f880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 273 heartbeat osd_stat(store_statfs(0x4f76bb000/0x0/0x4ffc00000, data 0x33e3986/0x3574000, compress 0x0/0x0/0x0, omap 0x4b3d5, meta 0x4ec4c2b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141410304 unmapped: 46972928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:26.194726+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 273 handle_osd_map epochs [274,274], i have 273, src has [1,274]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 273 handle_osd_map epochs [273,274], i have 274, src has [1,274]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 274 ms_handle_reset con 0x55a1f386f400 session 0x55a1f43bac40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 274 ms_handle_reset con 0x55a1f32df400 session 0x55a1f4360fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2018376 data_alloc: 218103808 data_used: 4756174
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141467648 unmapped: 46915584 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 274 ms_handle_reset con 0x55a1f413f800 session 0x55a1f4309340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:27.194864+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 274 handle_osd_map epochs [275,275], i have 274, src has [1,275]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141484032 unmapped: 46899200 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:28.195162+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 275 handle_osd_map epochs [276,276], i have 275, src has [1,276]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 276 ms_handle_reset con 0x55a1f4543400 session 0x55a1f271e380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 276 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4308000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 276 ms_handle_reset con 0x55a1f32df400 session 0x55a1f43616c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 46784512 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:29.195376+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 276 ms_handle_reset con 0x55a1f386f400 session 0x55a1f43ba1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 276 ms_handle_reset con 0x55a1f413f800 session 0x55a1f3b03880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141598720 unmapped: 46784512 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:30.195634+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 276 handle_osd_map epochs [276,277], i have 276, src has [1,277]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 277 heartbeat osd_stat(store_statfs(0x4f998c000/0x0/0x4ffc00000, data 0x11cca85/0x135d000, compress 0x0/0x0/0x0, omap 0x4dfcb, meta 0x4ec2035), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141606912 unmapped: 46776320 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:31.195788+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 277 handle_osd_map epochs [277,278], i have 277, src has [1,278]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1966740 data_alloc: 218103808 data_used: 4760741
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 46759936 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:32.196112+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141623296 unmapped: 46759936 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4735800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:33.196336+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 278 ms_handle_reset con 0x55a1f4735800 session 0x55a1f267f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 278 ms_handle_reset con 0x55a1f4543400 session 0x55a1f4320700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 278 ms_handle_reset con 0x55a1f1843000 session 0x55a1f3e31880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141631488 unmapped: 46751744 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f32df400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:34.196542+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 278 handle_osd_map epochs [279,279], i have 278, src has [1,279]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 278 heartbeat osd_stat(store_statfs(0x4f9986000/0x0/0x4ffc00000, data 0x11d024a/0x1364000, compress 0x0/0x0/0x0, omap 0x4ea98, meta 0x4ec1568), peers [1,2] op hist [0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 278 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 278 handle_osd_map epochs [279,279], i have 279, src has [1,279]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 279 ms_handle_reset con 0x55a1f413f800 session 0x55a1f41cc380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 46727168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:35.196690+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 279 handle_osd_map epochs [280,280], i have 279, src has [1,280]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 279 handle_osd_map epochs [279,280], i have 280, src has [1,280]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.686794281s of 10.167591095s, submitted: 184
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 280 ms_handle_reset con 0x55a1f386f400 session 0x55a1f4308700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 280 ms_handle_reset con 0x55a1f32df400 session 0x55a1f4221340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 46727168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:36.196856+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 280 handle_osd_map epochs [280,281], i have 280, src has [1,281]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 ms_handle_reset con 0x55a1f1843000 session 0x55a1f43a6700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1981310 data_alloc: 218103808 data_used: 4761452
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 46727168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:37.197124+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 46727168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 heartbeat osd_stat(store_statfs(0x4f9977000/0x0/0x4ffc00000, data 0x11d54ab/0x136f000, compress 0x0/0x0/0x0, omap 0x4f498, meta 0x4ec0b68), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:38.197278+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 ms_handle_reset con 0x55a1f413f800 session 0x55a1f186f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141656064 unmapped: 46727168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 ms_handle_reset con 0x55a1f386f400 session 0x55a1f196c1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:39.197420+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 ms_handle_reset con 0x55a1f3fe0000 session 0x55a1f45f3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3faa000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 ms_handle_reset con 0x55a1f3faa000 session 0x55a1f45f2c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 ms_handle_reset con 0x55a1f1843000 session 0x55a1f422e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 281 handle_osd_map epochs [282,282], i have 281, src has [1,282]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 282 ms_handle_reset con 0x55a1f386f400 session 0x55a1f26fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 282 ms_handle_reset con 0x55a1f3fe0000 session 0x55a1f45f3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 282 ms_handle_reset con 0x55a1f413f800 session 0x55a1f45f3c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 282 ms_handle_reset con 0x55a1f8d76000 session 0x55a1f4013180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 282 ms_handle_reset con 0x55a1f4543400 session 0x55a1f41d0fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141885440 unmapped: 46497792 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:40.197583+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141885440 unmapped: 46497792 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:41.197740+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 282 ms_handle_reset con 0x55a1f1843000 session 0x55a1f43a6a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 282 handle_osd_map epochs [283,283], i have 282, src has [1,283]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 283 ms_handle_reset con 0x55a1f386f400 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2070788 data_alloc: 218103808 data_used: 4761452
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141803520 unmapped: 46579712 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:42.197861+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 283 handle_osd_map epochs [283,284], i have 283, src has [1,284]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 284 ms_handle_reset con 0x55a1f3fe0000 session 0x55a1f4230a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141860864 unmapped: 46522368 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:43.198014+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f413f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 284 ms_handle_reset con 0x55a1f413f800 session 0x55a1f42308c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 285 ms_handle_reset con 0x55a1f1843000 session 0x55a1f3b03c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 285 ms_handle_reset con 0x55a1f386f400 session 0x55a1f41d01c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 285 heartbeat osd_stat(store_statfs(0x4f8aec000/0x0/0x4ffc00000, data 0x205e45f/0x21fe000, compress 0x0/0x0/0x0, omap 0x50cf9, meta 0x4ebf307), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 46505984 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:44.198171+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 285 ms_handle_reset con 0x55a1f3fe0000 session 0x55a1f3e31dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 141877248 unmapped: 46505984 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:45.198424+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d77800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 285 ms_handle_reset con 0x55a1f8d77800 session 0x55a1f45f21c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.818905830s of 10.202585220s, submitted: 175
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d77400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 286 ms_handle_reset con 0x55a1f8d77400 session 0x55a1efdde380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 142147584 unmapped: 46235648 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:46.198548+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 287 ms_handle_reset con 0x55a1f1843000 session 0x55a1f271e380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 287 ms_handle_reset con 0x55a1f386f400 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 287 ms_handle_reset con 0x55a1f4543400 session 0x55a1f196d500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2086936 data_alloc: 218103808 data_used: 4763263
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 142155776 unmapped: 46227456 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:47.198714+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 142155776 unmapped: 46227456 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:48.198954+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d77800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 287 ms_handle_reset con 0x55a1f8d77800 session 0x55a1f4320e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 287 handle_osd_map epochs [287,288], i have 287, src has [1,288]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 288 ms_handle_reset con 0x55a1f8d76400 session 0x55a1f41e16c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 288 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4321dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 142155776 unmapped: 46227456 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:49.199108+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 288 ms_handle_reset con 0x55a1f4543400 session 0x55a1f422f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 ms_handle_reset con 0x55a1f386f400 session 0x55a1f24fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f8add000/0x0/0x4ffc00000, data 0x2063ddb/0x220b000, compress 0x0/0x0/0x0, omap 0x51920, meta 0x4ebe6e0), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d77800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 ms_handle_reset con 0x55a1f8d77800 session 0x55a1f2604fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 ms_handle_reset con 0x55a1f3fe0000 session 0x55a1f41e0700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 142172160 unmapped: 46211072 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:50.199271+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4230a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 ms_handle_reset con 0x55a1f386f400 session 0x55a1f43a6a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 heartbeat osd_stat(store_statfs(0x4f8ab5000/0x0/0x4ffc00000, data 0x2089eba/0x2235000, compress 0x0/0x0/0x0, omap 0x51d77, meta 0x4ebe289), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:51.199400+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 142327808 unmapped: 46055424 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 289 handle_osd_map epochs [289,290], i have 289, src has [1,290]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d77800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f8d76800 session 0x55a1f2604a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757e000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f757e000 session 0x55a1f41d1180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acbc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7666000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f7666000 session 0x55a1f43ba1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f7acbc00 session 0x55a1f3e316c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f4543400 session 0x55a1f4013500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f1843000 session 0x55a1f41d0fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f386f400 session 0x55a1f3b03c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757e000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f757e000 session 0x55a1f4013180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4012c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f386f400 session 0x55a1f42216c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2169666 data_alloc: 218103808 data_used: 4764493
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f4543400 session 0x55a1f26fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acbc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 heartbeat osd_stat(store_statfs(0x4f8ab1000/0x0/0x4ffc00000, data 0x208b57c/0x2235000, compress 0x0/0x0/0x0, omap 0x52312, meta 0x4ebdcee), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f7acbc00 session 0x55a1f45f2540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:52.199533+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 143294464 unmapped: 45088768 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:53.199705+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147554304 unmapped: 40828928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 ms_handle_reset con 0x55a1f8d76800 session 0x55a1f4392000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 291 ms_handle_reset con 0x55a1f4543400 session 0x55a1f4361c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:54.199828+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147562496 unmapped: 40820736 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:55.199957+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147562496 unmapped: 40820736 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.977007866s of 10.211266518s, submitted: 171
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 292 ms_handle_reset con 0x55a1f386f400 session 0x55a1f3e30700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acbc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 292 ms_handle_reset con 0x55a1f7acbc00 session 0x55a1f3b03180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 292 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4320380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:56.200098+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147595264 unmapped: 40787968 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 293 ms_handle_reset con 0x55a1f8d76800 session 0x55a1f39d2e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2277032 data_alloc: 234881024 data_used: 19893215
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:57.200218+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 147963904 unmapped: 40419328 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 293 heartbeat osd_stat(store_statfs(0x4f8163000/0x0/0x4ffc00000, data 0x29d6315/0x2b87000, compress 0x0/0x0/0x0, omap 0x52abb, meta 0x4ebd545), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acbc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 ms_handle_reset con 0x55a1f7acbc00 session 0x55a1f4266380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 ms_handle_reset con 0x55a1f8d76800 session 0x55a1f42316c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 ms_handle_reset con 0x55a1f4543400 session 0x55a1f271ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7666400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 ms_handle_reset con 0x55a1f7666400 session 0x55a1f4393a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:58.200333+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152354816 unmapped: 36028416 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 heartbeat osd_stat(store_statfs(0x4f8134000/0x0/0x4ffc00000, data 0x2a01cf3/0x2bb3000, compress 0x0/0x0/0x0, omap 0x52d7d, meta 0x4ebd283), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7666800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 ms_handle_reset con 0x55a1f7666800 session 0x55a1f41d0380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:16:59.200448+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159539200 unmapped: 28844032 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:00.200584+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159539200 unmapped: 28844032 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 heartbeat osd_stat(store_statfs(0x4f812e000/0x0/0x4ffc00000, data 0x2a035fa/0x2bb6000, compress 0x0/0x0/0x0, omap 0x5390c, meta 0x4ebc6f4), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 ms_handle_reset con 0x55a1f4543400 session 0x55a1f4267180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7666400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:01.200724+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159571968 unmapped: 28811264 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 294 handle_osd_map epochs [294,295], i have 294, src has [1,295]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acbc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 295 ms_handle_reset con 0x55a1f8d76800 session 0x55a1f271e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2339800 data_alloc: 251658240 data_used: 29450336
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:02.200861+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 159039488 unmapped: 29343744 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 296 ms_handle_reset con 0x55a1f7acbc00 session 0x55a1f42661c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7668c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 296 ms_handle_reset con 0x55a1f7668c00 session 0x55a1f4231180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 296 ms_handle_reset con 0x55a1f7666400 session 0x55a1f4266380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 296 heartbeat osd_stat(store_statfs(0x4f8129000/0x0/0x4ffc00000, data 0x2a06e76/0x2bbf000, compress 0x0/0x0/0x0, omap 0x53f19, meta 0x4ebc0e7), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:03.201009+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162144256 unmapped: 26238976 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:04.201088+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168493056 unmapped: 19890176 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4543400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:05.201171+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168493056 unmapped: 19890176 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 297 ms_handle_reset con 0x55a1f4543400 session 0x55a1f4320e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7668c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.862807751s of 10.094451904s, submitted: 202
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:06.201297+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 169058304 unmapped: 19324928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 297 ms_handle_reset con 0x55a1f7668c00 session 0x55a1f4309c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2418580 data_alloc: 251658240 data_used: 30651390
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:07.201443+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 169066496 unmapped: 19316736 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acbc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 297 ms_handle_reset con 0x55a1f7acbc00 session 0x55a1f41d0380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:08.201584+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 169091072 unmapped: 19292160 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 297 heartbeat osd_stat(store_statfs(0x4f67df000/0x0/0x4ffc00000, data 0x31a8ea7/0x3362000, compress 0x0/0x0/0x0, omap 0x5416f, meta 0x605be91), peers [1,2] op hist [0,0,0,0,2,6])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:09.201848+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 169197568 unmapped: 19185664 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:10.202006+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 169197568 unmapped: 19185664 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:11.203467+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f8d76800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172163072 unmapped: 16220160 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4737c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 297 ms_handle_reset con 0x55a1f4737c00 session 0x55a1f43a6e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 298 ms_handle_reset con 0x55a1f4736000 session 0x55a1f2735a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 298 heartbeat osd_stat(store_statfs(0x4f6062000/0x0/0x4ffc00000, data 0x3923b3f/0x3ae0000, compress 0x0/0x0/0x0, omap 0x548ae, meta 0x605b752), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2488235 data_alloc: 251658240 data_used: 32393230
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:12.203627+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 173056000 unmapped: 15327232 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 299 ms_handle_reset con 0x55a1f4736800 session 0x55a1f4012fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 299 ms_handle_reset con 0x55a1f8d77800 session 0x55a1f26fefc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 299 ms_handle_reset con 0x55a1f5411000 session 0x55a1f41e1a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 299 ms_handle_reset con 0x55a1f4736000 session 0x55a1f4221340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 299 ms_handle_reset con 0x55a1f8d76800 session 0x55a1f3e30a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 299 ms_handle_reset con 0x55a1f4736000 session 0x55a1f41e08c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:13.204631+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172826624 unmapped: 15556608 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:14.204869+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172834816 unmapped: 15548416 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 299 handle_osd_map epochs [299,300], i have 299, src has [1,300]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 ms_handle_reset con 0x55a1f4736800 session 0x55a1f4231a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 ms_handle_reset con 0x55a1f5411000 session 0x55a1f41d0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:15.205156+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 16998400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 ms_handle_reset con 0x55a1f757f800 session 0x55a1f4321500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:16.205290+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171384832 unmapped: 16998400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.310671806s of 10.895852089s, submitted: 197
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acb400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 ms_handle_reset con 0x55a1f7acb400 session 0x55a1f4266c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2492461 data_alloc: 251658240 data_used: 32530395
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:17.205658+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 heartbeat osd_stat(store_statfs(0x4f6078000/0x0/0x4ffc00000, data 0x39137ef/0x3ad2000, compress 0x0/0x0/0x0, omap 0x551c7, meta 0x605ae39), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 ms_handle_reset con 0x55a1f4736000 session 0x55a1f3bc9500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171393024 unmapped: 16990208 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 300 handle_osd_map epochs [300,301], i have 301, src has [1,301]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 301 ms_handle_reset con 0x55a1f4736800 session 0x55a1f271f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f6078000/0x0/0x4ffc00000, data 0x39137ef/0x3ad2000, compress 0x0/0x0/0x0, omap 0x551c7, meta 0x605ae39), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 301 ms_handle_reset con 0x55a1f5411000 session 0x55a1f4013180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 301 ms_handle_reset con 0x55a1f7acb800 session 0x55a1f4221c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:18.205799+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171401216 unmapped: 16982016 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:19.205973+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171401216 unmapped: 16982016 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 301 heartbeat osd_stat(store_statfs(0x4f6075000/0x0/0x4ffc00000, data 0x39153a7/0x3ad5000, compress 0x0/0x0/0x0, omap 0x5566d, meta 0x605a993), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 302 ms_handle_reset con 0x55a1f757f800 session 0x55a1f196c1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 302 ms_handle_reset con 0x55a1f4736000 session 0x55a1f196c380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:20.206154+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 16949248 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:21.206542+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171433984 unmapped: 16949248 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:22.206693+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2496350 data_alloc: 251658240 data_used: 32532276
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 302 handle_osd_map epochs [302,303], i have 302, src has [1,303]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171450368 unmapped: 16932864 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:23.206999+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 303 ms_handle_reset con 0x55a1f5411000 session 0x55a1f3e30c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171458560 unmapped: 16924672 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 304 ms_handle_reset con 0x55a1f4736800 session 0x55a1f26041c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 304 ms_handle_reset con 0x55a1f757f800 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 304 ms_handle_reset con 0x55a1f7acb800 session 0x55a1f45f2540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 304 heartbeat osd_stat(store_statfs(0x4f6071000/0x0/0x4ffc00000, data 0x391a013/0x3ad9000, compress 0x0/0x0/0x0, omap 0x5687f, meta 0x6059781), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:24.207681+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171507712 unmapped: 16875520 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 304 ms_handle_reset con 0x55a1f4736000 session 0x55a1f41cce00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 304 ms_handle_reset con 0x55a1f4736800 session 0x55a1f4231500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:25.207837+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acb000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171524096 unmapped: 16859136 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:26.207970+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171524096 unmapped: 16859136 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:27.208275+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2503585 data_alloc: 251658240 data_used: 32567092
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171556864 unmapped: 16826368 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.659055710s of 10.405235291s, submitted: 145
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:28.208436+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 305 heartbeat osd_stat(store_statfs(0x4f606f000/0x0/0x4ffc00000, data 0x391bbbc/0x3adb000, compress 0x0/0x0/0x0, omap 0x56d6d, meta 0x6059293), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171589632 unmapped: 16793600 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 305 ms_handle_reset con 0x55a1f5411000 session 0x55a1f45f21c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 305 ms_handle_reset con 0x55a1f7acac00 session 0x55a1f4393a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:29.210073+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171614208 unmapped: 16769024 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 306 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f2605c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 306 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f26ffa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:30.210225+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171704320 unmapped: 16678912 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 307 heartbeat osd_stat(store_statfs(0x4f606f000/0x0/0x4ffc00000, data 0x391cd8d/0x3adb000, compress 0x0/0x0/0x0, omap 0x57414, meta 0x6058bec), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:31.210411+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171761664 unmapped: 16621568 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 308 ms_handle_reset con 0x55a1f4736000 session 0x55a1f4321dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 308 ms_handle_reset con 0x55a1f4736800 session 0x55a1f26fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:32.210566+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f606a000/0x0/0x4ffc00000, data 0x39205d3/0x3ae0000, compress 0x0/0x0/0x0, omap 0x57d28, meta 0x60582d8), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2507565 data_alloc: 251658240 data_used: 32575112
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 16547840 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:33.210807+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 16547840 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:34.210982+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 16547840 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:35.211246+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 16547840 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 308 heartbeat osd_stat(store_statfs(0x4f606b000/0x0/0x4ffc00000, data 0x39205c3/0x3adf000, compress 0x0/0x0/0x0, omap 0x57c90, meta 0x6058370), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 308 ms_handle_reset con 0x55a1f5411000 session 0x55a1f43a6700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:36.211412+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 16547840 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 308 handle_osd_map epochs [308,309], i have 309, src has [1,309]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 309 ms_handle_reset con 0x55a1f3bce800 session 0x55a1f422ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:37.211596+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2512983 data_alloc: 251658240 data_used: 32595592
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171859968 unmapped: 16523264 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.787612915s of 10.181936264s, submitted: 147
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 310 ms_handle_reset con 0x55a1f7acac00 session 0x55a1f4012c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 310 ms_handle_reset con 0x55a1f3bce800 session 0x55a1efdde380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:38.211718+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171909120 unmapped: 16474112 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:39.211876+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 16588800 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 310 heartbeat osd_stat(store_statfs(0x4f6064000/0x0/0x4ffc00000, data 0x3923d7b/0x3ae6000, compress 0x0/0x0/0x0, omap 0x5859b, meta 0x6057a65), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:40.212030+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 310 ms_handle_reset con 0x55a1f4736000 session 0x55a1f43a6e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 310 ms_handle_reset con 0x55a1f4736800 session 0x55a1f4320e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171794432 unmapped: 16588800 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 311 heartbeat osd_stat(store_statfs(0x4f605f000/0x0/0x4ffc00000, data 0x3925989/0x3aeb000, compress 0x0/0x0/0x0, omap 0x58a4e, meta 0x60575b2), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 311 ms_handle_reset con 0x55a1f3bce400 session 0x55a1f3e31dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 311 ms_handle_reset con 0x55a1f3bce800 session 0x55a1f4267880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:41.212245+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171819008 unmapped: 16564224 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 311 handle_osd_map epochs [311,312], i have 311, src has [1,312]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 312 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f45f3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 312 ms_handle_reset con 0x55a1f61c8400 session 0x55a1f24fe700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 312 ms_handle_reset con 0x55a1f4736000 session 0x55a1f45f3a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 312 ms_handle_reset con 0x55a1f5411000 session 0x55a1f4221dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 312 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f267f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:42.212377+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2536196 data_alloc: 251658240 data_used: 33415465
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171835392 unmapped: 16547840 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 312 heartbeat osd_stat(store_statfs(0x4f6057000/0x0/0x4ffc00000, data 0x3927ad7/0x3af1000, compress 0x0/0x0/0x0, omap 0x58b7c, meta 0x6057484), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 312 handle_osd_map epochs [312,313], i have 312, src has [1,313]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 313 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f26056c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 313 ms_handle_reset con 0x55a1f3bce800 session 0x55a1f41cc380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 313 ms_handle_reset con 0x55a1f4736000 session 0x55a1f4230fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 313 heartbeat osd_stat(store_statfs(0x4f6052000/0x0/0x4ffc00000, data 0x3929665/0x3af3000, compress 0x0/0x0/0x0, omap 0x58caa, meta 0x6057356), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:43.212528+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 173154304 unmapped: 15228928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:44.212662+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 173170688 unmapped: 15212544 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 313 ms_handle_reset con 0x55a1f61c8400 session 0x55a1f42308c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 313 handle_osd_map epochs [313,314], i have 313, src has [1,314]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 314 ms_handle_reset con 0x55a1f4736000 session 0x55a1f26041c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:45.212928+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172425216 unmapped: 15958016 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 314 handle_osd_map epochs [315,315], i have 314, src has [1,315]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 315 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f271f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 315 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f41d1dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 315 ms_handle_reset con 0x55a1f61c8400 session 0x55a1f39d3500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:46.213093+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172433408 unmapped: 15949824 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 315 handle_osd_map epochs [315,316], i have 315, src has [1,316]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 316 ms_handle_reset con 0x55a1f3bce800 session 0x55a1f4230540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:47.213229+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 2547743 data_alloc: 251658240 data_used: 33412393
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172466176 unmapped: 15917056 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.892945290s of 10.098443985s, submitted: 66
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 316 handle_osd_map epochs [316,317], i have 316, src has [1,317]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 317 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f41e0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 317 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f422ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 317 ms_handle_reset con 0x55a1f4736000 session 0x55a1f422f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:48.213363+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 317 heartbeat osd_stat(store_statfs(0x4f6048000/0x0/0x4ffc00000, data 0x393140b/0x3b00000, compress 0x0/0x0/0x0, omap 0x59dca, meta 0x6056236), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172523520 unmapped: 15859712 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 317 handle_osd_map epochs [318,318], i have 317, src has [1,318]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 318 ms_handle_reset con 0x55a1f61c8400 session 0x55a1f4221a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:49.213614+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 318 ms_handle_reset con 0x55a1f4736800 session 0x55a1f41cda40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172613632 unmapped: 15769600 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:50.213764+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172670976 unmapped: 15712256 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:51.213897+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 318 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f186f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 318 ms_handle_reset con 0x55a1f1843000 session 0x55a1f2605340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 318 ms_handle_reset con 0x55a1f386f400 session 0x55a1f186f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 172695552 unmapped: 15687680 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 318 handle_osd_map epochs [318,319], i have 318, src has [1,319]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 318 handle_osd_map epochs [319,319], i have 319, src has [1,319]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 319 ms_handle_reset con 0x55a1f4736800 session 0x55a1f26fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 319 ms_handle_reset con 0x55a1f4736000 session 0x55a1f267efc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:52.214142+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2385540 data_alloc: 234881024 data_used: 21893304
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 22601728 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 319 handle_osd_map epochs [320,320], i have 319, src has [1,320]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 320 ms_handle_reset con 0x55a1f61c8400 session 0x55a1f41d01c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 320 ms_handle_reset con 0x55a1f4736000 session 0x55a1f4393c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 320 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4013880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 320 heartbeat osd_stat(store_statfs(0x4f713c000/0x0/0x4ffc00000, data 0x283bd2c/0x2a0e000, compress 0x0/0x0/0x0, omap 0x5ad1e, meta 0x60552e2), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:53.214315+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 165781504 unmapped: 22601728 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:54.214556+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 165789696 unmapped: 22593536 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 320 handle_osd_map epochs [321,321], i have 320, src has [1,321]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 321 ms_handle_reset con 0x55a1f1843000 session 0x55a1f196d500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386f400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 321 ms_handle_reset con 0x55a1f386f400 session 0x55a1f3bc8700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:55.214696+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164749312 unmapped: 23633920 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:56.214872+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164749312 unmapped: 23633920 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 321 heartbeat osd_stat(store_statfs(0x4f7138000/0x0/0x4ffc00000, data 0x283d928/0x2a10000, compress 0x0/0x0/0x0, omap 0x5b6dc, meta 0x6054924), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 321 handle_osd_map epochs [322,322], i have 321, src has [1,322]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 322 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4308700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 322 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f24fe700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:57.215654+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2393307 data_alloc: 234881024 data_used: 21901347
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 23584768 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.777832985s of 10.080501556s, submitted: 238
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 322 handle_osd_map epochs [323,323], i have 322, src has [1,323]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 323 ms_handle_reset con 0x55a1f4736000 session 0x55a1f4267880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:58.215814+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 23584768 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 323 handle_osd_map epochs [323,324], i have 323, src has [1,324]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:17:59.215976+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164798464 unmapped: 23584768 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 324 handle_osd_map epochs [324,325], i have 324, src has [1,325]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 325 ms_handle_reset con 0x55a1f61c8400 session 0x55a1f43bb340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 325 ms_handle_reset con 0x55a1f4736800 session 0x55a1f3e301c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 325 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f45f36c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:00.216106+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164831232 unmapped: 23552000 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 325 heartbeat osd_stat(store_statfs(0x4f7132000/0x0/0x4ffc00000, data 0x284461b/0x2a16000, compress 0x0/0x0/0x0, omap 0x5c42e, meta 0x6053bd2), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:01.216303+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164855808 unmapped: 23527424 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 325 handle_osd_map epochs [326,326], i have 325, src has [1,326]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 326 ms_handle_reset con 0x55a1f4736800 session 0x55a1f2734e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 326 ms_handle_reset con 0x55a1f1843000 session 0x55a1f271f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:02.216429+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2401307 data_alloc: 234881024 data_used: 21901331
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164872192 unmapped: 23511040 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 326 handle_osd_map epochs [327,327], i have 326, src has [1,327]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 327 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f422f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:03.216695+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164872192 unmapped: 23511040 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 327 ms_handle_reset con 0x55a1f61c8400 session 0x55a1f42661c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 327 ms_handle_reset con 0x55a1f4736000 session 0x55a1f26fe700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:04.217170+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 327 heartbeat osd_stat(store_statfs(0x4f712c000/0x0/0x4ffc00000, data 0x2847e8b/0x2a1c000, compress 0x0/0x0/0x0, omap 0x5d199, meta 0x6052e67), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164880384 unmapped: 23502848 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:05.217462+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164888576 unmapped: 23494656 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:06.217639+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164888576 unmapped: 23494656 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 327 handle_osd_map epochs [328,328], i have 327, src has [1,328]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 327 handle_osd_map epochs [327,328], i have 328, src has [1,328]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:07.217836+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2409419 data_alloc: 234881024 data_used: 21897235
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 165953536 unmapped: 22429696 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.120904922s of 10.119972229s, submitted: 196
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:08.218030+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 328 ms_handle_reset con 0x55a1f1843000 session 0x55a1f39d3180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166002688 unmapped: 22380544 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 328 heartbeat osd_stat(store_statfs(0x4f7129000/0x0/0x4ffc00000, data 0x28ff942/0x2a23000, compress 0x0/0x0/0x0, omap 0x5d92f, meta 0x60526d1), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:09.218215+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166002688 unmapped: 22380544 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:10.218372+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166002688 unmapped: 22380544 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 328 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f26ff6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 328 handle_osd_map epochs [329,329], i have 328, src has [1,329]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f7acac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:11.218507+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 329 ms_handle_reset con 0x55a1f4736800 session 0x55a1f4012700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166010880 unmapped: 22372352 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 329 handle_osd_map epochs [329,330], i have 329, src has [1,330]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 330 ms_handle_reset con 0x55a1f7acac00 session 0x55a1f42216c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 330 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f41cc540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:12.218638+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2431053 data_alloc: 234881024 data_used: 21901445
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 330 heartbeat osd_stat(store_statfs(0x4f711c000/0x0/0x4ffc00000, data 0x290314e/0x2a2c000, compress 0x0/0x0/0x0, omap 0x5cfcc, meta 0x6053034), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166019072 unmapped: 22364160 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 330 ms_handle_reset con 0x55a1f1843000 session 0x55a1f2604a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 330 handle_osd_map epochs [330,331], i have 330, src has [1,331]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 331 ms_handle_reset con 0x55a1f4736000 session 0x55a1f2604c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:13.218939+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166051840 unmapped: 22331392 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 331 handle_osd_map epochs [332,332], i have 331, src has [1,332]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 332 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f4231340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:14.219090+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 332 ms_handle_reset con 0x55a1f4736800 session 0x55a1f24fe380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166084608 unmapped: 22298624 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:15.219254+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166084608 unmapped: 22298624 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:16.219509+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 332 handle_osd_map epochs [333,333], i have 332, src has [1,333]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 332 handle_osd_map epochs [332,333], i have 333, src has [1,333]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 333 ms_handle_reset con 0x55a1f1843000 session 0x55a1f41e0700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 333 ms_handle_reset con 0x55a1f4736800 session 0x55a1f2604fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 333 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f41d08c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166141952 unmapped: 22241280 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bcec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:17.219721+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2437477 data_alloc: 234881024 data_used: 21901331
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 333 ms_handle_reset con 0x55a1f3bcec00 session 0x55a1f45f2000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166084608 unmapped: 22298624 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4736000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:18.219923+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 333 heartbeat osd_stat(store_statfs(0x4f7116000/0x0/0x4ffc00000, data 0x290859f/0x2a32000, compress 0x0/0x0/0x0, omap 0x5fb37, meta 0x60504c9), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166084608 unmapped: 22298624 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f61c8000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 333 handle_osd_map epochs [334,334], i have 333, src has [1,334]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.621878624s of 10.959502220s, submitted: 175
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:19.220100+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166092800 unmapped: 22290432 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 334 ms_handle_reset con 0x55a1f61c8000 session 0x55a1f196da40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:20.220243+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166125568 unmapped: 22257664 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:21.220375+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166125568 unmapped: 22257664 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 334 handle_osd_map epochs [335,335], i have 334, src has [1,335]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:22.220523+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2442945 data_alloc: 234881024 data_used: 21933091
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 22249472 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:23.220709+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 335 heartbeat osd_stat(store_statfs(0x4f7112000/0x0/0x4ffc00000, data 0x290bc7e/0x2a38000, compress 0x0/0x0/0x0, omap 0x602ce, meta 0x604fd32), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 22249472 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:24.220871+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 22249472 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:25.221082+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 335 heartbeat osd_stat(store_statfs(0x4f7112000/0x0/0x4ffc00000, data 0x290bc7e/0x2a38000, compress 0x0/0x0/0x0, omap 0x602ce, meta 0x604fd32), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 22249472 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:26.221264+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166133760 unmapped: 22249472 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 335 handle_osd_map epochs [336,336], i have 335, src has [1,336]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:27.221384+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2445719 data_alloc: 234881024 data_used: 21933091
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166141952 unmapped: 22241280 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 336 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4393a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:28.221535+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 336 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4266a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166141952 unmapped: 22241280 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:29.221995+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.077654839s of 10.528940201s, submitted: 61
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 336 heartbeat osd_stat(store_statfs(0x4f710c000/0x0/0x4ffc00000, data 0x290d811/0x2a3e000, compress 0x0/0x0/0x0, omap 0x6083c, meta 0x604f7c4), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166150144 unmapped: 22233088 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 336 ms_handle_reset con 0x55a1f454bc00 session 0x55a1f267f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 336 handle_osd_map epochs [337,337], i have 336, src has [1,337]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 337 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f422e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:30.222138+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454b800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 337 ms_handle_reset con 0x55a1f454b800 session 0x55a1f19c7500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 170000384 unmapped: 18382848 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 337 handle_osd_map epochs [337,338], i have 337, src has [1,338]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 338 ms_handle_reset con 0x55a1f1843000 session 0x55a1f27356c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:31.222289+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 170033152 unmapped: 18350080 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:32.222398+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 338 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4267180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 338 heartbeat osd_stat(store_statfs(0x4f6daa000/0x0/0x4ffc00000, data 0x2c6befb/0x2d9e000, compress 0x0/0x0/0x0, omap 0x6117c, meta 0x604ee84), peers [1,2] op hist [0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2485216 data_alloc: 234881024 data_used: 22761621
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166862848 unmapped: 21520384 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 338 heartbeat osd_stat(store_statfs(0x4f6daa000/0x0/0x4ffc00000, data 0x2c6befb/0x2d9e000, compress 0x0/0x0/0x0, omap 0x6117c, meta 0x604ee84), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 338 ms_handle_reset con 0x55a1f454bc00 session 0x55a1f2735180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:33.222597+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166879232 unmapped: 21504000 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 338 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f26fee00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:34.222736+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166879232 unmapped: 21504000 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:35.222865+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166879232 unmapped: 21504000 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:36.223218+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 338 handle_osd_map epochs [339,339], i have 338, src has [1,339]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 339 ms_handle_reset con 0x55a1f473a800 session 0x55a1f42676c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168017920 unmapped: 20365312 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:37.223421+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 339 handle_osd_map epochs [340,340], i have 339, src has [1,340]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2501049 data_alloc: 234881024 data_used: 22769699
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 340 ms_handle_reset con 0x55a1f473a800 session 0x55a1f24ff340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168034304 unmapped: 20348928 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:38.223688+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 340 ms_handle_reset con 0x55a1f1843000 session 0x55a1f3b02540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 340 heartbeat osd_stat(store_statfs(0x4f6cd3000/0x0/0x4ffc00000, data 0x2d424c0/0x2e75000, compress 0x0/0x0/0x0, omap 0x616c1, meta 0x604e93f), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 167206912 unmapped: 21176320 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:39.223857+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.420934677s of 10.011519432s, submitted: 118
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 167215104 unmapped: 21168128 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:40.224007+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 340 heartbeat osd_stat(store_statfs(0x4f6cd7000/0x0/0x4ffc00000, data 0x2d424c0/0x2e75000, compress 0x0/0x0/0x0, omap 0x61838, meta 0x604e7c8), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 167215104 unmapped: 21168128 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 340 heartbeat osd_stat(store_statfs(0x4f6cd7000/0x0/0x4ffc00000, data 0x2d424c0/0x2e75000, compress 0x0/0x0/0x0, omap 0x61838, meta 0x604e7c8), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:41.224143+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 167215104 unmapped: 21168128 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 340 handle_osd_map epochs [341,341], i have 340, src has [1,341]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f24ff6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:42.224303+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2500575 data_alloc: 234881024 data_used: 22765603
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168280064 unmapped: 20103168 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 ms_handle_reset con 0x55a1f454bc00 session 0x55a1f3b02000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:43.224501+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f41e0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168312832 unmapped: 20070400 heap: 188383232 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f41cda40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:44.224657+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 ms_handle_reset con 0x55a1f1843000 session 0x55a1f186fdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f3b036c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168534016 unmapped: 35069952 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:45.224840+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168534016 unmapped: 35069952 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:46.224985+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 heartbeat osd_stat(store_statfs(0x4f5e4c000/0x0/0x4ffc00000, data 0x3bcc0b0/0x3d00000, compress 0x0/0x0/0x0, omap 0x6174a, meta 0x604e8b6), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 ms_handle_reset con 0x55a1f4736000 session 0x55a1f43bb340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168214528 unmapped: 35389440 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 341 handle_osd_map epochs [342,342], i have 341, src has [1,342]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 342 ms_handle_reset con 0x55a1f454bc00 session 0x55a1f4221c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 342 ms_handle_reset con 0x55a1f473a800 session 0x55a1f196c1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 342 ms_handle_reset con 0x55a1f473bc00 session 0x55a1f4230540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:47.225170+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2512790 data_alloc: 234881024 data_used: 22896675
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 342 ms_handle_reset con 0x55a1f454bc00 session 0x55a1f26fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168222720 unmapped: 35381248 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 342 handle_osd_map epochs [343,343], i have 342, src has [1,343]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 343 ms_handle_reset con 0x55a1f1843000 session 0x55a1f39d3180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:48.225331+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168230912 unmapped: 35373056 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 343 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f24fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 343 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f186f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:49.225483+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168247296 unmapped: 35356672 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:50.225640+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 343 handle_osd_map epochs [344,344], i have 343, src has [1,344]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.443590164s of 10.878000259s, submitted: 131
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 344 ms_handle_reset con 0x55a1f1843000 session 0x55a1f2605500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 344 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4230700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 344 ms_handle_reset con 0x55a1f454bc00 session 0x55a1f24fe380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168247296 unmapped: 35356672 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 344 heartbeat osd_stat(store_statfs(0x4f6cca000/0x0/0x4ffc00000, data 0x2d47874/0x2e7e000, compress 0x0/0x0/0x0, omap 0x60d40, meta 0x604f2c0), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:51.225777+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 344 ms_handle_reset con 0x55a1f473bc00 session 0x55a1f41d0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168247296 unmapped: 35356672 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 344 handle_osd_map epochs [344,345], i have 344, src has [1,345]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:52.225945+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2493166 data_alloc: 234881024 data_used: 22642707
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168271872 unmapped: 35332096 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 345 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4321500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:53.226119+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168271872 unmapped: 35332096 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:54.234631+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 345 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x291ce9b/0x2a55000, compress 0x0/0x0/0x0, omap 0x643c9, meta 0x604bc37), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168271872 unmapped: 35332096 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 345 handle_osd_map epochs [346,346], i have 345, src has [1,346]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 346 ms_handle_reset con 0x55a1f1843000 session 0x55a1f3b02540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:55.234895+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168296448 unmapped: 35307520 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:56.235143+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 346 handle_osd_map epochs [347,347], i have 346, src has [1,347]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168312832 unmapped: 35291136 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 347 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f3b036c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:57.235329+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2486730 data_alloc: 234881024 data_used: 21897235
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168312832 unmapped: 35291136 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 347 ms_handle_reset con 0x55a1f757f800 session 0x55a1f4013c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 347 ms_handle_reset con 0x55a1f7acb000 session 0x55a1f45f2540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 347 handle_osd_map epochs [347,348], i have 347, src has [1,348]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:58.235462+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 348 ms_handle_reset con 0x55a1f1843000 session 0x55a1f196c380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168321024 unmapped: 35282944 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:18:59.235600+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 348 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4266a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 348 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f26048c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168321024 unmapped: 35282944 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:00.235855+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 348 heartbeat osd_stat(store_statfs(0x4f70f3000/0x0/0x4ffc00000, data 0x286c27c/0x2a59000, compress 0x0/0x0/0x0, omap 0x64de1, meta 0x604b21f), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168337408 unmapped: 35266560 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:01.236193+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168337408 unmapped: 35266560 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:02.236414+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 348 handle_osd_map epochs [349,349], i have 348, src has [1,349]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.850229263s of 12.075550079s, submitted: 92
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2493063 data_alloc: 234881024 data_used: 21897235
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 349 ms_handle_reset con 0x55a1f757f800 session 0x55a1f43a68c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168370176 unmapped: 35233792 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:03.236642+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 349 heartbeat osd_stat(store_statfs(0x4f70ed000/0x0/0x4ffc00000, data 0x286dd7d/0x2a5d000, compress 0x0/0x0/0x0, omap 0x653be, meta 0x604ac42), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168370176 unmapped: 35233792 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:04.236784+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f454bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 349 ms_handle_reset con 0x55a1f454bc00 session 0x55a1f43a6fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168378368 unmapped: 35225600 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 349 heartbeat osd_stat(store_statfs(0x4f70ed000/0x0/0x4ffc00000, data 0x286dd7d/0x2a5d000, compress 0x0/0x0/0x0, omap 0x653be, meta 0x604ac42), peers [1,2] op hist [0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 349 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f45f3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:05.236961+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 349 handle_osd_map epochs [350,350], i have 349, src has [1,350]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 349 handle_osd_map epochs [349,350], i have 350, src has [1,350]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155779072 unmapped: 47824896 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 350 ms_handle_reset con 0x55a1f1843000 session 0x55a1f3b03c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:06.237181+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155779072 unmapped: 47824896 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:07.237435+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 350 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2266262 data_alloc: 218103808 data_used: 4781157
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 350 heartbeat osd_stat(store_statfs(0x4f8657000/0x0/0x4ffc00000, data 0x124d8a7/0x143c000, compress 0x0/0x0/0x0, omap 0x658aa, meta 0x604a756), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155779072 unmapped: 47824896 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:08.237666+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155779072 unmapped: 47824896 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:09.237874+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155779072 unmapped: 47824896 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:10.238079+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 350 handle_osd_map epochs [351,351], i have 350, src has [1,351]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 ms_handle_reset con 0x55a1f757f800 session 0x55a1f4266c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155787264 unmapped: 47816704 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:11.238275+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 heartbeat osd_stat(store_statfs(0x4f870b000/0x0/0x4ffc00000, data 0x124f497/0x143f000, compress 0x0/0x0/0x0, omap 0x659c3, meta 0x604a63d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 heartbeat osd_stat(store_statfs(0x4f870b000/0x0/0x4ffc00000, data 0x124f497/0x143f000, compress 0x0/0x0/0x0, omap 0x659c3, meta 0x604a63d), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155787264 unmapped: 47816704 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:12.238416+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 ms_handle_reset con 0x55a1f473a800 session 0x55a1f422ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 ms_handle_reset con 0x55a1f1843000 session 0x55a1f26fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2267534 data_alloc: 218103808 data_used: 4785057
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155795456 unmapped: 47808512 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:13.238686+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 heartbeat osd_stat(store_statfs(0x4f870e000/0x0/0x4ffc00000, data 0x124f435/0x143e000, compress 0x0/0x0/0x0, omap 0x65d8e, meta 0x604a272), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4320700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155795456 unmapped: 47808512 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:14.238880+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155795456 unmapped: 47808512 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 351 handle_osd_map epochs [352,352], i have 351, src has [1,352]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.457898140s of 12.687560081s, submitted: 92
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:15.239140+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 352 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f3e31880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155795456 unmapped: 47808512 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 352 handle_osd_map epochs [353,353], i have 352, src has [1,353]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:16.239314+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 353 ms_handle_reset con 0x55a1f757f800 session 0x55a1f42201c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 353 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f4220700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 353 ms_handle_reset con 0x55a1f1843000 session 0x55a1f196ce00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 353 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4393a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 353 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f4308540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155148288 unmapped: 48455680 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:17.239504+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 353 handle_osd_map epochs [354,354], i have 353, src has [1,354]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 ms_handle_reset con 0x55a1f757f800 session 0x55a1f4230380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2311860 data_alloc: 218103808 data_used: 4785670
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4730c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 ms_handle_reset con 0x55a1f4730c00 session 0x55a1f4221c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155099136 unmapped: 48504832 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:18.239655+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 ms_handle_reset con 0x55a1f1843000 session 0x55a1f271f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155099136 unmapped: 48504832 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:19.239825+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 heartbeat osd_stat(store_statfs(0x4f81af000/0x0/0x4ffc00000, data 0x17a8678/0x199b000, compress 0x0/0x0/0x0, omap 0x666d4, meta 0x604992c), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f24ffdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155099136 unmapped: 48504832 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:20.240010+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 155123712 unmapped: 48480256 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:21.240207+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f45f2e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 ms_handle_reset con 0x55a1f757f800 session 0x55a1f186f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa4000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 154247168 unmapped: 49356800 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:22.240339+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 354 handle_osd_map epochs [355,355], i have 354, src has [1,355]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2337371 data_alloc: 218103808 data_used: 7376564
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 154247168 unmapped: 49356800 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:23.240537+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 355 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f24ff880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 355 ms_handle_reset con 0x55a1f3fa4000 session 0x55a1f43936c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 154198016 unmapped: 49405952 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:24.240684+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 355 handle_osd_map epochs [355,356], i have 355, src has [1,356]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f4544400 session 0x55a1f41cda40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f1843000 session 0x55a1f26ffa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4230fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f3fdb800 session 0x55a1f3e31dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152698880 unmapped: 50905088 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:25.240870+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4230700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 heartbeat osd_stat(store_statfs(0x4f86fa000/0x0/0x4ffc00000, data 0x1257d15/0x1450000, compress 0x0/0x0/0x0, omap 0x67365, meta 0x6048c9b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.973587036s of 10.575058937s, submitted: 97
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa4000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f42201c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f3fa4000 session 0x55a1f26fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152698880 unmapped: 50905088 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f422f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f757f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:26.241021+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 ms_handle_reset con 0x55a1f757f800 session 0x55a1f41cd180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 356 handle_osd_map epochs [357,357], i have 356, src has [1,357]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 357 ms_handle_reset con 0x55a1f4544400 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 357 ms_handle_reset con 0x55a1f1843000 session 0x55a1f196c380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 50847744 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 357 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f3e31880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:27.241255+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2296595 data_alloc: 218103808 data_used: 4786868
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152756224 unmapped: 50847744 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 357 heartbeat osd_stat(store_statfs(0x4f86f8000/0x0/0x4ffc00000, data 0x12598f5/0x1452000, compress 0x0/0x0/0x0, omap 0x68336, meta 0x6047cca), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:28.241381+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa4000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 357 handle_osd_map epochs [358,358], i have 357, src has [1,358]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 358 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f3e30540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 358 ms_handle_reset con 0x55a1f3fa4000 session 0x55a1f4013500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152772608 unmapped: 50831360 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:29.241542+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 358 ms_handle_reset con 0x55a1f1843000 session 0x55a1f3e316c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152780800 unmapped: 50823168 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:30.241747+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 358 handle_osd_map epochs [358,359], i have 358, src has [1,359]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 359 heartbeat osd_stat(store_statfs(0x4f86f6000/0x0/0x4ffc00000, data 0x125b50f/0x1456000, compress 0x0/0x0/0x0, omap 0x68946, meta 0x60476ba), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 359 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f2734e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 359 ms_handle_reset con 0x55a1f4544400 session 0x55a1f26fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152821760 unmapped: 50782208 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4219000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 359 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f422e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 359 ms_handle_reset con 0x55a1f4219000 session 0x55a1f3e301c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:31.241978+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152879104 unmapped: 50724864 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:32.242137+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2309466 data_alloc: 218103808 data_used: 4787551
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152879104 unmapped: 50724864 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:33.242340+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 359 heartbeat osd_stat(store_statfs(0x4f86f0000/0x0/0x4ffc00000, data 0x125d293/0x145a000, compress 0x0/0x0/0x0, omap 0x6956c, meta 0x6046a94), peers [1,2] op hist [0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 359 handle_osd_map epochs [360,360], i have 359, src has [1,360]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 360 ms_handle_reset con 0x55a1f4738400 session 0x55a1f42676c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 360 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f45f3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 360 ms_handle_reset con 0x55a1f4544400 session 0x55a1f422e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152887296 unmapped: 50716672 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:34.242581+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152895488 unmapped: 50708480 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:35.242812+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 360 handle_osd_map epochs [361,361], i have 360, src has [1,361]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4219400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.943981647s of 10.001565933s, submitted: 150
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 361 ms_handle_reset con 0x55a1f1843000 session 0x55a1f24fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 361 ms_handle_reset con 0x55a1f4219400 session 0x55a1f186f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152944640 unmapped: 50659328 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:36.243166+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 361 handle_osd_map epochs [362,362], i have 361, src has [1,362]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 362 heartbeat osd_stat(store_statfs(0x4f86ec000/0x0/0x4ffc00000, data 0x1260a11/0x145e000, compress 0x0/0x0/0x0, omap 0x69fcc, meta 0x6046034), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 362 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f24fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 362 ms_handle_reset con 0x55a1f1843000 session 0x55a1f24ff6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153067520 unmapped: 50536448 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:37.243355+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 362 handle_osd_map epochs [362,363], i have 362, src has [1,363]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 363 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f271fdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2317394 data_alloc: 218103808 data_used: 4788623
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 363 ms_handle_reset con 0x55a1f4544400 session 0x55a1f4220fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153100288 unmapped: 50503680 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:38.243522+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153100288 unmapped: 50503680 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 363 ms_handle_reset con 0x55a1f4738400 session 0x55a1f4267c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 363 heartbeat osd_stat(store_statfs(0x4f86e8000/0x0/0x4ffc00000, data 0x1264286/0x1464000, compress 0x0/0x0/0x0, omap 0x6ad11, meta 0x60452ef), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:39.243676+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153100288 unmapped: 50503680 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 363 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f24ff880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:40.243841+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 363 ms_handle_reset con 0x55a1f4544400 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 363 handle_osd_map epochs [364,364], i have 363, src has [1,364]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153141248 unmapped: 50462720 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 364 ms_handle_reset con 0x55a1f1843000 session 0x55a1efddf180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:41.244146+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 364 heartbeat osd_stat(store_statfs(0x4f86e3000/0x0/0x4ffc00000, data 0x1265e5a/0x1467000, compress 0x0/0x0/0x0, omap 0x6b2c7, meta 0x6044d39), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 364 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f186ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4219800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 50454528 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:42.244352+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 364 handle_osd_map epochs [365,365], i have 364, src has [1,365]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2326582 data_alloc: 218103808 data_used: 4792798
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153149440 unmapped: 50454528 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:43.244534+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 ms_handle_reset con 0x55a1f4219800 session 0x55a1f4309180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152797184 unmapped: 50806784 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:44.244780+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 ms_handle_reset con 0x55a1f1843000 session 0x55a1f422f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f26ffa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152797184 unmapped: 50806784 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:45.244947+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 ms_handle_reset con 0x55a1f4544400 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f4320e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4219c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 ms_handle_reset con 0x55a1f4219c00 session 0x55a1f267f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152805376 unmapped: 50798592 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:46.245126+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.566457748s of 10.964907646s, submitted: 233
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 ms_handle_reset con 0x55a1f1843000 session 0x55a1f196d500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 heartbeat osd_stat(store_statfs(0x4f86e2000/0x0/0x4ffc00000, data 0x1267a68/0x146a000, compress 0x0/0x0/0x0, omap 0x6bdb1, meta 0x604424f), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152813568 unmapped: 50790400 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:47.245287+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 365 handle_osd_map epochs [366,366], i have 365, src has [1,366]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4393c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2335023 data_alloc: 218103808 data_used: 4793269
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4544400 session 0x55a1f4267c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 152813568 unmapped: 50790400 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:48.245496+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f3e301c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4218000 session 0x55a1f4230700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153894912 unmapped: 49709056 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:49.245748+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f1843000 session 0x55a1f267f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f20addc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 49971200 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:50.245962+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 heartbeat osd_stat(store_statfs(0x4f86da000/0x0/0x4ffc00000, data 0x1269549/0x146e000, compress 0x0/0x0/0x0, omap 0x6cd09, meta 0x60432f7), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153632768 unmapped: 49971200 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:51.246105+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4544400 session 0x55a1f4309500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153624576 unmapped: 49979392 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:52.246287+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 heartbeat osd_stat(store_statfs(0x4f86df000/0x0/0x4ffc00000, data 0x12694e7/0x146d000, compress 0x0/0x0/0x0, omap 0x6d115, meta 0x6042eeb), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f422e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4218400 session 0x55a1f45f3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2334094 data_alloc: 218103808 data_used: 4793269
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153624576 unmapped: 49979392 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:53.246519+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f1843000 session 0x55a1f196ddc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153624576 unmapped: 49979392 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:54.246724+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4218400 session 0x55a1f41cc1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f43a76c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153624576 unmapped: 49979392 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:55.246881+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4544400 session 0x55a1f24fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4544c00 session 0x55a1f41cd180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f4218800 session 0x55a1f39d2fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f1843000 session 0x55a1f45f3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f41cc380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153649152 unmapped: 49954816 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:56.247347+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153649152 unmapped: 49954816 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:57.247626+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.243700027s of 11.075099945s, submitted: 128
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 366 handle_osd_map epochs [367,367], i have 366, src has [1,367]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 ms_handle_reset con 0x55a1f4218400 session 0x55a1f27356c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2340828 data_alloc: 218103808 data_used: 4793367
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 heartbeat osd_stat(store_statfs(0x4f86de000/0x0/0x4ffc00000, data 0x1269539/0x146d000, compress 0x0/0x0/0x0, omap 0x6d977, meta 0x6042689), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 49946624 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:58.247856+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153657344 unmapped: 49946624 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:19:59.248145+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 ms_handle_reset con 0x55a1f4643c00 session 0x55a1f43a6700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 ms_handle_reset con 0x55a1f4544400 session 0x55a1f3b03c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153788416 unmapped: 49815552 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:00.248359+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153788416 unmapped: 49815552 heap: 203603968 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:01.248633+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 179003392 unmapped: 41402368 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:02.248909+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2544754 data_alloc: 218103808 data_used: 4793481
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153837568 unmapped: 66568192 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 heartbeat osd_stat(store_statfs(0x4f62da000/0x0/0x4ffc00000, data 0x366b147/0x3872000, compress 0x0/0x0/0x0, omap 0x6de06, meta 0x60421fa), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:03.249129+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158064640 unmapped: 62341120 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:04.249266+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158097408 unmapped: 62308352 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:05.249387+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:06.249638+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153911296 unmapped: 66494464 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:07.249770+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158113792 unmapped: 62291968 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2823630 data_alloc: 218103808 data_used: 4793481
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.264167786s of 10.513276100s, submitted: 47
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:08.250250+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 153919488 unmapped: 66486272 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 heartbeat osd_stat(store_statfs(0x4f1eda000/0x0/0x4ffc00000, data 0x7a6b147/0x7c72000, compress 0x0/0x0/0x0, omap 0x6de06, meta 0x60421fa), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,4])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:09.250411+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162357248 unmapped: 58048512 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:10.250614+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158171136 unmapped: 62234624 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:11.250903+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 175030272 unmapped: 45375488 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:12.251126+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 154140672 unmapped: 66265088 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 heartbeat osd_stat(store_statfs(0x4eb6da000/0x0/0x4ffc00000, data 0xe26b147/0xe472000, compress 0x0/0x0/0x0, omap 0x6de06, meta 0x60421fa), peers [1,2] op hist [0,0,0,0,0,1,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3609546 data_alloc: 218103808 data_used: 4793481
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:13.252190+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 167002112 unmapped: 53403648 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 ms_handle_reset con 0x55a1f4218400 session 0x55a1f24fe700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:14.252390+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158818304 unmapped: 61587456 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:15.252551+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 163069952 unmapped: 57335808 heap: 220405760 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:16.252717+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160227328 unmapped: 64380928 heap: 224608256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 ms_handle_reset con 0x55a1f4218800 session 0x55a1f43a6e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 heartbeat osd_stat(store_statfs(0x4e32d9000/0x0/0x4ffc00000, data 0x1666b1a9/0x16873000, compress 0x0/0x0/0x0, omap 0x6de06, meta 0x60421fa), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4266c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:17.252870+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 ms_handle_reset con 0x55a1f1843000 session 0x55a1f4230fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156450816 unmapped: 68157440 heap: 224608256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 367 handle_osd_map epochs [368,368], i have 367, src has [1,368]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 368 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f42216c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 368 ms_handle_reset con 0x55a1f4544400 session 0x55a1f45f3a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 368 ms_handle_reset con 0x55a1f4218400 session 0x55a1f4309c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4368260 data_alloc: 218103808 data_used: 4793595
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:18.253226+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156557312 unmapped: 68050944 heap: 224608256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:19.253369+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156557312 unmapped: 68050944 heap: 224608256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 368 heartbeat osd_stat(store_statfs(0x4e0ed3000/0x0/0x4ffc00000, data 0x18a6cda7/0x18c77000, compress 0x0/0x0/0x0, omap 0x6e12f, meta 0x6041ed1), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 5.815574169s of 11.951597214s, submitted: 103
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 368 handle_osd_map epochs [369,369], i have 368, src has [1,369]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:20.253527+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 369 ms_handle_reset con 0x55a1f4218800 session 0x55a1f24fe700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4642400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156729344 unmapped: 67878912 heap: 224608256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 369 ms_handle_reset con 0x55a1f4642400 session 0x55a1f4267c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 369 ms_handle_reset con 0x55a1f4643000 session 0x55a1f4320fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 369 ms_handle_reset con 0x55a1f4643800 session 0x55a1f3bc9500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:21.253657+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156893184 unmapped: 67715072 heap: 224608256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 369 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f2734fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:22.253815+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 369 handle_osd_map epochs [370,370], i have 369, src has [1,370]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 370 ms_handle_reset con 0x55a1f4218400 session 0x55a1f41cd500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 156917760 unmapped: 67690496 heap: 224608256 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 370 ms_handle_reset con 0x55a1f4544400 session 0x55a1f26fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 370 ms_handle_reset con 0x55a1f4218800 session 0x55a1f4220fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2723214 data_alloc: 218103808 data_used: 4793399
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:23.254021+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212623360 unmapped: 49782784 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 370 heartbeat osd_stat(store_statfs(0x4f3acd000/0x0/0x4ffc00000, data 0x46709c2/0x487d000, compress 0x0/0x0/0x0, omap 0x6e487, meta 0x6041b79), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4,6])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:24.254227+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162299904 unmapped: 100106240 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:25.254840+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 370 ms_handle_reset con 0x55a1f4544400 session 0x55a1f24fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162340864 unmapped: 100065280 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:26.255361+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 170754048 unmapped: 91652096 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 370 ms_handle_reset con 0x55a1f4643000 session 0x55a1f41d16c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:27.255813+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 158228480 unmapped: 104177664 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 370 handle_osd_map epochs [371,371], i have 370, src has [1,371]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 heartbeat osd_stat(store_statfs(0x4ed6ce000/0x0/0x4ffc00000, data 0xc2709d2/0xc47e000, compress 0x0/0x0/0x0, omap 0x6e487, meta 0x6041b79), peers [1,2] op hist [0,0,0,0,0,0,2,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3617200 data_alloc: 218103808 data_used: 4793399
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:28.255988+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 160473088 unmapped: 101933056 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 ms_handle_reset con 0x55a1f4643800 session 0x55a1f43a6e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4642c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:29.256204+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 169091072 unmapped: 93315072 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 heartbeat osd_stat(store_statfs(0x4e6ecb000/0x0/0x4ffc00000, data 0x12a72451/0x12c81000, compress 0x0/0x0/0x0, omap 0x6ea10, meta 0x60415f0), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 ms_handle_reset con 0x55a1f4642c00 session 0x55a1f4308540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 3.494144678s of 10.051168442s, submitted: 458
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:30.256547+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 164962304 unmapped: 97443840 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 ms_handle_reset con 0x55a1f4218800 session 0x55a1f26048c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:31.256859+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 169312256 unmapped: 93093888 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 ms_handle_reset con 0x55a1f4544400 session 0x55a1f186f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 ms_handle_reset con 0x55a1f3bce000 session 0x55a1efdde380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4642c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 ms_handle_reset con 0x55a1f4642c00 session 0x55a1f41d0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 ms_handle_reset con 0x55a1f4218400 session 0x55a1f4231c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:32.257199+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 161046528 unmapped: 101359616 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 371 handle_osd_map epochs [371,372], i have 371, src has [1,372]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4543691 data_alloc: 218103808 data_used: 4799048
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 heartbeat osd_stat(store_statfs(0x4e05bb000/0x0/0x4ffc00000, data 0x19382451/0x19591000, compress 0x0/0x0/0x0, omap 0x6ee0e, meta 0x60411f2), peers [1,2] op hist [0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f45f2000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:33.257425+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 ms_handle_reset con 0x55a1f4218800 session 0x55a1f4267340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 161046528 unmapped: 101359616 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 ms_handle_reset con 0x55a1f4544400 session 0x55a1f41ccc40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:34.257866+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 161046528 unmapped: 101359616 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4642c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:35.258112+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 100294656 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 ms_handle_reset con 0x55a1f4643000 session 0x55a1f41cc700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:36.258426+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162111488 unmapped: 100294656 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 heartbeat osd_stat(store_statfs(0x4e05b7000/0x0/0x4ffc00000, data 0x19383c15/0x19595000, compress 0x0/0x0/0x0, omap 0x6f4f8, meta 0x6040b08), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 ms_handle_reset con 0x55a1f4642c00 session 0x55a1f24fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 372 handle_osd_map epochs [373,373], i have 372, src has [1,373]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 373 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4321500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:37.258615+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 373 handle_osd_map epochs [374,374], i have 373, src has [1,374]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 374 ms_handle_reset con 0x55a1f4218800 session 0x55a1f271e380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 100229120 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 374 ms_handle_reset con 0x55a1f4643800 session 0x55a1f4266700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4556529 data_alloc: 218103808 data_used: 4801961
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:38.258847+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162177024 unmapped: 100229120 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:39.259150+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162185216 unmapped: 100220928 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:40.259341+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162185216 unmapped: 100220928 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.870332241s of 10.428020477s, submitted: 133
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 374 heartbeat osd_stat(store_statfs(0x4e05ad000/0x0/0x4ffc00000, data 0x193873a1/0x1959b000, compress 0x0/0x0/0x0, omap 0x6fc37, meta 0x60403c9), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 374 ms_handle_reset con 0x55a1f4643000 session 0x55a1f4012c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd6800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 374 ms_handle_reset con 0x55a1f3fd6800 session 0x55a1f2604c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:41.259477+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162881536 unmapped: 99524608 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 374 handle_osd_map epochs [374,375], i have 374, src has [1,375]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 375 ms_handle_reset con 0x55a1f4544400 session 0x55a1f24ffc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 375 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f186f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:42.259625+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 99500032 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4610622 data_alloc: 218103808 data_used: 4797881
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:43.259810+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 162906112 unmapped: 99500032 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 375 handle_osd_map epochs [375,376], i have 375, src has [1,376]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 376 ms_handle_reset con 0x55a1f4643000 session 0x55a1f2605500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 376 ms_handle_reset con 0x55a1f453e800 session 0x55a1f45f3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 376 ms_handle_reset con 0x55a1f228a000 session 0x55a1f267efc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 376 heartbeat osd_stat(store_statfs(0x4dfed5000/0x0/0x4ffc00000, data 0x19a5d012/0x19c75000, compress 0x0/0x0/0x0, omap 0x70504, meta 0x603fafc), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:44.259958+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 376 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f267f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 163225600 unmapped: 99180544 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 376 handle_osd_map epochs [377,377], i have 376, src has [1,377]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 376 handle_osd_map epochs [377,377], i have 377, src has [1,377]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f4643800 session 0x55a1f39d3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 heartbeat osd_stat(store_statfs(0x4dfecf000/0x0/0x4ffc00000, data 0x19a5ec10/0x19c79000, compress 0x0/0x0/0x0, omap 0x7061b, meta 0x603f9e5), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f4218800 session 0x55a1f2734a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f453e800 session 0x55a1f422e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f4544400 session 0x55a1f4309500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:45.260089+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 163315712 unmapped: 99090432 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:46.260211+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f453e800 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f4218800 session 0x55a1f2734540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 184442880 unmapped: 77963264 heap: 262406144 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 heartbeat osd_stat(store_statfs(0x4dfecd000/0x0/0x4ffc00000, data 0x19a607cf/0x19c7d000, compress 0x0/0x0/0x0, omap 0x70ee9, meta 0x603f117), peers [1,2] op hist [0,0,0,0,0,1,0,3,1,8])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:47.260393+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168108032 unmapped: 98500608 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5210874 data_alloc: 234881024 data_used: 11036503
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:48.260517+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 165789696 unmapped: 100818944 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 heartbeat osd_stat(store_statfs(0x4d8ecf000/0x0/0x4ffc00000, data 0x20a607cf/0x20c7d000, compress 0x0/0x0/0x0, omap 0x70ee9, meta 0x603f117), peers [1,2] op hist [0,0,0,0,1,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:49.260700+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 174415872 unmapped: 92192768 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 heartbeat osd_stat(store_statfs(0x4d6acf000/0x0/0x4ffc00000, data 0x22e607cf/0x2307d000, compress 0x0/0x0/0x0, omap 0x70ee9, meta 0x603f117), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 heartbeat osd_stat(store_statfs(0x4d6acf000/0x0/0x4ffc00000, data 0x22e607cf/0x2307d000, compress 0x0/0x0/0x0, omap 0x70ee9, meta 0x603f117), peers [1,2] op hist [0,0,0,0,0,0,3,2])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:50.260823+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 170541056 unmapped: 96067584 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.111308575s of 10.004493713s, submitted: 243
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:51.260980+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 170778624 unmapped: 95830016 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:52.261145+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 166805504 unmapped: 99803136 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 5933073 data_alloc: 234881024 data_used: 11760617
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f228a800 session 0x55a1f41e16c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f2735a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f41d0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:53.261307+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171802624 unmapped: 94806016 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f4309c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f228a800 session 0x55a1f4012700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:54.345968+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 heartbeat osd_stat(store_statfs(0x4cff5f000/0x0/0x4ffc00000, data 0x299d07cf/0x29bed000, compress 0x0/0x0/0x0, omap 0x70e70, meta 0x603f190), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 171868160 unmapped: 94740480 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:55.346136+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 168984576 unmapped: 97624064 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4393dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:56.346253+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f4218800 session 0x55a1f271ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 186163200 unmapped: 80445440 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:57.346372+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f453e800 session 0x55a1f43a76c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 178110464 unmapped: 88498176 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6765387 data_alloc: 234881024 data_used: 11760617
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:58.346548+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 178110464 unmapped: 88498176 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 heartbeat osd_stat(store_statfs(0x4c7b5e000/0x0/0x4ffc00000, data 0x31dd07df/0x31fee000, compress 0x0/0x0/0x0, omap 0x70f04, meta 0x603f0fc), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,5])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:20:59.346698+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 173932544 unmapped: 92676096 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f4643800 session 0x55a1f39d28c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f4643000 session 0x55a1f4230e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f267f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:00.346820+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 1.196105003s of 10.155838013s, submitted: 288
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 177766400 unmapped: 88842240 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:01.346960+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 handle_osd_map epochs [378,378], i have 377, src has [1,378]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 377 handle_osd_map epochs [377,378], i have 378, src has [1,378]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 178053120 unmapped: 88555520 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 378 ms_handle_reset con 0x55a1f228a800 session 0x55a1f24ffdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:02.347098+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 180887552 unmapped: 85721088 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 378 heartbeat osd_stat(store_statfs(0x4c69fb000/0x0/0x4ffc00000, data 0x32f303cf/0x3314f000, compress 0x0/0x0/0x0, omap 0x715d5, meta 0x603ea2b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6900250 data_alloc: 234881024 data_used: 19309545
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:03.347310+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 378 handle_osd_map epochs [379,379], i have 378, src has [1,379]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 180912128 unmapped: 85696512 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 379 ms_handle_reset con 0x55a1f453e800 session 0x55a1f24fe700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:04.347548+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 180920320 unmapped: 85688320 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:05.348259+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 180920320 unmapped: 85688320 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 379 heartbeat osd_stat(store_statfs(0x4c69fb000/0x0/0x4ffc00000, data 0x32f31faf/0x33151000, compress 0x0/0x0/0x0, omap 0x71c69, meta 0x603e397), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 379 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f43936c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:06.348374+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 180928512 unmapped: 85680128 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 379 ms_handle_reset con 0x55a1f228a800 session 0x55a1f42216c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:07.348506+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 379 handle_osd_map epochs [380,380], i have 379, src has [1,380]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 180740096 unmapped: 85868544 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 6905025 data_alloc: 234881024 data_used: 19314238
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:08.348642+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 180748288 unmapped: 85860352 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:09.348786+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 380 ms_handle_reset con 0x55a1f4643000 session 0x55a1f422e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 380 ms_handle_reset con 0x55a1f4643800 session 0x55a1f42676c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 181526528 unmapped: 85082112 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:10.348953+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 181526528 unmapped: 85082112 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:11.349119+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 380 heartbeat osd_stat(store_statfs(0x4c5bb1000/0x0/0x4ffc00000, data 0x33d77a4a/0x33f98000, compress 0x0/0x0/0x0, omap 0x71eec, meta 0x603e114), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.576258659s of 10.924465179s, submitted: 146
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 380 ms_handle_reset con 0x55a1f4733c00 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 181534720 unmapped: 85073920 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 380 handle_osd_map epochs [381,381], i have 380, src has [1,381]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 381 ms_handle_reset con 0x55a1f228a800 session 0x55a1f19c7180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:12.349246+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 381 handle_osd_map epochs [382,382], i have 381, src has [1,382]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f45f36c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 84828160 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7053438 data_alloc: 234881024 data_used: 19314238
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:13.349417+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 181780480 unmapped: 84828160 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:14.349590+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 heartbeat osd_stat(store_statfs(0x4c5457000/0x0/0x4ffc00000, data 0x344d11e6/0x346f5000, compress 0x0/0x0/0x0, omap 0x7298d, meta 0x603d673), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 181665792 unmapped: 84942848 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:15.349752+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 186195968 unmapped: 80412672 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 heartbeat osd_stat(store_statfs(0x4c53b0000/0x0/0x4ffc00000, data 0x345781e6/0x3479c000, compress 0x0/0x0/0x0, omap 0x7298d, meta 0x603d673), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:16.349852+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 186966016 unmapped: 79642624 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:17.350089+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 79585280 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7093794 data_alloc: 234881024 data_used: 19347006
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:18.350210+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 79585280 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:19.350497+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 heartbeat osd_stat(store_statfs(0x4c4e5b000/0x0/0x4ffc00000, data 0x34acc1e6/0x34cf0000, compress 0x0/0x0/0x0, omap 0x7298d, meta 0x603d673), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 79585280 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:20.350687+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 79585280 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:21.350879+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 79585280 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:22.351020+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.169129372s of 11.099694252s, submitted: 100
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 79585280 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7093030 data_alloc: 234881024 data_used: 19355198
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:23.351221+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 187023360 unmapped: 79585280 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:24.351403+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 heartbeat osd_stat(store_statfs(0x4c4e56000/0x0/0x4ffc00000, data 0x34ad21e6/0x34cf6000, compress 0x0/0x0/0x0, omap 0x72a21, meta 0x603d5df), peers [1,2] op hist [0,0,0,0,2,11])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 ms_handle_reset con 0x55a1f4643000 session 0x55a1f3e31880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185704448 unmapped: 80904192 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f2605c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:25.351577+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185704448 unmapped: 80904192 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:26.351725+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4548400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185704448 unmapped: 80904192 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 382 handle_osd_map epochs [383,383], i have 382, src has [1,383]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 383 ms_handle_reset con 0x55a1f4548400 session 0x55a1f4013880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:27.351847+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 383 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185794560 unmapped: 80814080 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 383 heartbeat osd_stat(store_statfs(0x4c44cb000/0x0/0x4ffc00000, data 0x35458de4/0x3567f000, compress 0x0/0x0/0x0, omap 0x72c11, meta 0x603d3ef), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7159605 data_alloc: 234881024 data_used: 19355198
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:28.352558+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 383 ms_handle_reset con 0x55a1f4643000 session 0x55a1f4220a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 383 heartbeat osd_stat(store_statfs(0x4c44cd000/0x0/0x4ffc00000, data 0x35458de4/0x3567f000, compress 0x0/0x0/0x0, omap 0x72c11, meta 0x603d3ef), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185794560 unmapped: 80814080 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 383 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f3b03880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa8000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:29.352724+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185810944 unmapped: 80797696 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:30.352968+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 383 handle_osd_map epochs [383,384], i have 383, src has [1,384]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185827328 unmapped: 80781312 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 384 ms_handle_reset con 0x55a1f3fa8000 session 0x55a1f186ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 384 ms_handle_reset con 0x55a1f228a800 session 0x55a1f2605500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:31.353099+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa8000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185843712 unmapped: 80764928 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 384 handle_osd_map epochs [385,385], i have 384, src has [1,385]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 385 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f2734540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:32.353277+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 186482688 unmapped: 80125952 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.956271648s of 10.188171387s, submitted: 71
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7190412 data_alloc: 234881024 data_used: 22805070
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:33.353536+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 385 heartbeat osd_stat(store_statfs(0x4c44be000/0x0/0x4ffc00000, data 0x3546154f/0x3568c000, compress 0x0/0x0/0x0, omap 0x73ae5, meta 0x603c51b), peers [1,2] op hist [1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 385 ms_handle_reset con 0x55a1f4738000 session 0x55a1f43a6e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71688192 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:34.353659+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71688192 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:35.353817+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 194920448 unmapped: 71688192 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:36.353935+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 385 heartbeat osd_stat(store_statfs(0x4c44be000/0x0/0x4ffc00000, data 0x3546154f/0x3568c000, compress 0x0/0x0/0x0, omap 0x73c57, meta 0x603c3a9), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 194953216 unmapped: 71655424 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:37.354091+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 385 handle_osd_map epochs [386,386], i have 385, src has [1,386]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 385 handle_osd_map epochs [385,386], i have 386, src has [1,386]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195002368 unmapped: 71606272 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7229965 data_alloc: 251658240 data_used: 28958286
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:38.354223+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 386 ms_handle_reset con 0x55a1f3fe6c00 session 0x55a1f26fe540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195018752 unmapped: 71589888 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:39.354343+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195026944 unmapped: 71581696 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 386 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f24fe000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe7400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:40.354503+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 386 handle_osd_map epochs [387,387], i have 386, src has [1,387]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 387 ms_handle_reset con 0x55a1f3fe7400 session 0x55a1f43a68c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195043328 unmapped: 71565312 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 387 heartbeat osd_stat(store_statfs(0x4c44bd000/0x0/0x4ffc00000, data 0x3546313f/0x3568f000, compress 0x0/0x0/0x0, omap 0x73fbd, meta 0x603c043), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:41.354651+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 387 heartbeat osd_stat(store_statfs(0x4c44b8000/0x0/0x4ffc00000, data 0x35464d3f/0x35692000, compress 0x0/0x0/0x0, omap 0x745c5, meta 0x603ba3b), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 387 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f3b02000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195084288 unmapped: 71524352 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:42.354775+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195084288 unmapped: 71524352 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:43.354912+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7232183 data_alloc: 251658240 data_used: 28963011
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 387 handle_osd_map epochs [388,388], i have 387, src has [1,388]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.200649261s of 10.459271431s, submitted: 55
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195141632 unmapped: 71467008 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:44.355110+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 388 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f4221dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 196001792 unmapped: 70606848 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 388 heartbeat osd_stat(store_statfs(0x4c44b5000/0x0/0x4ffc00000, data 0x35466967/0x35695000, compress 0x0/0x0/0x0, omap 0x74ad7, meta 0x603b529), peers [1,2] op hist [0,0,0,0,0,0,0,2])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:45.355257+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 198123520 unmapped: 68485120 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:46.355405+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 202481664 unmapped: 64126976 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:47.355559+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 202571776 unmapped: 64036864 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:48.355681+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7360980 data_alloc: 251658240 data_used: 29032131
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 388 ms_handle_reset con 0x55a1f4643800 session 0x55a1f271fdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 388 ms_handle_reset con 0x55a1f3fe6c00 session 0x55a1f20ada40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 388 handle_osd_map epochs [389,389], i have 388, src has [1,389]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 388 heartbeat osd_stat(store_statfs(0x4c28eb000/0x0/0x4ffc00000, data 0x363b9967/0x360c0000, compress 0x0/0x0/0x0, omap 0x755a9, meta 0x71daa57), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 200630272 unmapped: 65978368 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:49.355798+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 ms_handle_reset con 0x55a1f228a400 session 0x55a1f4309180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4309dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 201113600 unmapped: 65495040 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f3bc9500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 ms_handle_reset con 0x55a1f4218800 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:50.355915+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f45f36c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 ms_handle_reset con 0x55a1f0f7b400 session 0x55a1f4267340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192913408 unmapped: 73695232 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:51.356199+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 72564736 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:52.356387+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 194043904 unmapped: 72564736 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:53.356570+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7126395 data_alloc: 234881024 data_used: 15783107
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 heartbeat osd_stat(store_statfs(0x4c30b6000/0x0/0x4ffc00000, data 0x3480936d/0x343fe000, compress 0x0/0x0/0x0, omap 0x75e3d, meta 0x71da1c3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 handle_osd_map epochs [390,390], i have 389, src has [1,390]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 389 handle_osd_map epochs [390,390], i have 390, src has [1,390]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.335848808s of 10.419628143s, submitted: 247
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 390 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4012fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 193814528 unmapped: 72794112 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:54.356725+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 193822720 unmapped: 72785920 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:55.356912+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 390 heartbeat osd_stat(store_statfs(0x4c45aa000/0x0/0x4ffc00000, data 0x33efaddc/0x33af0000, compress 0x0/0x0/0x0, omap 0x760c4, meta 0x71d9f3c), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,2])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 193822720 unmapped: 72785920 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:56.357026+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 390 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f2605c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192790528 unmapped: 73818112 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:57.357177+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 390 heartbeat osd_stat(store_statfs(0x4c4e99000/0x0/0x4ffc00000, data 0x33f1cddc/0x33b12000, compress 0x0/0x0/0x0, omap 0x763a8, meta 0x71d9c58), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192790528 unmapped: 73818112 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:58.357294+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7074849 data_alloc: 234881024 data_used: 15791150
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 390 ms_handle_reset con 0x55a1f4218800 session 0x55a1f41ccc40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 390 handle_osd_map epochs [391,391], i have 390, src has [1,391]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192790528 unmapped: 73818112 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:21:59.357436+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 391 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f196da40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 391 ms_handle_reset con 0x55a1f3fe6c00 session 0x55a1f41cc700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192806912 unmapped: 73801728 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 391 heartbeat osd_stat(store_statfs(0x4c4e96000/0x0/0x4ffc00000, data 0x33f1e968/0x33b14000, compress 0x0/0x0/0x0, omap 0x769bd, meta 0x71d9643), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:00.357586+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 391 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f45f21c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 391 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4231c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 391 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f24fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 391 handle_osd_map epochs [392,392], i have 391, src has [1,392]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 392 ms_handle_reset con 0x55a1f4218800 session 0x55a1f3e30c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192839680 unmapped: 73768960 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:01.357734+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4643800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 392 heartbeat osd_stat(store_statfs(0x4c4e96000/0x0/0x4ffc00000, data 0x33f204f5/0x33b16000, compress 0x0/0x0/0x0, omap 0x76e7c, meta 0x71d9184), peers [1,2] op hist [1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 392 handle_osd_map epochs [393,393], i have 392, src has [1,393]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 393 ms_handle_reset con 0x55a1f4643800 session 0x55a1f18aee00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 74022912 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:02.357916+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 74022912 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:03.358120+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7081699 data_alloc: 234881024 data_used: 15791846
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 74022912 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:04.358247+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 74022912 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:05.358435+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 393 heartbeat osd_stat(store_statfs(0x4c4e91000/0x0/0x4ffc00000, data 0x33f220ad/0x33b19000, compress 0x0/0x0/0x0, omap 0x773b9, meta 0x71d8c47), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 393 heartbeat osd_stat(store_statfs(0x4c4e91000/0x0/0x4ffc00000, data 0x33f220ad/0x33b19000, compress 0x0/0x0/0x0, omap 0x773b9, meta 0x71d8c47), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 74022912 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:06.358608+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 393 heartbeat osd_stat(store_statfs(0x4c4e91000/0x0/0x4ffc00000, data 0x33f220ad/0x33b19000, compress 0x0/0x0/0x0, omap 0x773b9, meta 0x71d8c47), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192585728 unmapped: 74022912 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:07.358771+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 393 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f196d6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192593920 unmapped: 74014720 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.748801231s of 14.268507004s, submitted: 106
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:08.358866+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7081699 data_alloc: 234881024 data_used: 15791846
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 393 handle_osd_map epochs [394,394], i have 393, src has [1,394]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192643072 unmapped: 73965568 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f43a76c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:09.359194+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192643072 unmapped: 73965568 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:10.359377+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 heartbeat osd_stat(store_statfs(0x4c46f1000/0x0/0x4ffc00000, data 0x346c0b2c/0x342b9000, compress 0x0/0x0/0x0, omap 0x775fb, meta 0x71d8a05), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 192643072 unmapped: 73965568 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f3e30540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f3fa8000 session 0x55a1f43081c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f4643000 session 0x55a1f26fefc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:11.359549+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f45f3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 81412096 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:12.359679+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 heartbeat osd_stat(store_statfs(0x4c5079000/0x0/0x4ffc00000, data 0x33d3ba97/0x33931000, compress 0x0/0x0/0x0, omap 0x7772b, meta 0x71d88d5), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 81412096 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:13.359908+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7018310 data_alloc: 218103808 data_used: 6185174
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f3b03880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 81412096 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:14.360091+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 heartbeat osd_stat(store_statfs(0x4c5079000/0x0/0x4ffc00000, data 0x33d3ba97/0x33931000, compress 0x0/0x0/0x0, omap 0x7772b, meta 0x71d88d5), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa8000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 185196544 unmapped: 81412096 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f45f36c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:15.360231+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f4012fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4218800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f4218800 session 0x55a1f41ccc40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 81903616 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:16.360355+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 184705024 unmapped: 81903616 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:17.360475+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 184049664 unmapped: 82558976 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:18.360646+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.830248833s of 10.250608444s, submitted: 62
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7071223 data_alloc: 234881024 data_used: 13961809
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 184049664 unmapped: 82558976 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f3fa8000 session 0x55a1f186ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:19.360830+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 heartbeat osd_stat(store_statfs(0x4c503a000/0x0/0x4ffc00000, data 0x33d7baa7/0x33972000, compress 0x0/0x0/0x0, omap 0x77a6f, meta 0x71d8591), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f2604380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 184049664 unmapped: 82558976 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:20.360956+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 184049664 unmapped: 82558976 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 394 handle_osd_map epochs [395,395], i have 394, src has [1,395]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:21.361104+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3faa400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 395 ms_handle_reset con 0x55a1f3faa400 session 0x55a1f24ff180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189276160 unmapped: 77332480 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:22.361257+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189276160 unmapped: 77332480 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:23.361446+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7182186 data_alloc: 234881024 data_used: 13961825
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 395 handle_osd_map epochs [396,396], i have 395, src has [1,396]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189161472 unmapped: 77447168 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:24.361564+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 heartbeat osd_stat(store_statfs(0x4c459a000/0x0/0x4ffc00000, data 0x34b472a3/0x34410000, compress 0x0/0x0/0x0, omap 0x785cc, meta 0x71d7a34), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189161472 unmapped: 77447168 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:25.361721+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189161472 unmapped: 77447168 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:26.361936+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189161472 unmapped: 77447168 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:27.362153+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189161472 unmapped: 77447168 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:28.362323+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7186737 data_alloc: 234881024 data_used: 13961825
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 heartbeat osd_stat(store_statfs(0x4c459a000/0x0/0x4ffc00000, data 0x34b472a3/0x34410000, compress 0x0/0x0/0x0, omap 0x785cc, meta 0x71d7a34), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,0,0,1,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189169664 unmapped: 77438976 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:29.362472+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency_fn slow operation observed for _txc_committed_kv, latency = 5.175546169s, txc = 0x55a1f8d8a300, txc bytes = 1334, txc ios = 1, txc cost = 671334, txc onodes = 1, DB updates = 3, DB bytes = 1093, cost max = 113664540 on 2025-12-13T04:19:14.468832+0000, txc max = 100 on 2025-12-13T03:44:45.082459+0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_commit, latency = 5.175583839s
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) log_latency slow operation observed for kv_sync, latency = 5.175583839s
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.482435226s of 11.570550919s, submitted: 84
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189169664 unmapped: 77438976 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 heartbeat osd_stat(store_statfs(0x4c459a000/0x0/0x4ffc00000, data 0x34b472a3/0x34410000, compress 0x0/0x0/0x0, omap 0x785cc, meta 0x71d7a34), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,2])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:30.362678+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 heartbeat osd_stat(store_statfs(0x4c459a000/0x0/0x4ffc00000, data 0x34b472a3/0x34410000, compress 0x0/0x0/0x0, omap 0x785cc, meta 0x71d7a34), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,2])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 ms_handle_reset con 0x55a1f4738000 session 0x55a1f45f2000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 188801024 unmapped: 77807616 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 heartbeat osd_stat(store_statfs(0x4c459a000/0x0/0x4ffc00000, data 0x34b472a3/0x34410000, compress 0x0/0x0/0x0, omap 0x785cc, meta 0x71d7a34), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,9])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:31.362842+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f4013180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f43a7dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189767680 unmapped: 76840960 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:32.362968+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 396 handle_osd_map epochs [397,397], i have 396, src has [1,397]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f24ff880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f43096c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 189833216 unmapped: 76775424 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:33.363158+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7238231 data_alloc: 234881024 data_used: 14174817
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 heartbeat osd_stat(store_statfs(0x4c3da7000/0x0/0x4ffc00000, data 0x35335e31/0x34bff000, compress 0x0/0x0/0x0, omap 0x78d66, meta 0x71d729a), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 190660608 unmapped: 75948032 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:34.363277+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa8000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 ms_handle_reset con 0x55a1f3fa8000 session 0x55a1f45f2e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3faa400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 190660608 unmapped: 75948032 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:35.363417+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 ms_handle_reset con 0x55a1f3faa400 session 0x55a1f4231180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa8000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 199221248 unmapped: 67387392 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:36.363538+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 195108864 unmapped: 71499776 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:37.363692+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 190914560 unmapped: 75694080 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:38.363839+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7485838 data_alloc: 234881024 data_used: 14183009
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 heartbeat osd_stat(store_statfs(0x4c0fa5000/0x0/0x4ffc00000, data 0x3813ce93/0x37a07000, compress 0x0/0x0/0x0, omap 0x788b9, meta 0x71d7747), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f2735880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 190922752 unmapped: 75685888 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:39.363997+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 397 handle_osd_map epochs [398,398], i have 397, src has [1,398]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 4.453665257s of 10.006062508s, submitted: 221
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 398 handle_osd_map epochs [399,399], i have 398, src has [1,399]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 190955520 unmapped: 75653120 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:40.382882+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 399 ms_handle_reset con 0x55a1f3fa8000 session 0x55a1f3e31dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191004672 unmapped: 75603968 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:41.383379+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191004672 unmapped: 75603968 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:42.383526+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 399 heartbeat osd_stat(store_statfs(0x4c2161000/0x0/0x4ffc00000, data 0x36c4d50e/0x3684b000, compress 0x0/0x0/0x0, omap 0x79719, meta 0x71d68e7), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 399 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f42676c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191004672 unmapped: 75603968 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:43.383721+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7356917 data_alloc: 234881024 data_used: 14179169
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191004672 unmapped: 75603968 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:44.383872+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 399 heartbeat osd_stat(store_statfs(0x4c2160000/0x0/0x4ffc00000, data 0x36c4d570/0x3684c000, compress 0x0/0x0/0x0, omap 0x79719, meta 0x71d68e7), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 399 handle_osd_map epochs [399,400], i have 399, src has [1,400]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 400 ms_handle_reset con 0x55a1f4738000 session 0x55a1f42208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191012864 unmapped: 75595776 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:45.384009+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191012864 unmapped: 75595776 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:46.384166+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191012864 unmapped: 75595776 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:47.384309+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 400 heartbeat osd_stat(store_statfs(0x4c2151000/0x0/0x4ffc00000, data 0x36c59128/0x36859000, compress 0x0/0x0/0x0, omap 0x79d87, meta 0x71d6279), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191029248 unmapped: 75579392 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:48.384481+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 400 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f43936c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7361336 data_alloc: 234881024 data_used: 14179267
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 191037440 unmapped: 75571200 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:49.384657+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa8000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.143125534s of 10.008992195s, submitted: 67
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 400 handle_osd_map epochs [401,401], i have 400, src has [1,401]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 400 handle_osd_map epochs [400,401], i have 401, src has [1,401]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241467392 unmapped: 25141248 heap: 266608640 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:50.384813+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249921536 unmapped: 20889600 heap: 270811136 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:51.384972+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 401 heartbeat osd_stat(store_statfs(0x4bf54f000/0x0/0x4ffc00000, data 0x3985ac09/0x3945d000, compress 0x0/0x0/0x0, omap 0x79e0d, meta 0x71d61f3), peers [1,2] op hist [0,0,0,0,0,0,0,0,1,1,2,3,12])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 258441216 unmapped: 20774912 heap: 279216128 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:52.385150+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 401 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f3bc8700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237543424 unmapped: 45875200 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:53.385337+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7799607 data_alloc: 234881024 data_used: 14179267
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:54.385524+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 205135872 unmapped: 78282752 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 401 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f196c8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:55.385702+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 205209600 unmapped: 78209024 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 401 ms_handle_reset con 0x55a1f3fdb000 session 0x55a1f41e0700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:56.385929+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 197025792 unmapped: 86392832 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 401 handle_osd_map epochs [402,402], i have 401, src has [1,402]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:57.386075+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 197173248 unmapped: 86245376 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 402 heartbeat osd_stat(store_statfs(0x4ba945000/0x0/0x4ffc00000, data 0x3e461817/0x3e067000, compress 0x0/0x0/0x0, omap 0x7a4f3, meta 0x71d5b0d), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,2,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:58.386182+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 197271552 unmapped: 86147072 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 402 ms_handle_reset con 0x55a1f3fe5c00 session 0x55a1f4220a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 402 ms_handle_reset con 0x55a1f473c800 session 0x55a1f43921c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8021180 data_alloc: 234881024 data_used: 14179852
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:22:59.386326+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 193642496 unmapped: 89776128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 402 handle_osd_map epochs [403,403], i have 402, src has [1,403]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 0.302483022s of 10.213685989s, submitted: 182
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 403 ms_handle_reset con 0x55a1f3fe5c00 session 0x55a1f4309a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:00.386450+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 199294976 unmapped: 84123648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:01.386718+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 197558272 unmapped: 85860352 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 403 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4220fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:02.386882+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 201826304 unmapped: 81592320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 403 handle_osd_map epochs [404,404], i have 403, src has [1,404]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:03.387074+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 403 heartbeat osd_stat(store_statfs(0x4b0f7e000/0x0/0x4ffc00000, data 0x46c873b3/0x4688e000, compress 0x0/0x0/0x0, omap 0x7a603, meta 0x83759fd), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,0,4])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 8854352 data_alloc: 251658240 data_used: 27667027
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:04.387177+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 403 handle_osd_map epochs [404,404], i have 404, src has [1,404]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 403 handle_osd_map epochs [404,404], i have 404, src has [1,404]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215900160 unmapped: 67518464 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 404 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f41e0fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4731400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:05.387381+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212049920 unmapped: 71368704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 404 ms_handle_reset con 0x55a1f4731400 session 0x55a1f41cc700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 404 handle_osd_map epochs [405,405], i have 404, src has [1,405]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:06.387525+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217792512 unmapped: 65626112 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 405 ms_handle_reset con 0x55a1f3fe5c00 session 0x55a1f41e08c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 405 heartbeat osd_stat(store_statfs(0x4acf72000/0x0/0x4ffc00000, data 0x4ac8bba3/0x4a895000, compress 0x0/0x0/0x0, omap 0x7ac84, meta 0x837537c), peers [1,2] op hist [0,0,0,0,0,0,2])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:07.387646+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218218496 unmapped: 65200128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 405 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f186f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4731400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:08.387818+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 405 handle_osd_map epochs [406,406], i have 405, src has [1,406]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 219570176 unmapped: 63848448 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f4731400 session 0x55a1f45f3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 9824546 data_alloc: 251658240 data_used: 27667027
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:09.469682+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f40136c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212443136 unmapped: 70975488 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f3fa8000 session 0x55a1f45f2380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 heartbeat osd_stat(store_statfs(0x4a5773000/0x0/0x4ffc00000, data 0x5248d741/0x52097000, compress 0x0/0x0/0x0, omap 0x7ad95, meta 0x837526b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f3e31500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f3fe5c00 session 0x55a1f3b03880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:10.469848+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 2.350841284s of 10.310097694s, submitted: 309
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212508672 unmapped: 70909952 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 heartbeat osd_stat(store_statfs(0x4a4f75000/0x0/0x4ffc00000, data 0x52c8d741/0x52897000, compress 0x0/0x0/0x0, omap 0x7b19a, meta 0x8374e66), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:11.470261+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f4738000 session 0x55a1f4221a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212615168 unmapped: 70803456 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f2735180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f196d6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f3e30540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:12.470395+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 207437824 unmapped: 75980800 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4221500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f186f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:13.470535+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 207437824 unmapped: 75980800 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 406 handle_osd_map epochs [407,407], i have 406, src has [1,407]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7463106 data_alloc: 234881024 data_used: 19685459
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 407 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f4309a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:14.470671+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 219799552 unmapped: 63619072 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 407 handle_osd_map epochs [407,408], i have 407, src has [1,408]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fe5c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 408 ms_handle_reset con 0x55a1f3fe5c00 session 0x55a1f196ddc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:15.470801+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 224706560 unmapped: 58712064 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 408 heartbeat osd_stat(store_statfs(0x4c1bb6000/0x0/0x4ffc00000, data 0x35cd9d28/0x358e3000, compress 0x0/0x0/0x0, omap 0x7c333, meta 0x8373ccd), peers [1,2] op hist [0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 408 heartbeat osd_stat(store_statfs(0x4c1bb6000/0x0/0x4ffc00000, data 0x35cd9d28/0x358e3000, compress 0x0/0x0/0x0, omap 0x7c333, meta 0x8373ccd), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:16.470913+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 220151808 unmapped: 63266816 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:17.471123+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 408 handle_osd_map epochs [409,409], i have 408, src has [1,409]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 408 handle_osd_map epochs [408,409], i have 409, src has [1,409]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 220176384 unmapped: 63242240 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 409 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f196c1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:18.471257+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 409 handle_osd_map epochs [409,410], i have 409, src has [1,410]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 410 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f422ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 220192768 unmapped: 63225856 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 7391728 data_alloc: 234881024 data_used: 20215281
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 410 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f4266a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:19.471381+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 410 heartbeat osd_stat(store_statfs(0x4c16f5000/0x0/0x4ffc00000, data 0x34fba4c6/0x351fd000, compress 0x0/0x0/0x0, omap 0x7cf09, meta 0x83730f7), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 410 handle_osd_map epochs [411,411], i have 410, src has [1,411]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 410 handle_osd_map epochs [411,411], i have 411, src has [1,411]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 220233728 unmapped: 63184896 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 411 heartbeat osd_stat(store_statfs(0x4c16f5000/0x0/0x4ffc00000, data 0x34fba4c6/0x351fd000, compress 0x0/0x0/0x0, omap 0x7cf09, meta 0x83730f7), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 411 handle_osd_map epochs [412,412], i have 411, src has [1,412]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 411 handle_osd_map epochs [412,412], i have 412, src has [1,412]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:20.471509+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.899913311s of 10.044794083s, submitted: 468
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 220119040 unmapped: 63299584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 412 handle_osd_map epochs [412,413], i have 412, src has [1,413]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f20addc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:21.471649+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 ms_handle_reset con 0x55a1f4738000 session 0x55a1f4320380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214990848 unmapped: 68427776 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f2735880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4013500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:22.471750+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 heartbeat osd_stat(store_statfs(0x4d8604000/0x0/0x4ffc00000, data 0x1cbbf7c0/0x1ce03000, compress 0x0/0x0/0x0, omap 0x7d947, meta 0x83726b9), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214990848 unmapped: 68427776 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4012fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f2735a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 413 handle_osd_map epochs [413,414], i have 413, src has [1,414]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 414 ms_handle_reset con 0x55a1f3fdb000 session 0x55a1f41cc540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 414 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f24fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:23.471898+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 414 ms_handle_reset con 0x55a1f3fdb000 session 0x55a1f186f500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 414 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f45f36c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213508096 unmapped: 69910528 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3505991 data_alloc: 234881024 data_used: 20135880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:24.472030+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 414 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f39d2fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213524480 unmapped: 69894144 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 414 handle_osd_map epochs [414,415], i have 414, src has [1,415]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 415 heartbeat osd_stat(store_statfs(0x4ee629000/0x0/0x4ffc00000, data 0x539d38e/0x55e0000, compress 0x0/0x0/0x0, omap 0x7e137, meta 0x8371ec9), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:25.472191+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213532672 unmapped: 69885952 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3bce000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 415 ms_handle_reset con 0x55a1f3bce000 session 0x55a1f41e0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:26.472335+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213532672 unmapped: 69885952 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 415 handle_osd_map epochs [416,416], i have 415, src has [1,416]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 415 handle_osd_map epochs [415,416], i have 416, src has [1,416]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:27.472470+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 416 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f24ffdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213549056 unmapped: 69869568 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 416 heartbeat osd_stat(store_statfs(0x4f2222000/0x0/0x4ffc00000, data 0x53a0b25/0x55e8000, compress 0x0/0x0/0x0, omap 0x7efc9, meta 0x8371037), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 416 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f2735a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 416 handle_osd_map epochs [416,417], i have 416, src has [1,417]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:28.472603+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 417 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f41ccc40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213565440 unmapped: 69853184 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521144 data_alloc: 234881024 data_used: 20139286
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:29.472768+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213565440 unmapped: 69853184 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdb000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 417 handle_osd_map epochs [418,418], i have 417, src has [1,418]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 ms_handle_reset con 0x55a1f3fdb000 session 0x55a1f43ba380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:30.472972+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213581824 unmapped: 69836800 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.983529091s of 10.549967766s, submitted: 368
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f19c7500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:31.473146+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f18ae000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213581824 unmapped: 69836800 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f186f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4220a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:32.473258+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213622784 unmapped: 69795840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:33.473418+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4738000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 ms_handle_reset con 0x55a1f4738000 session 0x55a1f2604a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213360640 unmapped: 70057984 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3531640 data_alloc: 234881024 data_used: 20140191
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 418 handle_osd_map epochs [419,419], i have 418, src has [1,419]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f2734fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 heartbeat osd_stat(store_statfs(0x4f221b000/0x0/0x4ffc00000, data 0x53a426e/0x55f1000, compress 0x0/0x0/0x0, omap 0x7fd75, meta 0x837028b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f39d3880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:34.473597+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213385216 unmapped: 70033408 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f45f3c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:35.480770+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4731400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213409792 unmapped: 70008832 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 heartbeat osd_stat(store_statfs(0x4f2218000/0x0/0x4ffc00000, data 0x53a5e5e/0x55f4000, compress 0x0/0x0/0x0, omap 0x8035d, meta 0x836fca3), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 ms_handle_reset con 0x55a1f4731400 session 0x55a1f26056c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:36.480988+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 handle_osd_map epochs [420,420], i have 419, src has [1,420]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 419 handle_osd_map epochs [419,420], i have 420, src has [1,420]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4308700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f24ff180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 69976064 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:37.481197+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213442560 unmapped: 69976064 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:38.481362+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213254144 unmapped: 70164480 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3536890 data_alloc: 234881024 data_used: 20140918
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f3bc96c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f19c6e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f3e31880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4231180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:39.481547+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f271ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214204416 unmapped: 69214208 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:40.481782+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4231c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4266c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214204416 unmapped: 69214208 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f43a6380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f473ac00 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.146381378s of 10.397129059s, submitted: 162
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:41.481900+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4320fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 heartbeat osd_stat(store_statfs(0x4f2218000/0x0/0x4ffc00000, data 0x53a798d/0x55f4000, compress 0x0/0x0/0x0, omap 0x81855, meta 0x836e7ab), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214228992 unmapped: 69189632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:42.482087+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214228992 unmapped: 69189632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:43.483328+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214228992 unmapped: 69189632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3535753 data_alloc: 234881024 data_used: 20140788
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:44.484023+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214228992 unmapped: 69189632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 heartbeat osd_stat(store_statfs(0x4f2218000/0x0/0x4ffc00000, data 0x53a798d/0x55f4000, compress 0x0/0x0/0x0, omap 0x8189f, meta 0x836e761), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 420 handle_osd_map epochs [421,421], i have 420, src has [1,421]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:45.484190+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212844544 unmapped: 70574080 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:46.484639+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212852736 unmapped: 70565888 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 421 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4320380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:47.484769+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212860928 unmapped: 70557696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f2212000/0x0/0x4ffc00000, data 0x53a946e/0x55f8000, compress 0x0/0x0/0x0, omap 0x819f2, meta 0x836e60e), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:48.484952+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212860928 unmapped: 70557696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3540889 data_alloc: 234881024 data_used: 20188916
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:49.485132+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212860928 unmapped: 70557696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:50.485271+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212860928 unmapped: 70557696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 421 heartbeat osd_stat(store_statfs(0x4f2212000/0x0/0x4ffc00000, data 0x53a946e/0x55f8000, compress 0x0/0x0/0x0, omap 0x819f2, meta 0x836e60e), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:51.485479+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212860928 unmapped: 70557696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4731400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.526815414s of 10.612287521s, submitted: 22
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 421 ms_handle_reset con 0x55a1f4731400 session 0x55a1f2735180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 421 handle_osd_map epochs [422,422], i have 421, src has [1,422]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:52.485763+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3f95800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 422 ms_handle_reset con 0x55a1f3f95800 session 0x55a1f3b03880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212885504 unmapped: 70533120 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 422 handle_osd_map epochs [423,423], i have 422, src has [1,423]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 423 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f45f2540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:53.486012+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 423 ms_handle_reset con 0x55a1f473c800 session 0x55a1f4012fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212893696 unmapped: 70524928 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554806 data_alloc: 234881024 data_used: 20189014
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 423 handle_osd_map epochs [423,424], i have 423, src has [1,424]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 424 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f39d21c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:54.486107+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 424 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41e0700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3f95800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212893696 unmapped: 70524928 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 424 handle_osd_map epochs [424,425], i have 424, src has [1,425]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:55.486419+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 425 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f3e31dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212910080 unmapped: 70508544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:56.486551+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 425 handle_osd_map epochs [426,426], i have 425, src has [1,426]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f42668c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 ms_handle_reset con 0x55a1f3f95800 session 0x55a1f2604540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213049344 unmapped: 70369280 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 heartbeat osd_stat(store_statfs(0x4f19ff000/0x0/0x4ffc00000, data 0x5bb1fb0/0x5e0b000, compress 0x0/0x0/0x0, omap 0x82ba1, meta 0x836d45f), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f3e31500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:57.486803+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f45f2380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f45f2e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 ms_handle_reset con 0x55a1f473c800 session 0x55a1f4013500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f18afdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213360640 unmapped: 70057984 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:58.486986+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 426 handle_osd_map epochs [426,427], i have 426, src has [1,427]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 427 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4231180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213590016 unmapped: 69828608 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3633455 data_alloc: 234881024 data_used: 22556502
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3f95800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 427 ms_handle_reset con 0x55a1f3f95800 session 0x55a1f24ff880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 427 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f4361c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:23:59.487123+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213606400 unmapped: 69812224 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 427 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f4220a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:00.487276+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 427 handle_osd_map epochs [428,428], i have 427, src has [1,428]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3f95800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 428 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f4230380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 428 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41d0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213721088 unmapped: 69697536 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 428 heartbeat osd_stat(store_statfs(0x4f19fd000/0x0/0x4ffc00000, data 0x5bb3c02/0x5e0f000, compress 0x0/0x0/0x0, omap 0x83b8f, meta 0x836c471), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:01.487476+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 428 handle_osd_map epochs [428,429], i have 428, src has [1,429]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 429 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f43096c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 429 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4392fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 429 ms_handle_reset con 0x55a1f3f95800 session 0x55a1f45f2540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213737472 unmapped: 69681152 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.477621078s of 10.009551048s, submitted: 141
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:02.487702+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213786624 unmapped: 69632000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:03.487938+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 429 handle_osd_map epochs [430,430], i have 429, src has [1,430]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 430 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f41e08c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 430 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f3bc9dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 430 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f43a6c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 69574656 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3644261 data_alloc: 234881024 data_used: 22557115
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 430 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4230540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:04.488164+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 430 ms_handle_reset con 0x55a1f473c800 session 0x55a1f4309500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213843968 unmapped: 69574656 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 430 handle_osd_map epochs [430,431], i have 430, src has [1,431]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:05.488366+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 431 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4221dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 431 heartbeat osd_stat(store_statfs(0x4f19ef000/0x0/0x4ffc00000, data 0x5bbadbf/0x5e1b000, compress 0x0/0x0/0x0, omap 0x84fce, meta 0x836b032), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212205568 unmapped: 71213056 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:06.488507+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 431 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f4012700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 431 handle_osd_map epochs [432,432], i have 431, src has [1,432]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 431 handle_osd_map epochs [431,432], i have 432, src has [1,432]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4731400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 432 ms_handle_reset con 0x55a1f473c800 session 0x55a1f41cd180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 432 ms_handle_reset con 0x55a1f4731400 session 0x55a1f18af880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 432 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f4231500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 212230144 unmapped: 71188480 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 432 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41cd500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 432 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f4308fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:07.488693+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 432 handle_osd_map epochs [433,433], i have 432, src has [1,433]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f3bc8c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f42f6c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f2604c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 ms_handle_reset con 0x55a1f42f6c00 session 0x55a1f186f340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4267880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213327872 unmapped: 70090752 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 2400.1 total, 600.0 interval
                                           Cumulative writes: 28K writes, 110K keys, 28K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.03 MB/s
                                           Cumulative WAL: 28K writes, 10K syncs, 2.69 writes per sync, written: 0.08 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 44.92 MB, 0.07 MB/s
                                           Interval WAL: 16K writes, 6708 syncs, 2.40 writes per sync, written: 0.04 GB, 0.07 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 heartbeat osd_stat(store_statfs(0x4f19e3000/0x0/0x4ffc00000, data 0x5bbcb67/0x5e25000, compress 0x0/0x0/0x0, omap 0x858df, meta 0x836a721), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:08.488938+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213327872 unmapped: 70090752 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668127 data_alloc: 234881024 data_used: 22562291
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f24fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f41e16c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:09.489158+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4231500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213393408 unmapped: 70025216 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4731400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 433 handle_osd_map epochs [434,434], i have 433, src has [1,434]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 434 ms_handle_reset con 0x55a1f4731400 session 0x55a1f41d0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:10.489310+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 434 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f45f2380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 434 heartbeat osd_stat(store_statfs(0x4f19e2000/0x0/0x4ffc00000, data 0x5bc018c/0x5e28000, compress 0x0/0x0/0x0, omap 0x86b1e, meta 0x83694e2), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213131264 unmapped: 70287360 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:11.489502+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 434 handle_osd_map epochs [435,435], i have 434, src has [1,435]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f4308540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f26ffa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f41ccfc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213155840 unmapped: 70262784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:12.489592+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.352267265s of 10.874969482s, submitted: 222
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 ms_handle_reset con 0x55a1f473c800 session 0x55a1f26fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213172224 unmapped: 70246400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f43a6a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f19e2000/0x0/0x4ffc00000, data 0x5bc1c72/0x5e28000, compress 0x0/0x0/0x0, omap 0x86e6c, meta 0x8369194), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:13.489752+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213188608 unmapped: 70230016 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3668131 data_alloc: 234881024 data_used: 22562465
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f41cc1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:14.489863+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 heartbeat osd_stat(store_statfs(0x4f19e3000/0x0/0x4ffc00000, data 0x5bc1c62/0x5e27000, compress 0x0/0x0/0x0, omap 0x86e22, meta 0x83691de), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213188608 unmapped: 70230016 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:15.490004+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 435 handle_osd_map epochs [436,436], i have 435, src has [1,436]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213196800 unmapped: 70221824 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:16.490141+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213196800 unmapped: 70221824 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 436 heartbeat osd_stat(store_statfs(0x4f19e0000/0x0/0x4ffc00000, data 0x5bc3735/0x5e2a000, compress 0x0/0x0/0x0, omap 0x8752d, meta 0x8368ad3), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:17.490285+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 436 handle_osd_map epochs [437,437], i have 436, src has [1,437]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213196800 unmapped: 70221824 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 437 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4361500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:18.490423+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213204992 unmapped: 70213632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675328 data_alloc: 234881024 data_used: 22562465
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 437 handle_osd_map epochs [437,438], i have 437, src has [1,438]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:19.490656+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 438 ms_handle_reset con 0x55a1f473c800 session 0x55a1f3b03180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 438 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f42676c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 438 ms_handle_reset con 0x55a1f304f800 session 0x55a1f41e0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213229568 unmapped: 70189056 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:20.491025+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 438 heartbeat osd_stat(store_statfs(0x4f19d6000/0x0/0x4ffc00000, data 0x5bc6f17/0x5e32000, compress 0x0/0x0/0x0, omap 0x87ad0, meta 0x8368530), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 213229568 unmapped: 70189056 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 438 handle_osd_map epochs [439,439], i have 438, src has [1,439]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:21.491174+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 439 handle_osd_map epochs [440,440], i have 439, src has [1,440]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f24fe000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f3e30c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214294528 unmapped: 69124096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f4309a40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:22.491315+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453ec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f453ec00 session 0x55a1f43616c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f473c800 session 0x55a1f186fdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214327296 unmapped: 69091328 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:23.491502+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.451820374s of 10.798442841s, submitted: 120
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f3e31dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214441984 unmapped: 68976640 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3702485 data_alloc: 234881024 data_used: 24385786
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f45f3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1efddf180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:24.491648+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f43a6fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 440 handle_osd_map epochs [441,441], i have 440, src has [1,441]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 441 ms_handle_reset con 0x55a1f304f800 session 0x55a1f4309c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214441984 unmapped: 68976640 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:25.491757+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 441 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f3e301c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 441 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f39d2e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 441 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f4308fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 441 handle_osd_map epochs [442,442], i have 441, src has [1,442]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 441 handle_osd_map epochs [441,442], i have 442, src has [1,442]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 442 ms_handle_reset con 0x55a1f473c800 session 0x55a1f267f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 442 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4221500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214466560 unmapped: 68952064 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 442 heartbeat osd_stat(store_statfs(0x4f19cf000/0x0/0x4ffc00000, data 0x5bcc11b/0x5e3a000, compress 0x0/0x0/0x0, omap 0x88af0, meta 0x8367510), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 442 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f33befc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:26.491886+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 442 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f45f2000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 442 ms_handle_reset con 0x55a1f304f800 session 0x55a1f43616c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214130688 unmapped: 69287936 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:27.491964+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 442 handle_osd_map epochs [442,443], i have 442, src has [1,443]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214261760 unmapped: 69156864 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:28.492108+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 443 handle_osd_map epochs [444,444], i have 443, src has [1,444]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214401024 unmapped: 69017600 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717324 data_alloc: 234881024 data_used: 24386564
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f26fee00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f473c800 session 0x55a1f186ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4308fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f3b02540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:29.492229+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f304f800 session 0x55a1f4267dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214859776 unmapped: 68558848 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd7800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f3fd7800 session 0x55a1f271f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41e1880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f228ac00 session 0x55a1efdde380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:30.492397+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 ms_handle_reset con 0x55a1f304f800 session 0x55a1f41cd180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 444 handle_osd_map epochs [445,445], i have 444, src has [1,445]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215113728 unmapped: 68304896 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 445 ms_handle_reset con 0x55a1f473c800 session 0x55a1f41e0700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd9000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 445 heartbeat osd_stat(store_statfs(0x4f1dc8000/0x0/0x4ffc00000, data 0x53d149b/0x5643000, compress 0x0/0x0/0x0, omap 0x8af68, meta 0x8365098), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:31.492523+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 445 ms_handle_reset con 0x55a1f3fd9000 session 0x55a1f42201c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 445 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 445 handle_osd_map epochs [446,446], i have 445, src has [1,446]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215203840 unmapped: 68214784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 446 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4220a80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:32.492682+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 446 ms_handle_reset con 0x55a1f304f800 session 0x55a1f614e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4545c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 446 ms_handle_reset con 0x55a1f473c800 session 0x55a1f20ad180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 446 handle_osd_map epochs [447,447], i have 446, src has [1,447]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 214892544 unmapped: 68526080 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 447 ms_handle_reset con 0x55a1f4545c00 session 0x55a1f24fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:33.492876+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.287234306s of 10.170970917s, submitted: 277
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 447 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f196d6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 447 ms_handle_reset con 0x55a1f304f800 session 0x55a1f26fefc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3674839 data_alloc: 234881024 data_used: 23367898
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215031808 unmapped: 68386816 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:34.493028+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215080960 unmapped: 68337664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 447 handle_osd_map epochs [447,448], i have 447, src has [1,448]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 447 handle_osd_map epochs [448,448], i have 448, src has [1,448]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:35.493151+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 448 ms_handle_reset con 0x55a1f473c800 session 0x55a1f41cc380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 448 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f41e0380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215146496 unmapped: 68272128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4544000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 448 ms_handle_reset con 0x55a1f4544000 session 0x55a1f434afc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 448 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f422ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 448 handle_osd_map epochs [449,449], i have 448, src has [1,449]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 448 handle_osd_map epochs [448,449], i have 449, src has [1,449]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:36.493321+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 449 ms_handle_reset con 0x55a1f3fd8400 session 0x55a1f3e30540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 449 ms_handle_reset con 0x55a1f473c800 session 0x55a1f2604000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 449 heartbeat osd_stat(store_statfs(0x4f21bb000/0x0/0x4ffc00000, data 0x53da007/0x564d000, compress 0x0/0x0/0x0, omap 0x8d3c2, meta 0x8362c3e), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215326720 unmapped: 68091904 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 449 handle_osd_map epochs [450,450], i have 449, src has [1,450]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:37.493443+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 450 ms_handle_reset con 0x55a1f304f800 session 0x55a1f42668c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 450 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f43a6fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215334912 unmapped: 68083712 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:38.493589+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 450 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41d1dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 450 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f904c380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3675859 data_alloc: 234881024 data_used: 23367686
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215351296 unmapped: 68067328 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:39.493713+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215351296 unmapped: 68067328 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 450 ms_handle_reset con 0x55a1f304f800 session 0x55a1f4361340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:40.493861+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 450 handle_osd_map epochs [450,451], i have 450, src has [1,451]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215351296 unmapped: 68067328 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:41.493992+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473c800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 451 ms_handle_reset con 0x55a1f3fd8400 session 0x55a1f614ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 451 ms_handle_reset con 0x55a1f473c800 session 0x55a1f267ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 451 heartbeat osd_stat(store_statfs(0x4f21b9000/0x0/0x4ffc00000, data 0x53dd671/0x5651000, compress 0x0/0x0/0x0, omap 0x8e3cd, meta 0x8361c33), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 451 handle_osd_map epochs [452,452], i have 451, src has [1,452]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 451 handle_osd_map epochs [452,452], i have 452, src has [1,452]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215359488 unmapped: 68059136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:42.494157+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 452 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4013c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 452 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4230380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215359488 unmapped: 68059136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:43.494322+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.686581612s of 10.039501190s, submitted: 219
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 452 ms_handle_reset con 0x55a1f304f800 session 0x55a1f4230540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3687215 data_alloc: 234881024 data_used: 23367686
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215359488 unmapped: 68059136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:44.494458+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 452 handle_osd_map epochs [452,453], i have 452, src has [1,453]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 453 ms_handle_reset con 0x55a1f3fd8400 session 0x55a1f4012700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f187b400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 453 ms_handle_reset con 0x55a1f187b400 session 0x55a1f614e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215392256 unmapped: 68026368 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:45.494631+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 453 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f3bc9180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 453 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41ccfc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 453 handle_osd_map epochs [454,454], i have 453, src has [1,454]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 453 handle_osd_map epochs [453,454], i have 454, src has [1,454]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 454 ms_handle_reset con 0x55a1f4733400 session 0x55a1efddf180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215408640 unmapped: 68009984 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:46.494848+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 454 heartbeat osd_stat(store_statfs(0x4f21b1000/0x0/0x4ffc00000, data 0x53e0ec5/0x5659000, compress 0x0/0x0/0x0, omap 0x8e86b, meta 0x8361795), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 454 ms_handle_reset con 0x55a1f304f800 session 0x55a1f614ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fd8400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 454 ms_handle_reset con 0x55a1f3fd8400 session 0x55a1f45f3dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215408640 unmapped: 68009984 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:47.494985+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 454 handle_osd_map epochs [455,455], i have 454, src has [1,455]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 455 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f267efc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 455 ms_handle_reset con 0x55a1f304f800 session 0x55a1f4220fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215457792 unmapped: 67960832 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 455 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f434aa80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:48.495156+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 455 handle_osd_map epochs [456,456], i have 455, src has [1,456]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701743 data_alloc: 234881024 data_used: 23369013
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215465984 unmapped: 67952640 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f16e6000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 456 ms_handle_reset con 0x55a1f4733400 session 0x55a1f4308540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 456 ms_handle_reset con 0x55a1f16e6000 session 0x55a1f4230e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:49.495341+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 456 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f26048c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 456 handle_osd_map epochs [457,457], i have 456, src has [1,457]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 457 ms_handle_reset con 0x55a1f304f800 session 0x55a1f4320700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215465984 unmapped: 67952640 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 457 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f19c76c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:50.495553+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473d800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 457 ms_handle_reset con 0x55a1f473d800 session 0x55a1f4220700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 457 heartbeat osd_stat(store_statfs(0x4f21a7000/0x0/0x4ffc00000, data 0x53e7f14/0x5665000, compress 0x0/0x0/0x0, omap 0x90115, meta 0x835feeb), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4737000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 457 handle_osd_map epochs [458,458], i have 457, src has [1,458]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 ms_handle_reset con 0x55a1f4737000 session 0x55a1f3e301c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215498752 unmapped: 67919872 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 ms_handle_reset con 0x55a1f4733400 session 0x55a1f4309180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 heartbeat osd_stat(store_statfs(0x4f21a7000/0x0/0x4ffc00000, data 0x53e7f14/0x5665000, compress 0x0/0x0/0x0, omap 0x90115, meta 0x835feeb), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:51.495691+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215498752 unmapped: 67919872 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f3bc8c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:52.496258+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215515136 unmapped: 67903488 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:53.496431+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3704384 data_alloc: 234881024 data_used: 23370126
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215515136 unmapped: 67903488 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:54.496564+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.103281021s of 10.606328964s, submitted: 182
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f422f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 ms_handle_reset con 0x55a1f304f800 session 0x55a1f43a76c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215515136 unmapped: 67903488 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 heartbeat osd_stat(store_statfs(0x4f21a6000/0x0/0x4ffc00000, data 0x53e9a0a/0x5666000, compress 0x0/0x0/0x0, omap 0x9070b, meta 0x835f8f5), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:55.496720+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4737000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 458 handle_osd_map epochs [459,459], i have 458, src has [1,459]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215531520 unmapped: 67887104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:56.496814+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 459 heartbeat osd_stat(store_statfs(0x4f21a5000/0x0/0x4ffc00000, data 0x53e9a1a/0x5667000, compress 0x0/0x0/0x0, omap 0x909ae, meta 0x835f652), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473d800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f386e000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 459 handle_osd_map epochs [460,460], i have 459, src has [1,460]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 460 ms_handle_reset con 0x55a1f386e000 session 0x55a1f422e700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 460 ms_handle_reset con 0x55a1f4737000 session 0x55a1f41e1500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215678976 unmapped: 67739648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:57.496976+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 460 handle_osd_map epochs [461,461], i have 460, src has [1,461]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 461 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f4221340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215695360 unmapped: 67723264 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:58.497131+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 461 handle_osd_map epochs [462,462], i have 461, src has [1,462]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 462 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f41cd340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f304f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728382 data_alloc: 234881024 data_used: 23370430
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215695360 unmapped: 67723264 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 462 ms_handle_reset con 0x55a1f473d800 session 0x55a1f267fa40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:24:59.497262+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 462 ms_handle_reset con 0x55a1f304f800 session 0x55a1f186ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215711744 unmapped: 67706880 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:00.497397+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f2197000/0x0/0x4ffc00000, data 0x53f08a1/0x5675000, compress 0x0/0x0/0x0, omap 0x9199f, meta 0x835e661), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 462 heartbeat osd_stat(store_statfs(0x4f2197000/0x0/0x4ffc00000, data 0x53f08a1/0x5675000, compress 0x0/0x0/0x0, omap 0x9199f, meta 0x835e661), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215711744 unmapped: 67706880 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:01.497560+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 462 handle_osd_map epochs [462,463], i have 462, src has [1,463]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 462 handle_osd_map epochs [463,463], i have 463, src has [1,463]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215711744 unmapped: 67706880 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:02.497681+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 463 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f196d6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4737000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215719936 unmapped: 67698688 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:03.497855+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 463 heartbeat osd_stat(store_statfs(0x4f2195000/0x0/0x4ffc00000, data 0x53f22be/0x5677000, compress 0x0/0x0/0x0, omap 0x91cf6, meta 0x835e30a), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3728734 data_alloc: 234881024 data_used: 23370702
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215719936 unmapped: 67698688 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:04.498007+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 6.983101368s of 10.034818649s, submitted: 69
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 463 ms_handle_reset con 0x55a1f4737000 session 0x55a1f26fefc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 463 handle_osd_map epochs [464,464], i have 463, src has [1,464]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215719936 unmapped: 67698688 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:05.498133+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 464 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4361c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215744512 unmapped: 67674112 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:06.498286+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473d800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 464 ms_handle_reset con 0x55a1f473d800 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4731c00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 464 ms_handle_reset con 0x55a1f4731c00 session 0x55a1f40136c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215752704 unmapped: 67665920 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:07.498404+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 464 heartbeat osd_stat(store_statfs(0x4f2192000/0x0/0x4ffc00000, data 0x53f3eae/0x567a000, compress 0x0/0x0/0x0, omap 0x9238c, meta 0x835dc74), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 464 heartbeat osd_stat(store_statfs(0x4f2193000/0x0/0x4ffc00000, data 0x53f3e4c/0x5679000, compress 0x0/0x0/0x0, omap 0x9254e, meta 0x835dab2), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 464 handle_osd_map epochs [465,465], i have 464, src has [1,465]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 465 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f19c76c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 465 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f18aea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4737000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215785472 unmapped: 67633152 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:08.498529+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 465 ms_handle_reset con 0x55a1f4737000 session 0x55a1f20ad180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 465 ms_handle_reset con 0x55a1f4733400 session 0x55a1f18aec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730289 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:09.498681+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 465 heartbeat osd_stat(store_statfs(0x4f2192000/0x0/0x4ffc00000, data 0x53f5a38/0x567a000, compress 0x0/0x0/0x0, omap 0x92777, meta 0x835d889), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:10.498822+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:11.498957+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f473d800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:12.499103+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 465 ms_handle_reset con 0x55a1f473d800 session 0x55a1f41e0540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 465 handle_osd_map epochs [466,466], i have 465, src has [1,466]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:13.499264+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3735535 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:14.499370+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 466 handle_osd_map epochs [467,467], i have 466, src has [1,467]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.370454788s of 10.347215652s, submitted: 135
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 467 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f196ce00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215826432 unmapped: 67592192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:15.499528+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 467 heartbeat osd_stat(store_statfs(0x4f2187000/0x0/0x4ffc00000, data 0x53f910d/0x5683000, compress 0x0/0x0/0x0, omap 0x92971, meta 0x835d68f), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215834624 unmapped: 67584000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:16.499676+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 467 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f18aefc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 467 ms_handle_reset con 0x55a1f4733400 session 0x55a1f434bc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 467 handle_osd_map epochs [468,468], i have 467, src has [1,468]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 467 handle_osd_map epochs [467,468], i have 468, src has [1,468]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 215842816 unmapped: 67575808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:17.499821+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4737000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 ms_handle_reset con 0x55a1f4737000 session 0x55a1f434ac40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 ms_handle_reset con 0x55a1f453f800 session 0x55a1f26fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:18.500114+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741552 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:19.500352+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:20.500505+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f2604c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:21.500659+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f2189000/0x0/0x4ffc00000, data 0x53fab0a/0x5683000, compress 0x0/0x0/0x0, omap 0x92ffc, meta 0x835d004), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:22.500877+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f2189000/0x0/0x4ffc00000, data 0x53fab0a/0x5683000, compress 0x0/0x0/0x0, omap 0x92ffc, meta 0x835d004), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f2189000/0x0/0x4ffc00000, data 0x53fab0a/0x5683000, compress 0x0/0x0/0x0, omap 0x92ffc, meta 0x835d004), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:23.501033+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f2189000/0x0/0x4ffc00000, data 0x53fab0a/0x5683000, compress 0x0/0x0/0x0, omap 0x92ffc, meta 0x835d004), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3741552 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:24.501303+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:25.501466+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f2189000/0x0/0x4ffc00000, data 0x53fab0a/0x5683000, compress 0x0/0x0/0x0, omap 0x92ffc, meta 0x835d004), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:26.501598+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216956928 unmapped: 66461696 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:27.501729+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.534582138s of 12.793689728s, submitted: 141
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f20ac700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216965120 unmapped: 66453504 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:28.501920+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3745084 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216973312 unmapped: 66445312 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 heartbeat osd_stat(store_statfs(0x4f2188000/0x0/0x4ffc00000, data 0x53fab1a/0x5684000, compress 0x0/0x0/0x0, omap 0x9329f, meta 0x835cd61), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:29.502172+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 handle_osd_map epochs [469,469], i have 468, src has [1,469]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 468 handle_osd_map epochs [468,469], i have 469, src has [1,469]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 469 ms_handle_reset con 0x55a1f453f800 session 0x55a1f4267880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216981504 unmapped: 66437120 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:30.502309+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216981504 unmapped: 66437120 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:31.502505+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:32.502724+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 216989696 unmapped: 66428928 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 469 ms_handle_reset con 0x55a1f4733400 session 0x55a1f267e1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4737000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 469 handle_osd_map epochs [470,470], i have 469, src has [1,470]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 470 ms_handle_reset con 0x55a1f4737000 session 0x55a1f196ddc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 470 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f614ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 470 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f18ae380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:33.502916+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217047040 unmapped: 66371584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3749769 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:34.503125+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217047040 unmapped: 66371584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 470 heartbeat osd_stat(store_statfs(0x4f2181000/0x0/0x4ffc00000, data 0x53fe296/0x5689000, compress 0x0/0x0/0x0, omap 0x93a77, meta 0x835c589), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 470 ms_handle_reset con 0x55a1f453f800 session 0x55a1f3bc8c40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:35.503273+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217047040 unmapped: 66371584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:36.503430+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217047040 unmapped: 66371584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 470 handle_osd_map epochs [471,471], i have 470, src has [1,471]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 471 ms_handle_reset con 0x55a1f4733400 session 0x55a1f26048c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:37.503581+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217047040 unmapped: 66371584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fb2800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 471 handle_osd_map epochs [472,472], i have 471, src has [1,472]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.520827293s of 10.221014023s, submitted: 59
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 472 ms_handle_reset con 0x55a1f3fb2800 session 0x55a1f19c7180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:38.503732+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217047040 unmapped: 66371584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 472 heartbeat osd_stat(store_statfs(0x4f217d000/0x0/0x4ffc00000, data 0x53ffeb0/0x568d000, compress 0x0/0x0/0x0, omap 0x93b75, meta 0x835c48b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 472 handle_osd_map epochs [472,473], i have 472, src has [1,473]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760017 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:39.503999+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 66363392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:40.504149+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 66363392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 473 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f20adc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:41.504338+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 66363392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:42.504565+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 66363392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:43.504751+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 473 heartbeat osd_stat(store_statfs(0x4f2178000/0x0/0x4ffc00000, data 0x54034bd/0x5692000, compress 0x0/0x0/0x0, omap 0x94304, meta 0x835bcfc), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 66363392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758091 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:44.504887+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 66363392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:45.505136+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217055232 unmapped: 66363392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 473 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f24fe8c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:46.505273+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217063424 unmapped: 66355200 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:47.505462+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217063424 unmapped: 66355200 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:48.505627+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217063424 unmapped: 66355200 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 473 handle_osd_map epochs [474,474], i have 473, src has [1,474]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.171836853s of 10.790860176s, submitted: 33
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 473 heartbeat osd_stat(store_statfs(0x4f217a000/0x0/0x4ffc00000, data 0x54034bd/0x5692000, compress 0x0/0x0/0x0, omap 0x94304, meta 0x835bcfc), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3760865 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:49.505793+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217063424 unmapped: 66355200 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:50.505945+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217063424 unmapped: 66355200 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 474 ms_handle_reset con 0x55a1f453f800 session 0x55a1f18ae540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:51.506109+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217096192 unmapped: 66322432 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:52.506279+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217104384 unmapped: 66314240 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 474 handle_osd_map epochs [475,475], i have 474, src has [1,475]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:53.506426+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 475 ms_handle_reset con 0x55a1f4733400 session 0x55a1f41e1500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 475 heartbeat osd_stat(store_statfs(0x4f2175000/0x0/0x4ffc00000, data 0x5404fae/0x5697000, compress 0x0/0x0/0x0, omap 0x94403, meta 0x835bbfd), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217120768 unmapped: 66297856 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3767112 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:54.506578+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217120768 unmapped: 66297856 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 475 ms_handle_reset con 0x55a1f4733000 session 0x55a1f4308e00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:55.506759+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 475 handle_osd_map epochs [476,476], i have 475, src has [1,476]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 476 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f267e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217128960 unmapped: 66289664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 476 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f434b6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:56.506938+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 476 ms_handle_reset con 0x55a1f453f800 session 0x55a1f267f6c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217128960 unmapped: 66289664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:57.507111+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217128960 unmapped: 66289664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:58.507300+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217128960 unmapped: 66289664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 476 heartbeat osd_stat(store_statfs(0x4f216f000/0x0/0x4ffc00000, data 0x54086c8/0x569b000, compress 0x0/0x0/0x0, omap 0x94b97, meta 0x835b469), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:25:59.507511+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768366 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217128960 unmapped: 66289664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 476 ms_handle_reset con 0x55a1f4733400 session 0x55a1f434b880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:00.507704+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217128960 unmapped: 66289664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:01.507965+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 476 handle_osd_map epochs [477,477], i have 476, src has [1,477]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.177330971s of 12.815442085s, submitted: 59
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217137152 unmapped: 66281472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:02.508188+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217137152 unmapped: 66281472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 477 ms_handle_reset con 0x55a1f5411000 session 0x55a1f41cd180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 477 heartbeat osd_stat(store_statfs(0x4f216a000/0x0/0x4ffc00000, data 0x540a280/0x569e000, compress 0x0/0x0/0x0, omap 0x94c97, meta 0x835b369), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:03.508452+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217137152 unmapped: 66281472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 477 handle_osd_map epochs [478,478], i have 477, src has [1,478]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:04.508576+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3773914 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 478 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f196c1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217153536 unmapped: 66265088 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:05.508794+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217153536 unmapped: 66265088 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 478 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f43616c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 478 heartbeat osd_stat(store_statfs(0x4f2169000/0x0/0x4ffc00000, data 0x540bf1a/0x56a3000, compress 0x0/0x0/0x0, omap 0x94d97, meta 0x835b269), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:06.508977+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217153536 unmapped: 66265088 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:07.509117+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 478 handle_osd_map epochs [479,479], i have 478, src has [1,479]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 479 ms_handle_reset con 0x55a1f453f800 session 0x55a1f26fec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217153536 unmapped: 66265088 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 479 handle_osd_map epochs [479,480], i have 479, src has [1,480]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:08.509412+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217161728 unmapped: 66256896 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 480 ms_handle_reset con 0x55a1f4733400 session 0x55a1f614e540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f5411000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:09.509610+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3781999 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 480 handle_osd_map epochs [480,481], i have 480, src has [1,481]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f5411000 session 0x55a1f267f880
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217178112 unmapped: 66240512 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:10.509782+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f215e000/0x0/0x4ffc00000, data 0x5411141/0x56ac000, compress 0x0/0x0/0x0, omap 0x95632, meta 0x835a9ce), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217186304 unmapped: 66232320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:11.509935+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217186304 unmapped: 66232320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41cc380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.141374588s of 10.542802811s, submitted: 69
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f422ec40
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:12.510124+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217186304 unmapped: 66232320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f453f800 session 0x55a1f33befc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:13.510302+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217186304 unmapped: 66232320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:14.510464+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3786709 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f4733400 session 0x55a1f26fe380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217186304 unmapped: 66232320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f186fdc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 heartbeat osd_stat(store_statfs(0x4f2162000/0x0/0x4ffc00000, data 0x54110cf/0x56aa000, compress 0x0/0x0/0x0, omap 0x95632, meta 0x835a9ce), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:15.510664+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217219072 unmapped: 66199552 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41e08c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:16.510882+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217219072 unmapped: 66199552 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:17.511188+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217219072 unmapped: 66199552 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f4266fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 handle_osd_map epochs [482,482], i have 481, src has [1,482]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 481 handle_osd_map epochs [481,482], i have 482, src has [1,482]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:18.511403+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f3bc88c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217235456 unmapped: 66183168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:19.511557+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3789877 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217235456 unmapped: 66183168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:20.511768+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217243648 unmapped: 66174976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 ms_handle_reset con 0x55a1f453f800 session 0x55a1f4361340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:21.512010+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217243648 unmapped: 66174976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215f000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4733400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.921978951s of 10.329168320s, submitted: 70
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 ms_handle_reset con 0x55a1f4733400 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:22.512246+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217243648 unmapped: 66174976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:23.512467+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f2604fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f422f180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:24.512635+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791142 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:25.512768+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:26.512932+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:27.513129+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:28.513299+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:29.513429+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791142 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:30.513623+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:31.513788+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:32.786703+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:33.786929+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791142 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:34.787132+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:35.787364+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:36.787598+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:37.787822+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:38.787971+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791142 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:39.788170+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:40.788370+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:41.788653+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:42.788931+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:43.789170+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217260032 unmapped: 66158592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791142 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:44.790339+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:45.790526+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:46.790718+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:47.790834+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:48.790966+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791142 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:49.791108+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:50.791303+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:51.791449+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:52.792113+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:53.792957+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3791142 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:54.793701+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 heartbeat osd_stat(store_statfs(0x4f215d000/0x0/0x4ffc00000, data 0x5412b4e/0x56ad000, compress 0x0/0x0/0x0, omap 0x95ccf, meta 0x835a331), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:55.794261+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:56.794507+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:57.794713+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:58.794873+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 217268224 unmapped: 66150400 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 482 handle_osd_map epochs [483,483], i have 482, src has [1,483]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 36.867851257s of 36.899627686s, submitted: 15
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 483 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f186ea80
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3794391 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:26:59.795174+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218324992 unmapped: 65093632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453f800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 483 heartbeat osd_stat(store_statfs(0x4f2159000/0x0/0x4ffc00000, data 0x541474c/0x56b1000, compress 0x0/0x0/0x0, omap 0x95dd1, meta 0x835a22f), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 483 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 483 ms_handle_reset con 0x55a1f453f800 session 0x55a1f41cd340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:00.795346+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218333184 unmapped: 65085440 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 483 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f904c540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:01.795521+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218333184 unmapped: 65085440 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:02.795654+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218333184 unmapped: 65085440 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f228ac00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 483 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f4267dc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 483 handle_osd_map epochs [484,484], i have 483, src has [1,484]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 484 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f41cd500
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 484 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f26fee00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 484 ms_handle_reset con 0x55a1f228ac00 session 0x55a1f18ae1c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 484 heartbeat osd_stat(store_statfs(0x4f2156000/0x0/0x4ffc00000, data 0x541633c/0x56b4000, compress 0x0/0x0/0x0, omap 0x95ed3, meta 0x835a12d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:03.795886+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218365952 unmapped: 65052672 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 484 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41d1180
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3797371 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:04.796009+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218365952 unmapped: 65052672 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:05.796172+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218365952 unmapped: 65052672 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:06.796357+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218365952 unmapped: 65052672 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:07.796529+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218365952 unmapped: 65052672 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 484 handle_osd_map epochs [485,485], i have 484, src has [1,485]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 484 handle_osd_map epochs [484,485], i have 485, src has [1,485]
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2157000/0x0/0x4ffc00000, data 0x54162da/0x56b3000, compress 0x0/0x0/0x0, omap 0x95ed3, meta 0x835a12d), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:08.796691+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218382336 unmapped: 65036288 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3800145 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:09.796815+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218382336 unmapped: 65036288 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.995301247s of 11.108914375s, submitted: 51
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:10.796988+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:11.797391+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f3b02540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2155000/0x0/0x4ffc00000, data 0x5417dbc/0x56b7000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:12.797543+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:13.797705+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:14.797870+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3801722 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:15.798122+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:16.798391+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:17.798617+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2155000/0x0/0x4ffc00000, data 0x5417dbc/0x56b7000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2155000/0x0/0x4ffc00000, data 0x5417dbc/0x56b7000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:18.798819+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218390528 unmapped: 65028096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:19.798933+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3801722 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2155000/0x0/0x4ffc00000, data 0x5417dbc/0x56b7000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218398720 unmapped: 65019904 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:20.799150+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218398720 unmapped: 65019904 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:21.799347+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218398720 unmapped: 65019904 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:22.799524+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218398720 unmapped: 65019904 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:23.799724+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.617762566s of 13.626934052s, submitted: 4
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218398720 unmapped: 65019904 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2155000/0x0/0x4ffc00000, data 0x5417dbc/0x56b7000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f4320380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:24.799951+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3844030 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 222601216 unmapped: 60817408 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:25.800171+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218415104 unmapped: 65003520 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f43a6380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:26.800359+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0555000/0x0/0x4ffc00000, data 0x7017d59/0x72b6000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218431488 unmapped: 64987136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:27.800540+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218431488 unmapped: 64987136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:28.800720+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218431488 unmapped: 64987136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:29.800971+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4005526 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218431488 unmapped: 64987136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d59/0x7caf000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:30.801163+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:31.801337+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:32.801494+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:33.801673+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:34.801813+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4005526 data_alloc: 234881024 data_used: 23371283
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:35.801951+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d59/0x7caf000, compress 0x0/0x0/0x0, omap 0x95fd5, meta 0x835a02b), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:36.802129+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:37.802282+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f45ccc00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f45ccc00 session 0x55a1f4230fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218259456 unmapped: 65159168 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:38.802482+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f41d0000
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218267648 unmapped: 65150976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f4266540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.344336510s of 15.637101173s, submitted: 24
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:39.802651+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f4320380
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4007224 data_alloc: 234881024 data_used: 23375280
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218267648 unmapped: 65150976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:40.802779+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218267648 unmapped: 65150976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:41.802899+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218267648 unmapped: 65150976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d68/0x7cb0000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:42.803031+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d68/0x7cb0000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218267648 unmapped: 65150976 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:43.803252+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 218275840 unmapped: 65142784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:44.803705+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4087992 data_alloc: 251658240 data_used: 37042516
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 52977664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:45.803836+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 52977664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:46.803981+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 52977664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:47.804118+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d68/0x7cb0000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 52977664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:48.804290+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 52977664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:49.804513+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4087992 data_alloc: 251658240 data_used: 37042516
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230440960 unmapped: 52977664 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:50.804722+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230449152 unmapped: 52969472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d68/0x7cb0000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:51.804862+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 52936704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d68/0x7cb0000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:52.805009+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 52936704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:53.805205+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 230481920 unmapped: 52936704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d68/0x7cb0000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:54.805367+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4092472 data_alloc: 251658240 data_used: 38066004
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 231014400 unmapped: 52404224 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.874040604s of 15.905458450s, submitted: 2
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:55.805506+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb5c000/0x0/0x4ffc00000, data 0x7a10d68/0x7cb0000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 239517696 unmapped: 43900928 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:56.805659+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 239534080 unmapped: 43884544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:57.805805+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 46751744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:58.805971+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 46751744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:27:59.806193+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4095204 data_alloc: 251658240 data_used: 37801812
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236666880 unmapped: 46751744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:00.806357+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4efb57000/0x0/0x4ffc00000, data 0x7a15d68/0x7cb5000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [0,0,0,0,0,0,0,0,0,5])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236978176 unmapped: 46440448 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:01.806519+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236978176 unmapped: 46440448 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:02.806655+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 239337472 unmapped: 44081152 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:03.806859+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238354432 unmapped: 45064192 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:04.806982+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4123180 data_alloc: 251658240 data_used: 37780820
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238526464 unmapped: 44892160 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c4000/0x0/0x4ffc00000, data 0x7fa8d68/0x8248000, compress 0x0/0x0/0x0, omap 0x9606b, meta 0x8359f95), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:05.807103+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 7.656922817s of 10.446869850s, submitted: 79
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238551040 unmapped: 44867584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f41cd340
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f4266fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:06.807254+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f26056c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:07.807398+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:08.807537+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:09.807757+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4129327 data_alloc: 251658240 data_used: 37756244
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:10.807926+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:11.808141+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:12.808277+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:13.808426+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:14.808556+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4129327 data_alloc: 251658240 data_used: 37756244
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:15.808788+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:16.808972+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:17.809114+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:18.809281+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:19.809401+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4129327 data_alloc: 251658240 data_used: 37756244
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:20.809529+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:21.809666+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:22.809860+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:23.810076+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:24.810226+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4129327 data_alloc: 251658240 data_used: 37756244
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:25.810365+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:26.810532+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f3b02540
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:27.810677+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f2734700
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:28.810790+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96101, meta 0x8359eff), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f43208c0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdf400
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.285558701s of 23.295845032s, submitted: 12
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdf400 session 0x55a1f2604fc0
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:29.810945+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4129327 data_alloc: 251658240 data_used: 37756244
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:30.811150+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:31.811337+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:32.811473+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:33.811651+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:34.811861+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4130223 data_alloc: 251658240 data_used: 37965652
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:35.812096+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:36.812248+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:37.812387+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:38.812626+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:39.812767+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4130223 data_alloc: 251658240 data_used: 37965652
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:40.812905+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:41.813079+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237035520 unmapped: 46383104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:42.813191+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.068505287s of 14.072545052s, submitted: 1
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237068288 unmapped: 46350336 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:43.813341+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:44.813469+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4140015 data_alloc: 251658240 data_used: 38981460
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:45.813690+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:46.813865+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:47.814128+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:48.814328+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:49.814488+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:07 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:07 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4140015 data_alloc: 251658240 data_used: 38981460
Dec 13 04:36:07 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:50.814666+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:51.814854+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:07 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:52.815004+0000)
Dec 13 04:36:07 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:07 compute-0 nova_compute[243704]: 2025-12-13 04:36:07.998 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:53.815217+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:54.815381+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4140015 data_alloc: 251658240 data_used: 38981460
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:55.815522+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:56.815687+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:57.815882+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:58.816060+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.442712784s of 15.451278687s, submitted: 12
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:28:59.816248+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4139263 data_alloc: 251658240 data_used: 38977364
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:00.816433+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:01.816666+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:02.816860+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:03.817186+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:04.817323+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4139263 data_alloc: 251658240 data_used: 38977364
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:05.817527+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:06.817725+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x96197, meta 0x8359e69), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237305856 unmapped: 46112768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f26fe380
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:07.817943+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f19c6540
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237314048 unmapped: 46104576 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets getting new tickets!
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:08.818277+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _finish_auth 0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:08.819682+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237322240 unmapped: 46096384 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:09.818452+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4139775 data_alloc: 251658240 data_used: 39083860
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237322240 unmapped: 46096384 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:10.818620+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237322240 unmapped: 46096384 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:11.818812+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef5c5000/0x0/0x4ffc00000, data 0x7fa8d59/0x8247000, compress 0x0/0x0/0x0, omap 0x9622d, meta 0x8359dd3), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 237322240 unmapped: 46096384 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:12.818970+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.217793465s of 14.228515625s, submitted: 6
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f43ba000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2156000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9622d, meta 0x8359dd3), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232939520 unmapped: 50479104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:13.819197+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232939520 unmapped: 50479104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:14.819406+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819607 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232939520 unmapped: 50479104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:15.819639+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2156000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9622d, meta 0x8359dd3), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232939520 unmapped: 50479104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:16.819836+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232939520 unmapped: 50479104 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:17.820014+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: mgrc ms_handle_reset ms_handle_reset con 0x55a1f3fd8000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3514601685
Dec 13 04:36:08 compute-0 ceph-osd[85653]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3514601685,v1:192.168.122.100:6801/3514601685]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: get_auth_request con 0x55a1f3fdf400 auth_method 0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: mgrc handle_mgr_configure stats_period=5
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232996864 unmapped: 50421760 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:18.820265+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232996864 unmapped: 50421760 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:19.820490+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819607 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232996864 unmapped: 50421760 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:20.820830+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232996864 unmapped: 50421760 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:21.821014+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f3bc88c0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2156000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9622d, meta 0x8359dd3), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232996864 unmapped: 50421760 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:22.821243+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2156000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9622d, meta 0x8359dd3), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232980480 unmapped: 50438144 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:23.821468+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232980480 unmapped: 50438144 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:24.821673+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3819607 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232980480 unmapped: 50438144 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:25.821817+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1843400 session 0x55a1f43bae00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232980480 unmapped: 50438144 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:26.822008+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 232980480 unmapped: 50438144 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:27.822187+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.170955658s of 15.208060265s, submitted: 21
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:28.822371+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:29.822604+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f0f7a800 session 0x55a1f43616c0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826591 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:30.822834+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:31.823017+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:32.823209+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:33.823393+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:34.823570+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826591 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:35.823777+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:36.824014+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:37.824294+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:38.824499+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:39.824703+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826591 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:40.824893+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:41.825143+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:42.825339+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233242624 unmapped: 50176000 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:43.825534+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.703049660s of 15.723021507s, submitted: 10
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f41cc380
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f43a7340
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f2115000/0x0/0x4ffc00000, data 0x5457dbc/0x56f7000, compress 0x0/0x0/0x0, omap 0x964d0, meta 0x8359b30), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 250028032 unmapped: 33390592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:44.825678+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f20ad340
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050185 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:45.825855+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:46.826126+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96773, meta 0x835988d), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:47.826300+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:48.826493+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96773, meta 0x835988d), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96773, meta 0x835988d), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:49.826654+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050185 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:50.826805+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96773, meta 0x835988d), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:51.826994+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:52.827110+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96773, meta 0x835988d), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:53.827301+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:54.827475+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050185 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:55.827661+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96773, meta 0x835988d), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.863139153s of 12.342876434s, submitted: 42
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1842400 session 0x55a1f40136c0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:56.827837+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:57.828017+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:58.828218+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:29:59.828370+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x83597f7), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 4050185 data_alloc: 234881024 data_used: 23362900
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:00.828592+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 233250816 unmapped: 50167808 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:01.828746+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236642304 unmapped: 46776320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:02.834684+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236642304 unmapped: 46776320 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:03.834893+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 46907392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x83597f7), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:04.835387+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 46907392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4141833 data_alloc: 251658240 data_used: 35584852
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:05.835866+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 46907392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:06.837707+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 46907392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:07.838805+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236511232 unmapped: 46907392 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f413e000 session 0x55a1f422fc00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f0f7a800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ef75c000/0x0/0x4ffc00000, data 0x7e10d59/0x80af000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x83597f7), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:08.839389+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 46874624 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:09.840235+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 236544000 unmapped: 46874624 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4142473 data_alloc: 251658240 data_used: 35605332
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.384667397s of 14.390831947s, submitted: 1
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:10.840379+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248373248 unmapped: 35045376 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:11.841173+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246185984 unmapped: 37232640 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4ee588000/0x0/0x4ffc00000, data 0x7e15d59/0x80b4000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x94f97f7), peers [1,2] op hist [0,0,0,1])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:12.841302+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:13.841961+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:14.842591+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4226593 data_alloc: 251658240 data_used: 37193969
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:15.843107+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eda8a000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x94f97f7), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:16.843514+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247422976 unmapped: 35995648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:17.843667+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x94f97f7), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247422976 unmapped: 35995648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:18.843896+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247422976 unmapped: 35995648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:19.844109+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247422976 unmapped: 35995648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x94f97f7), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4217825 data_alloc: 251658240 data_used: 37193969
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:20.844642+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247422976 unmapped: 35995648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:21.844897+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247422976 unmapped: 35995648 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.960683823s of 11.389798164s, submitted: 138
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x96809, meta 0x94f97f7), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1842400 session 0x55a1f41cc540
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:22.845033+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f26fe540
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:23.845471+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:24.845641+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4219569 data_alloc: 251658240 data_used: 37189873
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:25.845854+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:26.846076+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:27.846263+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x9689f, meta 0x94f9761), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:28.846463+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:29.846664+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4219569 data_alloc: 251658240 data_used: 37189873
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:30.846864+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:31.847131+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x9689f, meta 0x94f9761), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:32.847323+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:33.847695+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:34.847838+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4219569 data_alloc: 251658240 data_used: 37189873
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:35.848010+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x9689f, meta 0x94f9761), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:36.848134+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:37.848323+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f43ba380
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x9689f, meta 0x94f9761), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:38.848549+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f26fec40
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:39.848724+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x9689f, meta 0x94f9761), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248561664 unmapped: 34856960 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1843800 session 0x55a1f42676c0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.112161636s of 18.128089905s, submitted: 13
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1843800 session 0x55a1f39d2700
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4221276 data_alloc: 251658240 data_used: 37189873
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:40.848848+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248569856 unmapped: 34848768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:41.848974+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248569856 unmapped: 34848768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:42.849162+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248569856 unmapped: 34848768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:43.849347+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248578048 unmapped: 34840576 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:44.849539+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4221788 data_alloc: 251658240 data_used: 37644017
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:45.849691+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:46.849834+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:47.850115+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:48.850281+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:49.850467+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4221788 data_alloc: 251658240 data_used: 37644017
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:50.850600+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:51.850795+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:52.850933+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248791040 unmapped: 34627584 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:53.851139+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.759268761s of 13.775767326s, submitted: 8
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248840192 unmapped: 34578432 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:54.851322+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248913920 unmapped: 34504704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4230556 data_alloc: 251658240 data_used: 38307057
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:55.851516+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248913920 unmapped: 34504704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:56.851687+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248913920 unmapped: 34504704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:57.851884+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248913920 unmapped: 34504704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:58.852119+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248913920 unmapped: 34504704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:30:59.852277+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248913920 unmapped: 34504704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:00.852463+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4230540 data_alloc: 251658240 data_used: 38303985
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249036800 unmapped: 34381824 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:01.852680+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249036800 unmapped: 34381824 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:02.852886+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249036800 unmapped: 34381824 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:03.853138+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249044992 unmapped: 34373632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:04.853381+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249044992 unmapped: 34373632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:05.853595+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4230540 data_alloc: 251658240 data_used: 38303985
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249044992 unmapped: 34373632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.503009796s of 12.163004875s, submitted: 22
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:06.853838+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249044992 unmapped: 34373632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:07.854112+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249044992 unmapped: 34373632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:08.854271+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249044992 unmapped: 34373632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:09.854573+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:10.854965+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4230236 data_alloc: 251658240 data_used: 38299889
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:11.855405+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:12.855733+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:13.856059+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:14.856255+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:15.856455+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4229228 data_alloc: 251658240 data_used: 38299889
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:16.856745+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:17.856996+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:18.857224+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:19.857352+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:20.857507+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4229228 data_alloc: 251658240 data_used: 38299889
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249061376 unmapped: 34357248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.667620659s of 15.554781914s, submitted: 14
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4220fc0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:21.857788+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f39d36c0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f732e700
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edae9000/0x0/0x4ffc00000, data 0x88e3d69/0x8b83000, compress 0x0/0x0/0x0, omap 0x96c9f, meta 0x94f9361), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:22.857987+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:23.858280+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:24.858504+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:25.858644+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4229635 data_alloc: 251658240 data_used: 38512881
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:26.858792+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:27.858987+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:28.859202+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4edaea000/0x0/0x4ffc00000, data 0x88e3d59/0x8b82000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1fbdd6700
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:29.859425+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 239665152 unmapped: 43753472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:30.859581+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 239665152 unmapped: 43753472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:31.859770+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 239665152 unmapped: 43753472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:32.859959+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:33.860313+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:34.860470+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:35.860681+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:36.860842+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:37.861006+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:38.861131+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:39.861413+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:40.861586+0000)
Dec 13 04:36:08 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19322 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:41.861750+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:42.861914+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:43.862106+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:44.862305+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:45.862454+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:46.862615+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:47.862766+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:48.863004+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:49.863237+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:50.863349+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:51.863536+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:52.863744+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:53.864004+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:54.864896+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:55.865097+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:56.865239+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:57.865361+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:58.865514+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:31:59.865692+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:00.865802+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:01.865941+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:02.866136+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:03.866345+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:04.866496+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:05.866669+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:06.866882+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:07.867095+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:08.867235+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:09.867399+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:10.867565+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:11.867783+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:12.868001+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:13.868284+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:14.868665+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:15.868841+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:16.868995+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:17.869130+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:18.869323+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:19.869477+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f0fb6000/0x0/0x4ffc00000, data 0x5417d59/0x56b6000, compress 0x0/0x0/0x0, omap 0x9708d, meta 0x94f8f73), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:20.869657+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3843775 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238714880 unmapped: 44703744 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:21.869805+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f26ffdc0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4220000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1843800 session 0x55a1f19c7340
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f4321340
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 60.483623505s of 60.600273132s, submitted: 41
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 244547584 unmapped: 38871040 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9800 session 0x55a1f396ea80
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1842400 session 0x55a1f39d3c00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1843800 session 0x55a1f41d1500
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f978ac40
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1fbdd7880
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:22.869924+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:23.870133+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:24.870276+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:25.870461+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f453e000 session 0x55a1fbdd7c00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3909452 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f05f2000/0x0/0x4ffc00000, data 0x5dd9dcb/0x607a000, compress 0x0/0x0/0x0, omap 0x97123, meta 0x94f8edd), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f453e000 session 0x55a1fbdd7340
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:26.870597+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1842400 session 0x55a1f8e82540
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 ms_handle_reset con 0x55a1f1843800 session 0x55a1f732fa40
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:27.874107+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fa9400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:28.874271+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:29.874389+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:30.874526+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3967436 data_alloc: 251658240 data_used: 29489393
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f05f2000/0x0/0x4ffc00000, data 0x5dd9dcb/0x607a000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0x94f8e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:31.874686+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:32.874816+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f05f2000/0x0/0x4ffc00000, data 0x5dd9dcb/0x607a000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0x94f8e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:33.874969+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:34.875158+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4f05f2000/0x0/0x4ffc00000, data 0x5dd9dcb/0x607a000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0x94f8e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:35.875395+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3967436 data_alloc: 251658240 data_used: 29489393
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:36.875570+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:37.875711+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:38.875859+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 238510080 unmapped: 44908544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:39.876002+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.492284775s of 17.667293549s, submitted: 37
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246145024 unmapped: 37273600 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:40.876188+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4029130 data_alloc: 251658240 data_used: 29986033
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eeb39000/0x0/0x4ffc00000, data 0x66eadcb/0x698b000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0xa698e47), peers [1,2] op hist [0,0,1])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246808576 unmapped: 36610048 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:41.876382+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eeab4000/0x0/0x4ffc00000, data 0x676fdcb/0x6a10000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0xa698e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246808576 unmapped: 36610048 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:42.876539+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eeab4000/0x0/0x4ffc00000, data 0x676fdcb/0x6a10000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0xa698e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246808576 unmapped: 36610048 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:43.876729+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246808576 unmapped: 36610048 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:44.877005+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246808576 unmapped: 36610048 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:45.877252+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4040438 data_alloc: 251658240 data_used: 30305521
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:46.877396+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:47.877518+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eea9a000/0x0/0x4ffc00000, data 0x6791dcb/0x6a32000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0xa698e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eea9a000/0x0/0x4ffc00000, data 0x6791dcb/0x6a32000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0xa698e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:48.877710+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:49.877925+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:50.878108+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4034278 data_alloc: 251658240 data_used: 30309617
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:51.878267+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.390715599s of 12.716364861s, submitted: 111
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:52.878410+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eea95000/0x0/0x4ffc00000, data 0x6796dcb/0x6a37000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0xa698e47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245112832 unmapped: 38305792 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:53.878626+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 heartbeat osd_stat(store_statfs(0x4eea0a000/0x0/0x4ffc00000, data 0x6821dcb/0x6ac2000, compress 0x0/0x0/0x0, omap 0x971b9, meta 0xa698e47), peers [1,2] op hist [0,1])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245145600 unmapped: 38273024 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:54.878763+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245145600 unmapped: 38273024 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:55.878917+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 485 handle_osd_map epochs [486,486], i have 485, src has [1,486]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4050086 data_alloc: 251658240 data_used: 30403825
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245145600 unmapped: 38273024 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:56.879349+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 486 heartbeat osd_stat(store_statfs(0x4ee9f9000/0x0/0x4ffc00000, data 0x682f967/0x6ad1000, compress 0x0/0x0/0x0, omap 0x972b9, meta 0xa698d47), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245448704 unmapped: 37969920 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:57.879555+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 486 handle_osd_map epochs [487,487], i have 486, src has [1,487]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245563392 unmapped: 37855232 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:58.879732+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 245563392 unmapped: 37855232 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:32:59.879912+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:00.880139+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4078668 data_alloc: 251658240 data_used: 30420209
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 487 heartbeat osd_stat(store_statfs(0x4ee9ac000/0x0/0x4ffc00000, data 0x6981503/0x6b1a000, compress 0x0/0x0/0x0, omap 0x97b0c, meta 0xa6984f4), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 487 handle_osd_map epochs [488,488], i have 487, src has [1,488]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 487 handle_osd_map epochs [488,488], i have 488, src has [1,488]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:01.880321+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee9ac000/0x0/0x4ffc00000, data 0x6981503/0x6b1a000, compress 0x0/0x0/0x0, omap 0x97b0c, meta 0xa6984f4), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f196d180
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:02.880463+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:03.880639+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.233141899s of 12.032355309s, submitted: 57
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:04.880805+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee9a7000/0x0/0x4ffc00000, data 0x698309f/0x6b1d000, compress 0x0/0x0/0x0, omap 0x981c0, meta 0xa697e40), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:05.880964+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee9aa000/0x0/0x4ffc00000, data 0x698809f/0x6b22000, compress 0x0/0x0/0x0, omap 0x981c0, meta 0xa697e40), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4079146 data_alloc: 251658240 data_used: 30420209
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:06.881077+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246702080 unmapped: 36716544 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:07.881222+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246833152 unmapped: 36585472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:08.881367+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246833152 unmapped: 36585472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:09.881522+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f453e800 session 0x55a1f732f880
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246898688 unmapped: 36519936 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:10.881665+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee9a3000/0x0/0x4ffc00000, data 0x698f09f/0x6b29000, compress 0x0/0x0/0x0, omap 0x981c0, meta 0xa697e40), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4082170 data_alloc: 251658240 data_used: 31022321
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246898688 unmapped: 36519936 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:11.881797+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246898688 unmapped: 36519936 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:12.881949+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246898688 unmapped: 36519936 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:13.882112+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246898688 unmapped: 36519936 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:14.882260+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 246898688 unmapped: 36519936 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:15.882419+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4083314 data_alloc: 251658240 data_used: 31034609
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee9a1000/0x0/0x4ffc00000, data 0x699009f/0x6b2a000, compress 0x0/0x0/0x0, omap 0x981c0, meta 0xa697e40), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247357440 unmapped: 36061184 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:16.898288+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.018300056s of 12.086741447s, submitted: 7
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee996000/0x0/0x4ffc00000, data 0x699c09f/0x6b36000, compress 0x0/0x0/0x0, omap 0x981c0, meta 0xa697e40), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:17.898581+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247472128 unmapped: 35946496 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f1842400 session 0x55a1f4013880
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:18.899014+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247472128 unmapped: 35946496 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:19.899253+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247406592 unmapped: 36012032 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f1843800 session 0x55a1f4267a40
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee991000/0x0/0x4ffc00000, data 0x69a109f/0x6b3b000, compress 0x0/0x0/0x0, omap 0x98550, meta 0xa697ab0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:20.899417+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4088496 data_alloc: 251658240 data_used: 31730929
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:21.899580+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:22.899778+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee991000/0x0/0x4ffc00000, data 0x69a109f/0x6b3b000, compress 0x0/0x0/0x0, omap 0x98550, meta 0xa697ab0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:23.900187+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:24.900423+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247414784 unmapped: 36003840 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:25.900591+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248569856 unmapped: 34848768 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4098983 data_alloc: 251658240 data_used: 32320753
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f422fdc0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:26.900771+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248578048 unmapped: 34840576 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:27.900900+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248578048 unmapped: 34840576 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.892848015s of 11.054639816s, submitted: 29
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f453e000 session 0x55a1f978bc00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fabc00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f3fabc00 session 0x55a1f24ffc00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee98c000/0x0/0x4ffc00000, data 0x69a609f/0x6b40000, compress 0x0/0x0/0x0, omap 0x98382, meta 0xa697c7e), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:28.901032+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248037376 unmapped: 35381248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f1842400 session 0x55a1f45f2fc0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f1843800 session 0x55a1f43216c0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:29.901261+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248037376 unmapped: 35381248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:30.901398+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f4393c00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 248037376 unmapped: 35381248 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f453e000 session 0x55a1f24ffa40
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4090531 data_alloc: 251658240 data_used: 32320753
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 heartbeat osd_stat(store_statfs(0x4ee998000/0x0/0x4ffc00000, data 0x699a09f/0x6b34000, compress 0x0/0x0/0x0, omap 0x98382, meta 0xa697c7e), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:31.901565+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249085952 unmapped: 34332672 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f4000000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 ms_handle_reset con 0x55a1f4000000 session 0x55a1f4221880
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 488 handle_osd_map epochs [489,489], i have 488, src has [1,489]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:32.901749+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249085952 unmapped: 34332672 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1842400
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 489 ms_handle_reset con 0x55a1f1842400 session 0x55a1f24ff6c0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 489 handle_osd_map epochs [489,490], i have 489, src has [1,490]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:33.901912+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 249151488 unmapped: 34267136 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f1843800
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 490 ms_handle_reset con 0x55a1f1843800 session 0x55a1f4392a80
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 490 handle_osd_map epochs [491,491], i have 490, src has [1,491]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:34.902032+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247242752 unmapped: 36175872 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 491 heartbeat osd_stat(store_statfs(0x4ee9aa000/0x0/0x4ffc00000, data 0x687846f/0x6b20000, compress 0x0/0x0/0x0, omap 0x98df7, meta 0xa697209), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:35.902176+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247242752 unmapped: 36175872 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f3fdec00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 491 ms_handle_reset con 0x55a1f3fdec00 session 0x55a1f41e1dc0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 491 heartbeat osd_stat(store_statfs(0x4ee9aa000/0x0/0x4ffc00000, data 0x687846f/0x6b20000, compress 0x0/0x0/0x0, omap 0x98df7, meta 0xa697209), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4065139 data_alloc: 251658240 data_used: 30399729
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:36.902309+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 247242752 unmapped: 36175872 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 491 ms_handle_reset con 0x55a1f3fa9400 session 0x55a1f4220fc0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: handle_auth_request added challenge on 0x55a1f453e000
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:37.902453+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241713152 unmapped: 41705472 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.853893280s of 10.052185059s, submitted: 113
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 491 ms_handle_reset con 0x55a1f453e000 session 0x55a1f4360e00
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:38.902597+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 491 handle_osd_map epochs [492,492], i have 491, src has [1,492]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241770496 unmapped: 41648128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:39.902815+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241770496 unmapped: 41648128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:40.903004+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241770496 unmapped: 41648128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3891377 data_alloc: 234881024 data_used: 19684593
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:41.903162+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241770496 unmapped: 41648128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 492 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5423e98/0x56cb000, compress 0x0/0x0/0x0, omap 0x990c2, meta 0xa696f3e), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:42.903355+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241770496 unmapped: 41648128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:43.903842+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241770496 unmapped: 41648128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:44.903983+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241770496 unmapped: 41648128 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 492 handle_osd_map epochs [493,493], i have 492, src has [1,493]
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:45.904122+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5423e98/0x56cb000, compress 0x0/0x0/0x0, omap 0x990c2, meta 0xa696f3e), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:46.904249+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:47.904388+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:48.904509+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:49.904753+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:50.904926+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:51.905125+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:52.905302+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:53.905553+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:54.905751+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _renew_subs
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:55.905895+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:56.906112+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:57.906239+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:58.906431+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:33:59.906603+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:00.906799+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:01.906969+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:02.907150+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:03.907339+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:04.907510+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:05.907684+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:06.907852+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:07.908101+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 3000.1 total, 600.0 interval
                                           Cumulative writes: 33K writes, 123K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s
                                           Cumulative WAL: 33K writes, 12K syncs, 2.63 writes per sync, written: 0.09 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4327 writes, 12K keys, 4327 commit groups, 1.0 writes per commit group, ingest: 12.35 MB, 0.02 MB/s
                                           Interval WAL: 4327 writes, 1891 syncs, 2.29 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:08.908237+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:09.908702+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:10.908907+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:11.909144+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:12.909274+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:13.909463+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:14.909668+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:15.909827+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:16.910107+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:17.910354+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:18.910566+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:19.910758+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:20.910930+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:21.911100+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:22.911251+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:23.911501+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:24.911677+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:25.911833+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:26.911993+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:27.912186+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:28.912328+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:29.912497+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:30.912642+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:31.912748+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:32.912890+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:33.913143+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:34.913324+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:35.913476+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:36.913604+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:37.913774+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:38.913949+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:39.914133+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:40.914260+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:41.914451+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:42.915313+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:43.915488+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:44.915768+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:45.915913+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:46.916079+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:47.916252+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:48.916438+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:49.916789+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:50.916935+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:51.917109+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:52.917217+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:53.917362+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:54.917486+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:55.917625+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:56.917773+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:57.917909+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:58.918109+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:34:59.918335+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:00.918478+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:01.918683+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:02.918911+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:03.919153+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:04.919298+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:05.919450+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:06.919610+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:07.919773+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:08.919922+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:09.920109+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:10.920248+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3894087 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:11.920385+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfc000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:12.920523+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:13.920687+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:14.920831+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241827840 unmapped: 41590784 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:15.921024+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 98.030906677s of 98.047622681s, submitted: 27
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241836032 unmapped: 41582592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893367 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:16.921288+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241836032 unmapped: 41582592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:17.921448+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241836032 unmapped: 41582592 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:18.921609+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:19.921749+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:20.921878+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893367 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:21.922055+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:22.922188+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:23.922339+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:24.922488+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:25.922632+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893367 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:26.922768+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:27.922886+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:28.923120+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:29.923278+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:30.923924+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893367 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:31.924062+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:32.924276+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241876992 unmapped: 41541632 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:33.924487+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: osd.0 493 heartbeat osd_stat(store_statfs(0x4efdfe000/0x0/0x4ffc00000, data 0x5425917/0x56ce000, compress 0x0/0x0/0x0, omap 0x9bd20, meta 0xa6942e0), peers [1,2] op hist [])
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241909760 unmapped: 41508864 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:34.924626+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'config diff' '{prefix=config diff}'
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'config show' '{prefix=config show}'
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 242098176 unmapped: 41320448 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'counter dump' '{prefix=counter dump}'
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'counter schema' '{prefix=counter schema}'
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:35.924744+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241942528 unmapped: 41476096 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: tick
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_tickets
Dec 13 04:36:08 compute-0 ceph-osd[85653]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-12-13T04:35:36.924953+0000)
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Dec 13 04:36:08 compute-0 ceph-osd[85653]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Dec 13 04:36:08 compute-0 ceph-osd[85653]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3893367 data_alloc: 234881024 data_used: 19688591
Dec 13 04:36:08 compute-0 ceph-osd[85653]: prioritycache tune_memory target: 4294967296 mapped: 241745920 unmapped: 41672704 heap: 283418624 old mem: 2845415832 new mem: 2845415832
Dec 13 04:36:08 compute-0 ceph-osd[85653]: do_command 'log dump' '{prefix=log dump}'
Dec 13 04:36:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3483980473' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Dec 13 04:36:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2753953249' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Dec 13 04:36:08 compute-0 ceph-mon[75071]: from='client.19318 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:08 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/535808673' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Dec 13 04:36:08 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:08 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19324 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:08 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19326 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} v 0)
Dec 13 04:36:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:36:08 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19328 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:08 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19330 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:08 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} v 0)
Dec 13 04:36:08 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:36:09 compute-0 ceph-mon[75071]: from='client.19322 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:09 compute-0 ceph-mon[75071]: pgmap v2056: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:09 compute-0 ceph-mon[75071]: from='client.19324 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:09 compute-0 ceph-mon[75071]: from='client.19326 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:36:09 compute-0 ceph-mon[75071]: from='mgr.14122 192.168.122.100:0/2079870021' entity='mgr.compute-0.gsxkyu' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.gnpexe", "name": "rgw_frontends"} : dispatch
Dec 13 04:36:09 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19334 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:09 compute-0 nova_compute[243704]: 2025-12-13 04:36:09.496 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:09 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:09 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Dec 13 04:36:09 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2707282921' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 04:36:10 compute-0 ceph-mon[75071]: from='client.19328 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:10 compute-0 ceph-mon[75071]: from='client.19330 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:10 compute-0 ceph-mon[75071]: from='client.19334 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:10 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2707282921' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Dec 13 04:36:10 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:10 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19340 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:10 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Dec 13 04:36:10 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514241138' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 04:36:10 compute-0 systemd[1]: Starting Hostname Service...
Dec 13 04:36:10 compute-0 systemd[1]: Started Hostname Service.
Dec 13 04:36:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Dec 13 04:36:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/191663648' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: from='client.19338 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: pgmap v2057: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:11 compute-0 ceph-mon[75071]: from='client.19340 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1514241138' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/191663648' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Dec 13 04:36:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2367820988' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 04:36:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 04:36:11 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 04:36:12 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2367820988' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Dec 13 04:36:12 compute-0 ceph-mon[75071]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Dec 13 04:36:12 compute-0 ceph-mon[75071]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Dec 13 04:36:12 compute-0 ceph-mon[75071]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Dec 13 04:36:12 compute-0 ceph-mon[75071]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:12 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Dec 13 04:36:12 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3519577259' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] scanning for idle connections..
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: [volumes INFO mgr_util] cleaning up connections: []
Dec 13 04:36:12 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19358 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:12 compute-0 podman[289482]: 2025-12-13 04:36:12.911495706 +0000 UTC m=+0.061842685 container health_status e9e5ee0c1468d7b29e75f2fa6fc4d1c731dd8d3f93ae5cb5a0294875b08e7670 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Dec 13 04:36:13 compute-0 nova_compute[243704]: 2025-12-13 04:36:13.000 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:13 compute-0 ceph-mon[75071]: pgmap v2058: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:13 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3519577259' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Dec 13 04:36:13 compute-0 ceph-mon[75071]: from='client.19358 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Dec 13 04:36:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3939094926' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 04:36:13 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Dec 13 04:36:13 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069799019' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 04:36:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3939094926' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Dec 13 04:36:14 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/3069799019' entity='client.admin' cmd={"prefix": "df"} : dispatch
Dec 13 04:36:14 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Dec 13 04:36:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/798930251' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 04:36:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader).osd e493 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Dec 13 04:36:14 compute-0 nova_compute[243704]: 2025-12-13 04:36:14.549 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:14 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Dec 13 04:36:14 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356365683' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 04:36:14 compute-0 podman[289649]: 2025-12-13 04:36:14.910156408 +0000 UTC m=+0.096981603 container health_status b542f85f9a65ba28ae9070baa96a6bd69fe3bf64b8688cbd0bcbd95a0a25d562 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec 13 04:36:15 compute-0 ceph-mon[75071]: pgmap v2059: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/798930251' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Dec 13 04:36:15 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1356365683' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Dec 13 04:36:15 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19368 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:15 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Dec 13 04:36:15 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1026804204' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 04:36:16 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:16 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Dec 13 04:36:16 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174541255' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 04:36:16 compute-0 ceph-mon[75071]: from='client.19368 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:16 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/1026804204' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Dec 13 04:36:17 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19374 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:17 compute-0 ceph-mon[75071]: pgmap v2060: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:17 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/174541255' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Dec 13 04:36:18 compute-0 nova_compute[243704]: 2025-12-13 04:36:18.002 243708 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec 13 04:36:18 compute-0 ceph-mon[75071]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Dec 13 04:36:18 compute-0 ceph-mon[75071]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2606783292' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 04:36:18 compute-0 ceph-mgr[75360]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:18 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19378 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:18 compute-0 ceph-mgr[75360]: log_channel(audit) log [DBG] : from='client.19380 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:18 compute-0 ceph-mon[75071]: from='client.19374 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Dec 13 04:36:18 compute-0 ceph-mon[75071]: from='client.? 192.168.122.100:0/2606783292' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Dec 13 04:36:18 compute-0 ceph-mon[75071]: pgmap v2061: 305 pgs: 305 active+clean; 271 MiB data, 650 MiB used, 59 GiB / 60 GiB avail
Dec 13 04:36:18 compute-0 ceph-mon[75071]: from='client.19378 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
